faizshah 3 hours ago

The copy-paste programmer will always be worse than the programmer who builds a mental model of the system.

LLMs are just a faster and more wrong version of the copy-paste Stack Overflow workflow; it's just that now you don't even need to ask the right question to find the answer.

You have to teach students and new engineers never to commit a piece of code they don't understand. If you stop at "I don't know why this works," you will never get out of the famous multi-hour debug loop you get into with LLMs, or the similar multi-day build-debugging loop that everyone has been through.

The real thing LLMs do that is bad for learning is that you don't need to ask the right question to find your answer. This is good if you already know the subject, but if you don't, you're not getting that reinforcement in your short-term memory, and you will find that things you learned through LLMs are not retained as long as things you worked out yourself.

  • nyrikki 2 hours ago

    It is a bit more complicated, as it can be harmful for experts also, and the more reliable it gets the more problematic it becomes.

    Humans suffer from automation bias and other cognitive biases.

    Anything that causes disengagement from a process can be a challenge, especially for long-term maintainability and architectural erosion, which is mostly what I actively search for to try to avoid complacency with these tools.

    But for all humans, avoiding it takes active effort.

    IMHO, writing the actual code has always been less of an issue than focusing on domain needs, details, and maintainability.

    As distrusting automation is unfortunately one of the best methods of fighting automation bias, I try to balance encouraging junior individuals to use tools that boost productivity with making sure they still maintain ownership of the delivered product.

    If you use the red-green-refactor method, avoiding generative tools for the test and refactor steps seems to work.

    But selling TDD in general can be challenging, especially given Holmström's theorem and the tendency of people to write implementation tests instead of focusing on domain needs.

    It is a bit of a paradox that the better the tools become, the higher the risk is, but I would encourage people to try the above. Just don't make the mistake of reusing the prompts required to get from red to green as the domain tests; there is a serious risk of coupling to the prompts.
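
    A minimal sketch of that split, with hypothetical names (plain Node.js, no framework):

      const assert = require('node:assert');

      // RED: a hand-written test expressing a domain need, kept
      // independent of whatever prompt produces the implementation.
      function testTotalWithTax() {
        assert.strictEqual(totalWithTax({ net: 100, taxRate: 0.5 }), 150);
      }

      // GREEN: the one step delegated to the generative tool.
      function totalWithTax({ net, taxRate }) {
        return net * (1 + taxRate);
      }

      testTotalWithTax(); // REFACTOR, like RED, stays manual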

    We will see if this works for me long term, but I do think beginners manually refactoring, with thought, could be an accelerator.

    But only with an intentional focus on learning the why over time.

    • faizshah 33 minutes ago

      I completely reject this way of thinking. I remember when I was starting out, it was popular to say you learn less by using an IDE, and that you should just use a text editor, because you never learn how the system works if you rely on a run button, a debug button, or a WYSIWYG editor.

      Well, in modern software we stand on the shoulders of many, many giants; you have to start somewhere. Some things you may never need to learn (like git at a deep level, when the concepts of add, commit, rebase, pull, push, cherry-pick, and reset are enough even if you use a GUI), and some things you might invest in over time (like learning things about your OS so you can optimize performance).

      The way you use automation effectively is to automate the things you don't want to learn about and work on the things you do want to learn about. If you're a backend dev who wants to learn how to write an API in Actix, go ahead and copy-paste some ChatGPT code; you just need to learn the shape of the API and the language first. If you're a Rust dev who wants to learn how Actix works, don't just copy and paste the code: get ChatGPT to give you a tutorial, then write your API yourself using the docs.

    • nonrandomstring an hour ago

      > It is a bit of a paradox that the better the tools become, the higher the risk is

        "C makes it easy to shoot yourself in the foot; C++ makes it harder,
         but when you do it blows your whole leg off". -- Bjarne Stroustrup
  • rsynnott an hour ago

    I always wonder how much damage Stackoverflow did to programmer education, actually. There’s a certain type of programmer who will search for what they want to do, paste the first Stackoverflow answer which looks vaguely related, then the second, and so forth. This is particularly visible when interviewing people.

    It is… not a terribly effective approach to programming.

    • noufalibrahim an hour ago

      I'd qualify that (and the llm situation) with a level of abstraction.

      It's one thing to have the LLM generate a function call for you where you don't remember all the parameters. That's a low enough abstraction that it serves as a turbocharged doc lookup. It's also probably okay to get a basic setup (toolchain etc. for an ecosystem you're unfamiliar with). But to have it solve entire problems for you, especially when you're learning, is a disaster.
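
      The low-abstraction case looks something like this (hypothetical endpoint; the fetch options themselves are standard):

        // You know you want a POST with a JSON body; the LLM just saves
        // you re-looking-up the exact option names.
        const res = await fetch('/api/notes', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ text: 'hello' }),
        });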

    • exe34 an hour ago

      my workflow with stackoverflow is to try to get working code that does the minimum of what I'm trying to do, and only try to understand it after it works as I want it to. otherwise there's an infinite amount of code out there that doesn't work (because of version incompatibility, plain wrong code, etc) and I ran out of patience long ago. if it doesn't run, I don't want to understand it.

      • faizshah 31 minutes ago

        This is, in my opinion, the right way to use it. You can use Stack Overflow or ChatGPT to get to "It works!", but don't stop there; stop at "It works, and I know why it works, and I think this is the best way to do it." If you just stop at "It works!", you didn't learn anything and might be unknowingly creating new problems.

      • username135 11 minutes ago

        My general philosophy as well.

    • underlipton 23 minutes ago

      Leaning on SO was always the inevitable conclusion, though. "Write once" (however misinterpreted that may be) + age discrimination fearmongering hindering the transfer of knowledge from skilled seniors to juniors + the increasingly brutal competition to secure one's position by producing, producing, producing. With the benefit of the doubt and the willingness to cut/build in slack all dead, of course "learning how to do it right" is a casualty. Something has to give, and if no one's willing to volunteer a sacrifice, the break will happen wherever physically or mechanically convenient.

  • Buttons840 an hour ago

    You suggest learning the mental model behind the system, but is there a mental model behind web technologies?

    I'm reminded of the Wat talk: https://www.destroyallsoftware.com/talks/wat

    Is it worth learning the mental model behind this system? Or am I better off just shoveling LLM slop around until it mostly works?

    • tambourine_man an hour ago

      Of course there is. The DOM stands for Document Object Model. CSS uses the box model. A lot of thought went into all these standards.
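
      And the model is concrete enough to poke at directly (real DOM APIs, trivial example):

        // The page is literally a tree of objects: create a node,
        // attach it, and the document (and its rendering) updates.
        const note = document.createElement('p');
        note.textContent = 'Hello, DOM';
        document.body.appendChild(note);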

      JavaScript is weird, but show me a language that doesn’t have its warts.

      • hansvm 43 minutes ago

        > JavaScript is weird, but show me a language that doesn’t have its warts.

        False equivalence much? Languages have warts. JS is a wart with just enough tumorous growth factors to have gained sentience and started its plans toward world domination.

      • senko an hour ago

        All languages have their warts. In JavaScript, the warts have their language.

    • faizshah an hour ago

      The modern software space is too complex for any one person to know everything. There’s no one mental model. Your expertise over time comes from learning multiple mental models.

      For example, if you are a frontend developer doing TypeScript in React, you could learn how React's renderer works, or how TypeScript's type system works, or how the browser's event listeners work. Over time you accumulate this knowledge through the projects you work on and the things you debug in prod. Or you can purposefully learn it through projects and studying. We also build up mental models of the way our product and its dependencies work.

      The reason a coworker might appear to be 10x or 100x more productive than you is that they are able to predict things about the system and arrive at solutions faster. Why are they able to do that? It's not because they use vim or type at 200 wpm. It's because they have a mental model of the way the system works that might be more refined than your own.

    • umpalumpaaa 12 minutes ago

      Uhhh. I hated JS for years and years until I started to actually look at it.

      If you just follow a few relatively simple rules, JS is actually very nice and "reliable". Those rules are also relatively straightforward: let/const over var, === unless you know better, make sure you know about Number.isInteger, Number.isSafeInteger, etc. (there were a few more rules like this; I fail to recall all of them, as it has been a few years since I touched JS). Hope you get the idea.
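
      A rough sketch of those rules in action (all real JS behavior):

        const limit = 10;       // const/let: block-scoped, no hoisting surprises
        '1' == 1;               // true  - loose equality coerces types
        '1' === 1;              // false - strict equality does not
        Number.isInteger(42);   // true
        Number.isSafeInteger(2 ** 53);  // false - past 2^53 - 1, precision is lost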

      Also when I looked at JS I was just blown away by all the things people built on top of it (babel, typescript, flowtype, vue, webpack, etc etc).

  • sgustard an hour ago

    Quite often I'm incorporating a new library into my source. Every new library involves a choice: do I just spend 15 minutes on the Quick Start guide (i.e. "copy-paste"), or a day reading detailed docs, or a week investigating the complete source code? All of those are tradeoffs between understanding and time to market. LLMs are another tool to help navigate that tradeoff, and for me they continue to improve as I get better at asking the right questions.

    • LtWorf 41 minutes ago

      If you spend less than 15 minutes deciding which library to include, and whether to include it at all, you're probably doing it wrong.

      • smikhanov 37 minutes ago

        No, that person is doing it right. That’s 15 minutes of your life you’ll never get back; no library is worth it.

        • faizshah 28 minutes ago

          If your goal is “ship it” then you might be right. If your goal is “ship it, and don’t break anything else, and don’t cause any security issues in the future and don’t rot the codebase, and be able to explain why you did it that way and why you didn’t use X” then you’re probably wrong.

  • baxtr 2 hours ago

    I wonder if this is an elitist argument.

    AI empowers normal people to start building stuff. Of course it won’t be as elegant and it will be bad for learning. However these people would have never learned anything about coding in the first place.

    Are we senior dev people a bit like carriage riders that complain about anyone being allowed to drive a car?

    • UncleMeat an hour ago

      My spouse is a university professor. A lot of her students cheat using AI. I am sure that they could be using AI as a learning mechanism, but they observably aren't. Now, the motivations for using AI to pass a class are different, but I think it is important to recognize that there is using AI to build something and learn, and there is using AI to just build something.

      Engineering is also the process of development and maintenance over time. While an AI tool might help you build something that functions, that's just the first step.

      I am sure that there are people who leverage AI in such a way that they build a thing and also ask it a lot of questions about why it is built a certain way, and seek to internalize that. I'd wager that this is a small minority.

      • KoolKat23 17 minutes ago

        Back in school it was, on occasion, considered cheating to use a calculator; the purpose was to encourage learning. It would be absurd to ban the use of calculators in the work environment; it's your responsibility as an employee to use them correctly. As you say, that's the first step.

    • senko an hour ago

      The problem with using the current crop of LLMs for coding, if you're not a developer, is that they're leaky abstractions. If something goes wrong (as it usually will in software development), you'll need to understand the underlying tech.

      In contrast, if you're a frontend developer, you don't need to know C++ even though browsers are implemented in it. If you're a C++ developer, you don't need to know assembly (unless you're working on JIT).

      I am convinced AI tools for software development will improve to the point that non-devs will be able to build many apps now requiring professional developers[0]. It's just not there yet.

      [0] We already had that. I've seen a lot of in-house apps for small businesses built using VBA/Excel/Access in Windows (and HyperCard etc on Mac). They've lost that power with the web, but it's clearly possible.

    • lovethevoid an hour ago

      I'm a huge fan of drivers with no experience or knowledge of a car getting on the highway. After all, look at how empowered they are!

    • faizshah an hour ago

      It has nothing to do with your level of knowledge or experience as a programmer. It has to do with how you learn: https://www.hup.harvard.edu/books/9780674729018

      To learn effectively you need to challenge your knowledge regularly, elaborate on that knowledge and regularly practice retrieval.

      Building things solely relying on AI is not effective for learning (if that is your goal) because you aren’t challenging your own knowledge/mental model, retrieving prior knowledge or elaborating on your existing knowledge.

    • lawn an hour ago

      Maybe the senior developers are just jaded from having to maintain code that nobody, not even its authors, knows how it's supposed to work?

      • jcgrillo an hour ago

        I've gotten a lot of mileage in my career by following this procedure:

        1. Talk to people, read docs, skim the code. The first objective is to find out what we want the system to do.

        2. Once we've reverse-engineered a sufficiently detailed specification, deeply analyze the code and find out how well (or more often poorly) it actually meets our goals.

        3. Do whatever it takes to make the code line up with the specification. As simply as possible, but no simpler.

        This recipe gets you to a place where the codebase is clean, there are fewer lines of code (and therefore fewer bugs, better development velocity, often better runtime performance). It's hard work but there is a ton of value to be gained from understanding the problem from first principles.

        EDIT: You may think the engineering culture of your organization doesn't support this kind of work. That may be true, in which case it's incumbent upon you to change the culture. You can attempt this by using the above procedure to find a really nasty bug and kill it loudly and publicly. If this results in a bunch of pushback then your org is beyond repair and you should go work somewhere else.

  • stonethrowaway 2 hours ago

    If engineers are still taught engineering as a discipline then it doesn’t matter what tools they use to achieve their goals.

    If we are calling software developers who don’t understand how things work, and who can get away with not knowing how things work, engineers, then that’s a separate discussion of profession and professionalism we should be having.

    As it stands there’s nothing fundamentally rooted in software developers having to understand why or how things work, which is why people can and do use the tools to get whatever output they’re after.

    I don’t see anything wrong with this. If anyone does, then feel free to change the curriculum so students are graded and tested on knowing how and why things work the way they do.

    The pearl clutching is boring and tiresome. Where required we have people who have to be licensed to perform certain work. And if they fail to perform it at that level their license is taken away. And if anyone wants to do unlicensed work then they are held accountable and will not receive any insurance coverage due to a lack of license. Meaning, they can be criminally held liable. This is why some countries go to the extent of requiring a license to call yourself an engineer at all.

    So where engineering, actual engineering, is required, we already have protocols in place that ensure things aren’t done on a “trust me bro” level.

    But for everyone else, they're not held accountable whatsoever, and there's nothing wrong with using whatever tools you need or want to use, right or wrong. If I want to butt-splice a connector, I'm probably fine. But if I want to wire in a 3-phase breaker on a commercial property, I'm either getting it done by someone licensed, or I'm looking at jail time if things go south. And engineering is no different.

    • RodgerTheGreat an hour ago

      In many parts of the world, it is illegal to call yourself an "engineer" without both appropriate certification/training and legal accountability for the work one signs off upon, as with lawyers, medical doctors, and so on. It's frankly ridiculous that software "engineers" are permitted the title without the responsibility in the US.

    • faizshah 18 minutes ago

      If your goal is just to get something working then go right ahead. But if your goal is to be learning and improving your process and not introducing any new issues and not introducing a new threat etc. then you’re better off not just stopping at “it works” but also figuring out why it works and if this is the right way to make it work.

      Dismissing the desire to get better at using something as pearl clutching is frankly why everything has become so mediocre.

tekchip 2 hours ago

While I don't disagree, and I understand the author's concern, the bottom line is that the author, and others of the same mind, will have to face facts: LLMs are a genie that isn't going back in the bottle. Humans have LLMs and will use them. The teaching angle needs to change to acknowledge this. "You need to learn long hand math because you won't just have a calculator in your pocket." Whoopsie! Everyone has a smart phone. Now I'm going back to school for my degree, and classes are taught expecting calculators, even encouraging the use of various math and graphing websites.

By all means urge folks to learn the traditional, arguably better, way, but also teach them to use the tools available well and safely. The tools aren't going away, and they will continue to improve. Endeavour to make coders who use the tools well produce 2x, 5x, 8x, 20x the amount of valuable, well-written code of those today.

  • BoiledCabbage an hour ago

    > "You need to learn long hand math because you won't just have a calculator in your pocket." Whoopsie! Everyone has a smart phone.

    I hear this so often that I have to reply. It's a bad argument. You do need to learn longhand math, and to be comfortable with arithmetic. The reason given was incorrect (and a bit flippant), but you actually do.

    Anyone in an engineering or STEM-based field needs to be able to estimate and ballpark numbers mentally. It's part of reasoning with numbers. Usually that means mentally doing a version of the arithmetic on rounded versions of those numbers.

    Not being comfortable doing math means not being able to reason with numbers, which impacts everyday things like budgeting and home finances. Have a conversation with someone who isn't comfortable with math and see how much they struggle with intuition for even easy things.

    The reason to know those concepts is because basic math intuition is an essential skill.

  • jcgrillo 17 minutes ago

    You still have to manually review and understand every single line of code and your dependencies. To do otherwise is software malpractice. You are responsible for every single thing your computers do in production, so act like it. The argument that developers can all somehow produce 10x or more the lines of code by leaning on a LLM falls over in the face of code review. I'd say at most you'll get 2x, but even that's pushing it. I personally will reject pull requests if I ask the author a question about how something works and they can't answer it. Thankfully, this hasn't happened (yet).

    If you have an engineering culture at your company of rubber-stamp reviews, or no reviews at all, change that culture or go somewhere better.

  • lawn an hour ago

    > "You need to learn long hand math because you won't just have a calculator in your pocket." Whoopsie! Everyone has a smart phone.

    That's a shitty argument, and it wasn't even true back in the day (cause every engineer had a computer when doing their work).

    The argument is that you won't develop a mental model if you rely on the calculator for everything.

    For example, how do you quickly estimate whether the result you calculated is reasonable, or whether you made an error somewhere? In the real world you can't just look up the answer, because there isn't one.

    • KoolKat23 13 minutes ago

      This allows you more time to develop a mental model, perhaps not at a learning stage but at a working stage. The LLM shows you what works and you can optimize it thereafter. It will even give you handy inline commentary (probably better than what a past developer provided on existing code).

yumraj 2 hours ago

I’ve been thinking about this, since LLMs helped me get something done quickly in languages/frameworks that I had no prior experience in.

But I realized a few things. While they are phenomenally great when starting new projects and on small code bases:

1) one needs to know programming/software engineering in order to use these well. Else, blind copying will hurt, and you won't know what's happening when the code doesn't work

2) production code is a whole different problem that one will need to solve. Copy pasters will not know what they don’t know and need to know in order to have production quality code

3) Maintenance of code, adding features, etc. is going to become n-times harder the more of the code is LLM-generated. Even large context windows will start failing, and hallucinations may screw things up without one even realizing

4) debugging and bug fixing, related to maintenance above, is going to get harder.

These problems may get solved, but till then:

1) we’ll start seeing a lot more shitty code

2) the gap between great engineers and everyone else will become wider

  • ainiriand 2 hours ago

    Related discussion we were having now on Mastodon: https://floss.social/@janriemer/113260186319661283

    • yumraj an hour ago

      I hadn’t even gone that far in my note above, but that is exactly correct.

      We’ll have a resurgence of “edge-cases” and all kinds of security issues.

      LLMs are a phenomenal Stackoverflow replacement and better at creating larger samples than just a small snippet. But, at least at the moment, that’s it.

      • james_marks 34 minutes ago

        100% on the SO replacement, which is a shame, as I loved and benefited deeply from SO over the years.

        I wonder about the proliferation of edge cases. Probably true, but here's an optimistic outlook: at least in my own work, LLMs deliver a failing test faster given new information, and the edge gets covered faster.

  • tomrod 2 hours ago

    A big part of the solution to this will be more, more focused, and more efficient QA.

    Test-driven development can inherently be cycled until correct (loosely analogous to the generator-discriminator loop of a Generative Adversarial Network).

    I heard a lot of tech shops gutted their QA departments. I view that as a major error on their part, provided QA folks are current with modern tooling (not only GenAI) and not trying to do everything manually.

    • yumraj an hour ago

      Many years ago I was at a very large software company, that everyone had heard of.

      Blackbox QA was entirely gutted, leaving only some whitebox QA. Their titles were changed from QA engineer to software engineer. Devs were supposed to do TDD and that's it; there's a fundamental issue there that people don't even seem to realize.

      Anyway, we digress.

  • falcor84 2 hours ago

    > Even large context windows will start failing

    What do you mean by that?

    • yumraj an hour ago

      If you have a large code base, a software engineer has to look at many files, and step through a big stack to figure out the bugs. Forget about concurrency and multi-threaded scenarios.

      I’m assuming that an LLM will have to ingest that entire code base as part of the prompt to find the problem, refactor the code, add features that span edits across numerous files.

      So, at some point, even the largest context window won’t be sufficient. So what do you do?

      Perhaps a RAG of the entire codebase, I don’t know. Smarter people will have to figure it out.

  • lawn an hour ago

    And maintenance, adding features to legacy code and debugging, is much more common (and important) than getting small greenfield projects up and running.

    • yumraj an hour ago

      Exactly my point.

steve_adams_86 3 hours ago

I’ve come to the same conclusion in regards to my own learning, even after 15 years doing this.

When I want a quick hint for something I understand the gist of, but don’t know the specifics, I really like AI. It shortens the trip to google, more or less.

When I want a cursory explanation of some low level concept I want to understand better, I find it helpful to get pushed in various directions by the AI. Again, this is mostly replacing google, though it’s slightly better.

AI is a great rubber duck at times too. I like being able to bounce ideas around and see code samples in a sort of evolving discussion. Yet AI starts to show its weaknesses here, even as context windows and model quality have evidently ballooned. This is where real value would exist for me, but progress seems slowest.

When I get an AI to straight up generate code for me I can’t help but be afraid of it. If I knew less I think I’d mostly be excited that working code is materializing out of the ether, but my experience so far has been that this code is not what it appears to be.

The author’s description of ‘dissonant’ code is very apt. This code never quite fits its purpose or context. It’s always slightly off the mark. Some of it is totally wrong or comes with crazy bugs, missed edge cases, etc.

Sure, you can fix this, but it feels a bit too much like using the wrong tool for the job and then correcting it after the fact. Worse still, in the context of learning, you're getting all kinds of false-positive signals that X or Y works (the code ran!!), when in reality it's terrible practice, or not working for the right reasons, or not doing what you think it does.

The silver lining of LLMs and education (for me) is that they demonstrated something to me about how I learn and what I need to do to learn better. Ironically, this does not rely on LLMs at all, but almost the opposite.

  • prisenco an hour ago

    I'm of the same mind, AI is a useful rubber duck. A conversation with a brilliant idiot: Can't take what it says at face value but perfect for getting the gears turning.

    But after a year of autocomplete/code generation, I chucked that out the window. I switched it to a command (ctrl-;) and hardly ever use it.

    Typing is the smallest part of coding but it produces a zen-like state that matures my mental model of what's being built. Skipping it is like weightlifting without stretching.

boredemployee 3 hours ago

Well, I must admit, LLMs made me lose the joy of learning programming and made me realize I like to solve problems.

There was a time I really liked to go through books and documentation, learn and run the code, etc., but those days are gone for me. I prefer to enjoy my free time and go to the gym now.

  • SoftTalker 3 hours ago

    I'm the same, and I think it's a product of getting older and being more and more acutely aware of the passage of time and not wanting to spend time on bullshit things. Nothing to do with LLMs. I still like solving problems in code but I no longer get any joy from learning yet another new language or framework to do the same things we've been doing for the past 30 years, but with a different accent.

  • mewpmewp2 3 hours ago

    It is kind of the opposite for me. I do a lot more side projects now, because I enjoy building, and I enjoy using LLMs as a multiplying tool, so I build more in the same amount of time. I think integrating LLMs into your workflow is also problem solving, and an exciting novel way to problem-solve at that. It gets my imagination really running, and it is awesome to be able to exchange back and forth and see things from more perspectives, since an LLM can give me more different and varied points of view than I alone could have come up with.

    • anonzzzies 2 hours ago

      Same here; I am building so much more and faster than ever in my life and it is great. When I was a kid in the early 80s learning about AI, like everyone who mentored me, I thought it would be replacing programmers by 2000; that might still happen but for now the productivity is a blast.

    • aerhardt 2 hours ago

      I am in your camp. LLMs have made everything better for me, both learning and producing.

    • VBprogrammer 2 hours ago

      LLMs really help me with the blank page problem. Just getting something, even partially working, to built upon can be a huge win.

    • volker48 41 minutes ago

      Same for me. Many times I would have an idea, but I would think ahead of all the mundane and tedious things I would need to complete to implement it and not even get started. Now I work with the LLM to do those more tedious and mechanical parts and frankly the LLM is generating pretty similar code to what I would have written anyway and if not I just rewrite it. A few times I've even been pleasantly surprised when the LLM took an approach I wouldn't have considered and I actually liked it better.

  • atomic128 2 hours ago

    This sentiment, I observe it everywhere. My coworkers and the people I interact with in engineering communities. A process of hollowing out and loss of motivation, a loss of meaning and purpose, as people delegate more and more of their thinking to the LLM.

    Some may ruminate and pontificate and lament the loss of engineering dignity, maybe even the loss of human dignity.

    But some will realize this is an inevitable result of human nature. They accept that the minds of their fellow men will atrophy through disuse, that people will rapidly become dependent and cognitively impaired. A fruitful stance is to make an effort to profit from this downfall, instead of complaining impotently. See https://news.ycombinator.com/item?id=41733311

    There's also an aspect of tragic comedy. You can tell that people are dazzled by the output of the LLM, accepting its correctness because it talks so smart. They have lost the ability to evaluate its correctness, or will never develop said ability.

    Here is an example from yesterday. This is totally nonsensical and incorrect yet the commenter pasted it into the thread to demonstrate the LLM's understanding: https://news.ycombinator.com/item?id=41747089

    Grab your popcorn and enjoy the show. "Nothing stops this train."

    • lgka 2 hours ago

      The wrong July 16th answer is hilarious! Half of the examples that are posted here as proof of the brilliance of LLMs are trivially wrong.

    • olddustytrail an hour ago

      No, they posted it to the thread to illustrate that the problem was in the training set. Which it obviously was.

      I don't know where your interpretation came from. Perhaps you're an LLM and you hallucinated it? :)

    • bongodongobob 2 hours ago

      It's called getting older. You guys are so dramatic about the llm stuff lol

      • sibeliuss 36 minutes ago

        So dramatic... There are so many people who are so psyched on what LLMs have allowed them to achieve, and so many beginners that can't believe they've suddenly got an app, and so many veterans who feel disenchanted, and so on and so forth. I'm quite tired of everyone generalizing LLM reactions based on their own experience!

      • lgka 2 hours ago

        No, it is called having one's open source output stolen by billionaires who then pay enough apologists, directly or indirectly, to justify the heist.

        • bongodongobob an hour ago

          No one stole anything from you. Other than maybe your self esteem.

          • loqeh 6 minutes ago

            Anyone who contradicts a (probably paid) apologist must be either old or lacking in self esteem. Well done, your masters will be happy.

          • LtWorf 8 minutes ago

            Self esteem? I mean plagiarism is a compliment. It's also a licence violation and a shame rich capitalists do it to screw everyone else as usual.

Rocka24 3 hours ago

I strongly disagree. I was able to learn so much about web development by using AI; it streamlines the entire knowledge gathering and dissemination process. By asking for general overviews, then poking into the specifics of why things work the way they do, it's possible to get extremely functional and practical knowledge of almost any application of programming. For the driven and ambitious hacker, LLMs are practically invaluable for self-learning. I think you have a case where you're simply dealing with the classic self-inflicted malady of laziness.

  • jay_kyburz an hour ago

    When I ask AI questions about things I know very little about, I seem to get quite good results. When I ask it questions about things I know a lot about, I get quite bad answers.

  • lovethevoid 2 hours ago

    What have you learned about web development using AI that skimming the MDN docs couldn't net you?

    • Rocka24 2 hours ago

      Well, the issue isn't about acquiring the knowledge in general. I think so far in my learning journey I've come to realize that "practical learning" is much better than learning in the hope that something will be useful. For instance, almost everyone in the American education system at some point was forced to memorize that one mol of gas occupies 22.4 L at STP, but almost no one will ever use that knowledge again.

      Going through the actual real-world issues of web development with an LLM on the side that you can query for any issue is infinitely more valuable than taking a course in web development, IMO, because you actually learn how to DO the things, instead of getting a toolbox half of whose items you'll never use and a quarter of which you have no idea how to use. I strongly support learning by doing, and I also think the education system should be changed in support of that idea.

      • righthand 2 hours ago

        There are plenty of courses, classes, and schooling of the kind you describe; it's just a matter of cost. An LLM is more useful for studying because it feels interactive; however, a lot of software development in general is applying what you've learned to what you need.

        If you want to spend 10 years growing your career and understanding math with the help of an LLM you will work it out eventually, by gambling on your role at a company and their offering of projects.

        If you want to spend 6 months - 6 years understanding the pieces you need for a professional career at various levels (hence the range), you pay for that kind of education.

      • albedoa 38 minutes ago

        So...are you able to articulate what you have learned about web development through LLMs that skimming MDN wouldn't net you?

        Your "strong" disagreement and claim that you were able to learn so much about web development by using AI should be able to withstand simple and obvious followup questions such as "like what?".

    • sibeliuss 21 minutes ago

      Who wants to read all of those docs that they know nothing about?

      Better: use an LLM, get something working, realize it doesn't work _exactly_ as you need it to, then refer to docs. Beginners will need to learn this, however. They still need to learn to think. But practical application and a true desire to build will get them to where they need to be.

lofaszvanitt 3 hours ago

When a person is using LLMs for work and the result is abysmal, that person must go. So easy. LLMs will make people dumber in the long term, because the machine thinks instead of them, and they will readily accept the result it gives if it works. This will have horrifying results in 1-2 generations. Just like social media killed people's attention span.

But of course we don't need to regulate this space. Just let it go, all in wild west baby.

  • fhd2 2 hours ago

    It's the curse of our industry that we have such long feedback cycles. In the first year or so of a new system, bad code is similarly productive to good code, and often faster or at least cheaper to produce.

    Now a few more years down the line, you might find yourself in a mess and productivity grinds to a halt. Most of the decision makers who caused this situation are typically not around anymore at that point.

  • thatcat 3 hours ago

    California passed regulations based on model size.

    https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml...

    • 1propionyl 2 hours ago

      It was not signed into law, it was vetoed by the governor.

      https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Ve...

      • migro23 38 minutes ago

        Thanks for the link. Reading the rationale for not signing the legislation, the governor wrote:

        > "By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good".

        This doesn't make much sense to me. Smaller models might be more dangerous, so let's not place safeguards on the larger models, because they advance innovation in favor of the public good? Some pretty big assumptions are made here; does this make sense to anyone else?

        • LtWorf 5 minutes ago

          It makes sense to think he got bribed.

wkirby 2 hours ago

The reason I am a software engineer — why it keeps me coming back every week — is the satisfying click when something I didn’t understand becomes obvious. I’ve talked to a lot of engineers over the last 15 years of doing this, and for most of them, they possess some version of the same compulsion. What makes good engineers tick is, imo, a tenacity and knack for solving puzzles. LLMs are useful when they let you get to the meat of the problem faster, but as the article says, they’re a hindrance when they are relied on to solve the problem. Knowing the difference is hard, a heuristic I work on with my team is “use an LLM if you already know the code you want to write.” If you don’t already know the right answer you won’t know if the LLM is giving you garbage.

xyst 2 hours ago

Anybody remember the days of Macromedia? I think it was Dreamweaver that spat out WYSIWYG trash from people who didn't know better.

For a period of time there was a segment of development cleaning up this slop or just redoing it entirely.

The AI-generated slop reminds me of that era.

weitendorf 27 minutes ago

I can’t help but think part of the problem is that web development is also an impediment to learning web development.

IME there is a lot more arcana and trivia necessary to write frontend/web applications than most other kinds of software, mostly because it's both regular programming and HTML/CSS/browser APIs. While you can build a generalized intuition for programming, the only way to master the rest of the content is through sheer exposure: tons of googling, reading SO and web documentation, and trial and error getting it to do the thing. If you're lucky you might have a more experienced mentor to help you. And yes, there are trivia and arcana needed to be productive in any programming domain, but you can drop a freshly minted CS undergrad into a backend engineering role and expect them to be productive much faster than in frontend (perhaps partly why frontend tends to have a higher proportion of developers with non-CS backgrounds).

It doesn’t help that JavaScript and browsers are typically “fail and continue”, nor that there may be several HTML/CSS/browser features all capable of implementing the same behavior but with caveats and differences that are difficult to unearth even from reading the documentation, such as varying support across browsers or bad interactions with other behavior.

LLMs are super helpful for dealing with the arcana. I've recently been writing a decent amount of frontend and UI code after spending several years doing backend/systems/infra, and I am so much more productive with LLMs than without, especially when it comes to HTML and CSS. I kind of don't care that I'll never know the theory behind "the right way to center a div"; as long as the LLM is good enough at doing it for me, why does it matter? And if it isn't, I'll begrudgingly go learn it. It's like caring that people don't know the trick to check "is a number even" in assembly.
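
For reference, the trick alluded to is just reading the lowest bit, the same in JS as in assembly:

  // Even numbers have a zero in the lowest bit.
  const isEven = (n) => (n & 1) === 0;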

btbuildem 2 hours ago

I disagree with the premise of the article -- for several reasons. You could argue that an LLM-based assistant is just a bigger footgun, sure. Nothing will replace a teacher who explains the big picture and the context. Nothing will replace learning how to manage, handle and solve problems. But having a tireless, nimble assistant can be a valuable learning tool.

Web development is full of arbitrary, frustrating nonsense, layered on and on by an endless parade of contributors who insist on reinventing the wheel while making it anything but round. Working with a substantial web codebase can often feel like wading through a utility tunnel flooded with sewage. LLMs are actually a fantastic hot blade that cuts through most of the self-inflicted complexities. Don't learn webpack, why would you waste time on that. Grunt, gulp, burp? Who cares, it's just another in a long line of abominations careening towards a smouldering trash heap. It's not important to learn how most of that stuff works. Let the AI bot churn through that nonsense.

If you don't have a grasp on the basics, using an LLM as your primary coding tool will quickly leave you with a tangle of incomprehensible, incoherent code. Even with solid foundations and experience, it's very easy to go just a little too far into the generative fairytale.

But writing code is just a small part of software development. While reading code doesn't seem to get talked about as much, it's the bread and butter of any non-solo project. It's also a very good way to learn -- look at how others have solved a problem. Chances that you're the first person trying to do X are infinitesimally small, especially as a beginner. Here, LLMs can be quite valuable to a beginner. Having a tool that can explain what a piece of terse code does, or why things are a certain way -- I would've loved to have that when I was learning the trade.

xnx 3 hours ago

"Modern" web development is so convoluted I'm happy to have a tool to help me sort through the BS and make something useful. In the near future (once the thrash of fad frameworks and almost-databases has passed) there may be a sane tech stack worth knowing.

  • lolinder 3 hours ago

    This exact comment (with subtle phrasing variations) shows up in every article that includes "web" in the title, but I feel like I'm living in an alternate universe from those who write comments like these. Either that or the comments got stuck in the tubes for a decade and are just now making it out.

    My experience is that React is pretty much standard these days. People create new frameworks still because they're not fully satisfied with the standard, but the frontend churn is basically over for anyone who cares for it to be. The tooling is mature, IDE integration is solid, and the coding patterns are established.

    For databases, Postgres. Just Postgres.

    If you want to live in the churn you always can and I enjoy following the new frameworks to see what they're doing differently, but if you're writing this live in 2024 and not stuck in 2014 you can also just... not?

    • zelphirkalt 2 hours ago

      React and frameworks based on it being used mostly for websites, where none of that stuff is needed in the first place, is part of what is wrong with frontend development.

      • lolinder 2 hours ago

        Then write your websites JavaScript-free or with minimal vanilla JS, no frameworks (much less framework churn) needed. That's been possible since the foundation of the web, and is nearly unchanged to this day for backwards compatibility reasons.

        • zelphirkalt 2 hours ago

          Yes, of course, you are right. And that is what I would do. And actually what I did do. Recently made a JS-free personal website, still fully responsive and has some foldable content and so on.

          However, people at $job would not listen to me when I said that it could be done without jumping on the React hype train, and went ahead with React and a framework based on React to make a single-page app, completely unnecessarily, occupying multiple frontend devs full time with it, instead of simply using a traditional web framework with a templating engine and knowledge of HTML and CSS. So I am no longer in a role to make some as-little-JS-as-possible thing happen. I was a fullstack developer, but I don't want to deal with the madness, so I withdrew from the frontend part.

          See, I don't have a problem with doing this. It is just that people think they need a "modern web framework" and single-page apps and whatnot, when they actually don't: they have very few interactive widgets, on pages that are mostly informational in nature. Then comes the router update taking 2 weeks, or a framework update taking 2-3 weeks, or a new TS version being targeted... frequent work that wouldn't even exist with a simpler approach.

  • grey-area 3 hours ago

    You don't have to use 'Modern Frameworks' (aka an ahistorical mish-mash of Javascript frameworks) to do web development at all. I'm really puzzled as to why people refer to this as modern web development.

    If you're looking for a sane tech stack there are plenty of languages to use which are not javascript and plenty of previous frameworks to look at.

    Very little javascript is needed for a useful and fluid front-end experience and the back end can be whatever you like.

    • xnx 2 hours ago

      Absolutely true. All technologies that previously existed (e.g. PHP3 + MySQL) still exist. Unfortunately, if you're looking to make use of other people's libraries, it is very difficult to find them for "obsolete" stacks.

    • zelphirkalt 2 hours ago

      Well, I wish more developers had your insight and could make it heard at their jobs. Then the web would be in a better state than it is today.

      • lovethevoid 2 hours ago

        Vast majority of the web's downfalls stem from advertising and tracking. Unless you're proposing a way to remove advertising, then the problems will remain no matter what tech the developers opted for.

        • mdhb 2 hours ago

          You are conflating two entirely different issues. Both are true but neither at the expense of the other

          • lovethevoid 2 hours ago

            They aren't entirely different issues at all and are quite tightly interwoven. It doesn't matter how many ms you shave off by using/not using react when your page loads a full screen video ad and has 50MB of trackers to aid in its quest to access personal info.

            • mdhb 14 minutes ago

              They are very literally different issues.

            • jay_kyburz an hour ago

              One is a tech problem, the other is a business problem.

              There are no ads on my websites.

  • mplewis 3 hours ago

    It’s only been thirty years, but keep waiting. I’m sure that solution is just around the corner for you.

orwin 3 hours ago

For people who, like me, mostly do backend/network/system development and who disagree on how helpful LLMs are (basically a waste of time if you're using them for anything other than rubber ducking/writing test cases/autocomplete): LLMs can write a working front-end page/component in 10s. Not an especially well-designed one, but "good enough". I find they especially shine at writing the HTML/CSS parts. They cannot write an FSM on their own, so when I write a page I still write the states, actions, and the reducer, but then I can generate the rest, and it's really good.
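
To illustrate the split (hypothetical names, plain JS): the states, actions, and reducer below are the kind of thing I keep hand-written; the markup and styling around them are what I let the model generate.

  // Hand-written FSM skeleton: explicit states and transitions.
  const initialState = { status: 'idle', data: null, error: null };

  function reducer(state, action) {
    switch (action.type) {
      case 'FETCH':   return { ...state, status: 'loading' };
      case 'RESOLVE': return { status: 'done', data: action.data, error: null };
      case 'REJECT':  return { status: 'error', data: null, error: action.error };
      default:        return state;
    }
  }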

  • dopp0 2 hours ago

    which LLM are you using for those frontend use cases? chatgpt? and do you ask in the prompts for some framework such as tailwind?

aatarax 27 minutes ago

This section sums it up, and I agree with the author here:

> LLMs are useful if you already have a good mental model and understanding of a subject. However, I believe that they are destructive when learning something from 0 to 1.

Super useful if you have code in mind and you can get an LLM to generate that code (eg, turning a 10 minute task into a 1 minute task).

Somewhat useful if you have a rough idea in mind, but need help with certain syntax and/or APIs (eg, you are an experienced python dev but are writing some ruby code).

Useful for researching a topic.

Useless for generating code where you have no idea if the generated code is good or correct.

dennisy 3 hours ago

I feel this idea extends past just learning; I worry that using LLMs to write code is making us all lazy and unfocused thinkers.

I personally have banned myself from using any in-editor assistance where you just copy the code directly over. I still use ChatGPT, but without copy-pasting any code, more along the lines of how I would use search.

  • steve_adams_86 3 hours ago

    I do this as well. I have inline suggestions enabled with supermaven (I like the tiny, short, fast suggestions it creates), but otherwise I’m really using LLMs to validate ideas, not actually generate code.

    I find supermaven helps keep me on track because its suggestions are often in line with where I was going, rather than branching off into huge snippets of slightly related boilerplate. That’s extremely distracting.

    • dennisy 3 hours ago

      Yes! The other point is that it is also just distracting, as you are thinking through a hard problem, to have code popping up which you inevitably end up reading, even if you know what you planned to write.

      Just had a glimpse at supermaven and I'm not sure why it would be better; the site suggests it is a faster copilot.

      • steve_adams_86 an hour ago

        It’s better for me because the suggestions are much faster and typically more brief and useful. However, I haven’t used copilot for quite a while, so it might be similar these days. I recall it having very verbose, tangential suggestions.

heisenbit 34 minutes ago

Not just web development learning is affected. Students are handing in AI-generated homework in all kinds of courses. The problem, of course, is that part of learning depends on spaced repetition (ask any AI how it learned ;-) ), so skipping that part, all across the board, is already having an impact.

cladopa an hour ago

I disagree. I am a programmer and entrepreneur with an engineering education. I know lots of languages very well (C, C++, Scheme, Python), and I made my own tech company, so managing it takes a big amount of my time.

I always wanted to program (and understand deeply) the web and could not. I bought books and videos, I went to courses with real people, but I could not progress. I had limited time, and there were so many different things you had to learn at once, like CSS and JS and HTML and infinite frameworks.

Thanks to ChatGPT and Claude I have understood web development, deeply. You can ask both general and deep questions and it helps you like no teacher could (the teachers I had access to).

Something I have done is create my own servers to understand what happens under the hood. No jQuery teacher could help with that. But ChatGPT could.

AI is a great tool if you know how to use it.

gwbas1c 2 hours ago

All the mistakes Ben describes smell like typical noob / incompetent programmer mistakes.

All the LLM is doing is helping people make the same mistakes... faster.

I really doubt that the LLM is the root cause of the mistakes, because (pre-LLM) I've come across a lot of similar mistakes. The LLM doesn't magically understand the problem; instead, a noob / incompetent programmer applies the wrong solution.

  • mdhb 2 hours ago

    The examples he gives were explicitly called out as mistakes you wouldn’t normally make as a beginner because they are so esoteric and I don’t disagree with him at all on that one.

    • gwbas1c 2 hours ago

      > A page written in HTML and vanilla JavaScript, loaded from the public/ directory, completely outside of the Next.js + React system.

      I once had a newcomer open up a PR that completely bypassed the dependency injection system.

      > Vanilla JavaScript loaded in via filesystem APIs and executed via dangerouslySetInnerHTML

      I wish I had more context on this one, it looks like someone is trying to bypass React. (Edit) Perhaps they learned HTML, wanted to set the HTML on an element, and React was getting in the way?

      > API calls from one server-side API endpoint to another public API endpoint on localhost:3000 (instead of just importing a function and calling it directly)

      I once inherited C# code that, instead of PInvoking to call a C library, pulled in IronPython and then used the Python wrapper for the C library.
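
      For the localhost:3000 one, the fix is usually a one-liner; roughly (a hypothetical Next.js-style route handler, names made up):

        import { getUsers } from '@/lib/users'; // shared server-side function

        export async function GET() {
          // Anti-pattern: an HTTP round-trip to a sibling endpoint in the
          // same process, e.g. fetch('http://localhost:3000/api/users').
          // Direct call: same data, no network hop, no hardcoded port.
          const users = await getUsers();
          return Response.json(users);
        }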

calibas 23 minutes ago

Having something else do something for you is an impediment to learning anything.

jt2190 an hour ago

> Use of LLMs hinders learning of web development.

I’m sure this is true today, but over time I think this will become less true.

Additionally, LLMs will significantly reduce the need for individual humans to use a web browser to view advertisement-infested web pages or bespoke web apps that are difficult to learn and use. I expect the commercial demand for web devs is going to slowly decline for these college-aged learners as the internet transitions, so maybe it’s ok if they don’t become experts in web development.

BinaryMachine 3 hours ago

Thank you for this post.

I sometimes use LLMs to understand a step-by-step mathematical process (this can be hard to search Google for). I believe getting a broad idea by asking someone is the quickest way to understand any sort of business logic related to the project.

I enjoyed your examples, and maybe there should be a dedicated site just for examples of web-related code where an LLM generated the logic. The web changes constantly, and I wonder how these LLMs will keep up with the specs, specific browsers, frameworks, etc.

fimdomeio an hour ago

I'm the person that was copy-pasting school work in the '90s for things I wasn't interested in. I'm also the person who spent years learning things I was passionate about without an end goal in mind. The issue here is not AI, it's motivation.

infinite-hugs 2 hours ago

I certainly agree that copy-pasting isn't a replacement for teaching, but I can say I've had success learning coding basics by just asking Claude or GPT to explain the code output line by line.

csallen 2 hours ago

AI is an impediment to learning high-level programming languages. High-level programming languages are an impediment to learning assembly. Assembly is an impediment to learning machine code. Machine code is an impediment to learning binary.

  • gizmo686 an hour ago

    The difference is that all of those other technologies have a rock-solid abstraction layer. In my entire career, I have never encountered an assembler bug [0], and I have encountered one compiler bug (excluding a niche compiler that I was actively developing). Current generative AI technology is fundamentally not that.

    [0] which isn't that impressive, as I've done very little assembly.

  • lovethevoid 2 hours ago

    Who needs to learn how to read anyways, isn't everything just audiobooks now amiright?

Krei-se 3 hours ago

I like AI helping me fix bugs and look up errors, but I usually architect everything on my own, and I'm glad I can use it for the work I would've handed off to some coworker who does the lookups and builds a view or something with no connection back to the base system architecture.

So he's not wrong, you still have to ask the right questions. But with later models that think about what they do, this could become a non-issue sooner than some of those breathing in relief expect.

We are bound to a maximum of around 8 working units in our brains; a machine is not. Once AI builds a structure graph like Wikidata next to the attention vectors, we are so done!

userbinator 2 hours ago

AI is an impediment to learning.

Also ask yourself this the next time you feel compelled to just take an AI solution: what value are you providing, if anyone can simply ask the AI for the solution? The less your skills matter, the more easily you'll be replaced.

Buttons840 2 hours ago

Does anyone else feel that web technologies are the least worthy of mastery?

I mean, a lot of effort has gone into making poorly formed HTML work, JavaScript has some really odd quirks that will never be fixed because of backwards compatibility, and every computer runs a slightly different browser. True mastery of such a system sounds like a nightmare. Truly understanding different browsers, and CSS, and raw DOM APIs: none of this feels worth my time. I've learned Haskell even though I'll never use it, because there are useful universal ideas in Haskell I can use elsewhere. The web stack is a pile of confusion; there's no great insight that follows from learning how JavaScript's if-statements work, just more confusion.
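
A few of the quirks I mean, all frozen in place by backwards compatibility:

    typeof null;              // "object" -- a bug from 1995, now permanent
    NaN === NaN;              // false; NaN is the only value not equal to itself
    0.1 + 0.2 === 0.3;        // false (IEEE 754 floating point)
    ["10", "9", "8"].sort();  // ["10", "8", "9"] -- lexicographic by default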

If there was ever a place where I would just blindly use whatever slop a LLM produces, it would be the web.

elicksaur 3 hours ago

If it’s true for the beginner level, then it’s true for every level, since we’re always learning something.

kgeist an hour ago

>API calls from one server-side API endpoint to another public API endpoint on localhost:3000 (instead of just importing a function and calling it directly).

>LLMs will obediently provide the solutions you ask for. If you’re missing fundamental understanding, you won’t be able to spot when your questions have gone off the rails.

This made me think: most of the time, when we write code, we have no idea (and don't really care) what kind of assembly the compiler will generate. If a compiler expert looked at the generated assembly, they'd probably say something similar: "They have no idea what they're doing. The generated assembly shows signs of a fundamental misunderstanding of the underlying hardware," etc. I'm sure most compiled code could be restructured or optimized in a much better, more "professional" way, and looks like a total mess to an assembly expert, but no one has really cared for at least two decades now.

At the end of the day, as long as your code does what you intend and performs well, does it really matter what it compiles to under the hood?

Maybe this is just another paradigm shift (forgive me for using that word) where we start seeing high-level languages as just another compiler backend, except this time the LLM is the compiler and natural human language is the programming language.

  • jrflowers an hour ago

    > does it really matter what it compiles to under the hood?

    The example you quoted could trigger a DDoS if a page using that code got popular: every incoming request makes the server fire a second request at itself, so it amplifies its own load.

    • kgeist an hour ago

      I'm not claiming the code is perfect; early compilers that generated assembly often produced inefficient code as well. I hope LLMs' coding abilities will improve over time. For now, I'm not ready to use LLMs beyond basic prototyping myself.

tetha 2 hours ago

I very much agree with this.

If I have a structured code base and I understand the patterns and the errors to look out for, something like Copilot is useful for banging out code faster. Maybe the frameworks suck, or the language could be better designed to require less code, but eh. A million dollars would be nice to have too.

But I do notice that colleagues use it to get stuff done without understanding the concepts. Or, in my own projects where I'm trying to learn things, Copilot just generates code all over the place that I don't understand, and that limits my ability to actually work with that engine or code base. Yes, struggling through it takes longer, but it ends in a deeper understanding.

In such situations, I turn off the code generator and, at most, use the LLM as a rubber duck. For example: I'm looking at different ways to implement something in a framework, and approaches A, B, and C seem reasonable. Maybe B looks like a dead end, and C seems like overkill. This is where an LLM can offer decent additional input, on top of asking knowledgeable people in that field or other good devs.

ellyagg 2 hours ago

Or is learning web development an impediment to learning AI?

FpUser 30 minutes ago

Don't we already have enough self-certified prophets telling everyone how to do things "properly"? Nobody is pushing you to use LLMs. As for us, we'll figure out what works to our benefit.

manx 2 hours ago

Humanity was only able to produce one generation that knows how computers work.

menzoic 3 hours ago

Learning how to use ̶C̶a̶l̶c̶u̶l̶a̶t̶o̶r̶s̶ LLMs is probably the skill we should be focusing on.

monacobolid 2 hours ago

Web development is an impediment to learning web development.

seydor 3 hours ago

I don't think the thing called 'modern web development' is defensible anyway

jMyles 2 hours ago

I've been engineering web technologies for almost two decades (mostly backend, but plenty of full stack too). Not the world's greatest sage, maybe, but I have some solid contributions to open-source web frameworks, and I've worked on projects of all different scales, from startups to enterprise media outfits.

And I have this to say: any impediment to learning web development is probably a good thing, insofar as the most difficult stumbling block isn't the learning at all, but the unlearning. The web and its tangential technologies are not only ever-changing, but ever-accelerating in their rate of change. Anything that helps us rely less on what we've learned in the past, and more on what we learn right in the moment of implementation, is a boon to great engineering.

None of the greatest engineers I've worked with actually knows how to do anything until they're about to do it, and they have the fitness to forget what they've learned immediately, so that they have to look at the docs again next time.

LLMs are lubricating that process, and it's wonderful.

meiraleal 3 hours ago

Code School employee says: AI is an impediment to learning web development

camillomiller 3 hours ago

> For context, almost all of our developers are learning web development (TypeScript, React, etc) from scratch, and have little prior experience with programming.

To be fair, having non-programmers learn web development like that is even more problematic than using LLMs. What about teaching actual web development, HTML + CSS + JS, so they have the fundamentals to control LLMs in the future?
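
Even a first exercise as small as this (a hypothetical sketch, using nothing but standard browser APIs) teaches more about what the frameworks are abstracting than a generated component does:

    // A framework-free first exercise: the raw DOM APIs that React
    // and friends are built on.
    const button = document.createElement("button");
    button.textContent = "Click me";
    button.style.padding = "0.5rem 1rem";     // CSS from script, via the style API
    button.addEventListener("click", () => {  // plain event handling
      document.body.append("clicked!");
    });
    document.body.append(button);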

blackeyeblitzar 3 hours ago

Almost every student I know now cheats on assignments using ChatGPT. It’s sad.

  • synack 3 hours ago

    If all the students are using ChatGPT to do the assignments and the TA is using ChatGPT to grade them, maybe it's not cheating; maybe that's just how things are now.

    It's like using a calculator for your math homework. You still need to understand the concepts, but the details can be offloaded to a machine. I think the difference is that the calculator is always correct, whereas ChatGPT... not so much.

    • grey-area 3 hours ago

      Yes, and that's why it's nothing like using a calculator. If the LLM had a concept of right or wrong, or knew when it was wrong, that would be entirely different.

      As it is, you're getting a smeared average of every bit of similar code it was exposed to: likely wrong, inefficient, and certainly not a good tool for learning at present. Hopefully they'll improve somehow.

    • bigstrat2003 2 hours ago

      It is both cheating and the way things are now. Your calculator example is odd, though, because when you're learning the math that calculators can do, using a calculator is cheating. Nobody would say we should let third graders bring a calculator instead of learning to do arithmetic; it defeats the purpose of the learning.

    • Rocka24 2 hours ago

      We are now in a world where the common layman can get their hands on a GPT (one predicted to soon be the equivalent of a PhD in intelligence), rather than that power being limited to the person scrolling Hugging Face and churning out custom-built models.

      I think it'll be pretty interesting to see how this changes regular blue-collar or secretarial work. Will the next wave of startups be just fresh grads looking for B2B ideas that eliminate the common person?

cush 3 hours ago

I find it particularly ironic when someone who goes to a top university with $70k/yr tuition attempts to gatekeep how learning should be done. LLMs are just another tool. They're accessible to everyone and are an absolute game-changer for learning.

Folks in an academic setting especially will sneer at those who don't build everything from first principles. Go back 20 years, and the same article would read "IDEs are an impediment to learning web development".

  • wsintra2022 3 hours ago

    Hmm, not so sure. If you don't know or understand some web development fundamentals, then having a friend who just writes the code for you, and who sometimes makes up wrong code and presents it as the right code, can definitely be a hindrance to learning rather than a help.

  • o11c 28 minutes ago

    The problem is that the AI always gives wrong answers that are indistinguishable from correct answers if you don't already know what you're doing.

    And if you know what you're doing, you don't need the AI.

dickerd0 2 hours ago

So what? The internet is a cesspool. It's been ruined forever. Oh no, we have poorly designed websites because of AI, on top of everything else.

j45 2 hours ago

This article feels out to lunch.

If you use AI to teach you HTML and programming concepts first, and then to support you in using them, that is learning.

Having AI generate an answer that doesn't satisfy you usually means the prompt could use improvement. In that case, the prompter (and perhaps the author) may not know the subject well enough.

wslh 3 hours ago

As Python is an impediment to learning assembler?

MicolashKyoka an hour ago

sure, let's hear what the "head of engineering" of an academic club with "9-12" intern-level devs, who himself has barely 2y of experience as a dev, thinks about the industry. i mean, it's fine to have an opinion, and i'm not particularly hating on the guy, but why is it given any credence and making the front page? are people this afraid?

llms are a tool; if you can't make them work for you or learn from using them, sorry, but it's just a skill/motivation issue. if the interns are making dumb mistakes, then you need to guide them better: chop the task up into smaller segments and contextualize it for them as needed.