faizshah 41 minutes ago

The copy-paste programmer will always be worse than the programmer who builds a mental model of the system.

LLMs are just a faster, more wrong version of the copy-paste Stack Overflow workflow; now you don’t even need to ask the right question to find the answer.

You have to teach students and new engineers never to commit a piece of code they don’t understand. If you stop at “I don’t know why this works,” you will never get out of the famous multi-hour debugging loop you get into with LLMs, or the multi-day build-debugging loop everyone has been through.

The real harm LLMs do to learning is that you don’t need to ask the right question to find your answer. That’s good if you already know the subject, but if you don’t, you’re not getting that reinforcement in your short-term memory, and you’ll find that things learned through LLMs aren’t retained as long as things you worked out yourself.

  • nyrikki 15 minutes ago

    It is a bit more complicated, as it can be harmful for experts too, and the more reliable it gets, the more problematic it becomes.

    Humans suffer from automation bias and other cognitive biases.

    Anything that causes disengagement from a process can be harmful, especially for long-term maintainability and architectural erosion, which is mostly what I actively watch for to avoid complacency with these tools.

    But avoiding it takes active effort for all humans.

    IMHO, writing the actual code has always been less of an issue than focusing on domain needs, details, and maintainability.

    As distrusting automation is unfortunately one of the best defenses against automation bias, I try to balance encouraging junior engineers to use tools that boost productivity with making sure they still maintain ownership of the delivered product.

    If you use the red-green-refactor method, avoiding generative tools for the test and refactor steps seems to work.

    But selling TDD in general can be challenging, especially given Holmström's theorem and people's tendency to write implementation tests instead of focusing on domain needs.

    It is a bit of a paradox that the better the tools become, the higher the risk. I would encourage people to try the above; just don't make the mistake of copying the prompts required to get from red to green into the domain tests, as there is a serious risk of coupling to the prompts.
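
    Concretely, here is a minimal sketch of that split (the applyBulkDiscount function and its 10%-off-at-10-units rule are invented purely for illustration): the tests are written by hand in domain terms, and only the implementation body is what you might let a generative tool produce.

        import assert from "node:assert";

        // Green step: the implementation body is the part you might let a
        // generative tool fill in. applyBulkDiscount and its discount rule
        // are made up for this example.
        function applyBulkDiscount(unitPrice: number, quantity: number): number {
          const subtotal = unitPrice * quantity;
          return quantity >= 10 ? subtotal * 0.9 : subtotal;
        }

        // Red step: hand-written tests that state the domain need, kept free
        // of any prompt text so they survive a regenerated implementation.
        assert.strictEqual(applyBulkDiscount(5, 4), 20);   // below 10 units: no discount
        assert.strictEqual(applyBulkDiscount(10, 10), 90); // 10 units: 10% off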

    We will see if this works for me long term, but I do think beginners refactoring manually, with thought, could be an accelerator.

    But only if they intentionally focus on learning the why over time.

  • stonethrowaway 17 minutes ago

    If engineers are still taught engineering as a discipline then it doesn’t matter what tools they use to achieve their goals.

    If we are calling software developers who don’t understand how things work, and who can get away with not knowing how things work, engineers, then that’s a separate discussion of profession and professionalism we should be having.

    As it stands there’s nothing fundamentally rooted in software developers having to understand why or how things work, which is why people can and do use the tools to get whatever output they’re after.

    I don’t see anything wrong with this. If anyone does, then feel free to change the curriculum so students are graded and tested on knowing how and why things work the way they do.

    The pearl clutching is boring and tiresome. Where required, we have people who must be licensed to perform certain work, and if they fail to perform at that level, their license is taken away. If anyone does unlicensed work, they are held accountable and will not receive any insurance coverage due to the lack of a license; they can be held criminally liable. This is why some countries go to the extent of requiring a license before you can call yourself an engineer at all.

    So where engineering, actual engineering, is required, we already have protocols in place that ensure things aren’t done on a “trust me bro” level.

    But for everyone else, they're not held accountable whatsoever, and there's nothing wrong with using whatever tools you need or want to use, right or wrong. If I want to butt-splice a connector, I'm probably fine. But if I want to wire in a 3-phase breaker on a commercial property, I'm either looking at getting it done by someone licensed, or I'm looking at jail time if things go south. And engineering is no different.

yumraj 13 minutes ago

I’ve been thinking about this, since LLMs helped me get something done quickly in languages/frameworks that I had no prior experience in.

But I realized a few things. While they are phenomenally great for starting new projects and for small code bases:

1) One needs to know programming/software engineering in order to use these well. Otherwise, blind copying will hurt, and you won’t know what’s happening when the code doesn’t work.

2) Production code is a whole different problem that one will need to solve. Copy-pasters will not know what they don’t know, and what they need to know, to write production-quality code.

3) Maintenance of code, adding features, etc. is going to become n times harder the more of the code is LLM-generated. Even large context windows will start failing, and hallucinations may screw things up without one even realizing.

4) Debugging and bug fixing, related to the maintenance point above, are going to get harder.

These problems may get solved, but till then:

1) we’ll start seeing a lot more shitty code

2) the gap between great engineers and everyone else will become wider

  • tomrod 7 minutes ago

    A big part of the solution to this will be more, more focused, and more efficient QA.

    Test-driven development can inherently be cycled until correct (that's basically equivalent to what a Generative Adversarial Network does under the hood anyhow).

    I heard a lot of tech shops gutted their QA departments. I view that as a major error on their part, provided QA folks stay current on modern tooling (not only GenAI) and don't try to do everything manually.

infinite-hugs a few seconds ago

Certainly agree that copy-pasting isn’t a replacement for teaching, but I can say I’ve had success learning coding basics by just asking Claude or GPT to explain the code output line by line.

boredemployee an hour ago

Well, I must admit, LLMs made me lose the joy of learning programming and made me realize I like to solve problems.

There was a time I really liked to go through books and documentation, learn and run the code, etc., but those days are gone for me. I prefer to enjoy my free time and go to the gym now.

  • SoftTalker an hour ago

    I'm the same, and I think it's a product of getting older and being more and more acutely aware of the passage of time and not wanting to spend time on bullshit things. Nothing to do with LLMs. I still like solving problems in code but I no longer get any joy from learning yet another new language or framework to do the same things we've been doing for the past 30 years, but with a different accent.

  • mewpmewp2 40 minutes ago

    It is kind of the opposite for me. I do a lot more side projects now, because I enjoy building, and I enjoy using LLMs as a multiplier so I build more in the same amount of time. I think integrating LLMs into your workflow is itself problem solving, and an exciting, novel way to problem-solve at that. It gets my imagination really running, and it is awesome to be able to exchange ideas back and forth and see things from more perspectives, since an LLM can give me more varied points of view than I alone could have come up with.

    • aerhardt 12 minutes ago

      I am in your camp. LLMs have made everything better for me, both learning and producing.

  • atomic128 12 minutes ago

    I observe this sentiment everywhere: in my coworkers and the people I interact with in engineering communities. A process of hollowing out and loss of motivation, a loss of meaning and purpose.

    Some may ruminate and pontificate and lament the loss of engineering dignity.

    But some realize this is an inevitable result of human nature. They accept that the minds of their fellow men will atrophy through disuse, that people will become dependent and unable to think. A fruitful stance is to make an effort to profit from the downfall, instead of complaining impotently. See https://news.ycombinator.com/item?id=41733311

    There's also an aspect of tragic comedy. You can tell that people are "dazzled" by the output of the LLM, accepting its correctness because it looks so smart. Here is an example from yesterday. This is totally incorrect yet the commenter pasted it into the thread to demonstrate the LLM's understanding: https://news.ycombinator.com/item?id=41747089

    • bongodongobob a minute ago

      It's called getting older. You guys are so dramatic about the LLM stuff lol

lofaszvanitt an hour ago

When a person is using LLMs for work and the result is abysmal, that person must go. So easy. LLMs will make people dumber in the long term, because the machine thinks for them and they will readily accept whatever result it gives, as long as it works. This will have horrifying results in 1-2 generations. Just like social media killed people's attention span.

But of course we don't need to regulate this space. Just let it go, all in wild west baby.

btbuildem 6 minutes ago

I disagree with the premise of the article -- for several reasons. You could argue that an LLM-based assistant is just a bigger footgun, sure. Nothing will replace a teacher who explains the big picture and the context. Nothing will replace learning how to manage, handle and solve problems. But having a tireless, nimble assistant can be a valuable learning tool.

Web development is full of arbitrary, frustrating nonsense, layered on and on by an endless parade of contributors who insist on reinventing the wheel while making it anything but round. Working with a substantial web codebase often feels like wading through a utility tunnel flooded with sewage. LLMs are actually a fantastic hot blade that cuts through most of the self-inflicted complexities. Don't learn webpack, why would you waste time on that. Grunt, gulp, burp? Who cares, it's just another in a long line of abominations careening towards a fiery highway pileup. It's not important to learn how most of that stuff works. Let the AI bot churn through that nonsense.

If you don't have a grasp on the basics, using an LLM as your primary coding tool will quickly leave you with a tangle of incomprehensible, incoherent code. Even with solid foundations and experience, it's very easy to go just a little too far into the generative fairytale.

But writing code is a small part of software development. Reading code doesn't seem to get talked about as much, but it's the bread and butter of any non-solo project. It's also a very good way to learn -- look at how others have solved a problem. Chances that you're the first person trying to do X are infinitesimally small, especially as a beginner. Here, LLMs can be quite valuable to a beginner. Having a tool that can explain what a piece of terse code does, or why things are a certain way -- I would've loved to have that when I was learning the trade.

Rocka24 34 minutes ago

I strongly disagree. I was able to learn so much about web development by using AI; it streamlines the entire knowledge gathering and dissemination process. By asking for general overviews, then poking into the specifics of why things work the way they do, it's possible to get extremely functional, practical knowledge of almost any area of programming. For the driven and ambitious hacker, LLMs are practically invaluable for self-learning. I think what you're describing is simply the classic self-inflicted malady of laziness.

  • lovethevoid 20 minutes ago

    What have you learned about web development using AI that skimming the MDN docs couldn't net you?

    • Rocka24 10 minutes ago

      Well, the issue isn't about acquiring knowledge in general. I think so far in my learning journey I've come to realize that "practical learning" is much better than learning in the hopes that something will be useful. For instance, almost everyone in the American education system at some point was forced to memorize that one mol of gas occupies 22.4 L at STP, but almost no one will ever use that knowledge again.

      Going through the actual real-world issues of web development with an LLM at your side that you can query about any issue is infinitely more valuable than taking a course in web development, imo, because you actually learn how to DO the things, instead of getting a toolbox half of whose items you never use and a quarter of which you have no idea how to use. I strongly support learning by doing, and I also think the education system should be changed in support of that idea.

steve_adams_86 an hour ago

I’ve come to the same conclusion with regard to my own learning, even after 15 years of doing this.

When I want a quick hint for something I understand the gist of, but don’t know the specifics, I really like AI. It shortens the trip to Google, more or less.

When I want a cursory explanation of some low-level concept I want to understand better, I find it helpful to get pushed in various directions by the AI. Again, this is mostly replacing Google, though it’s slightly better.

AI is a great rubber duck at times too. I like being able to bounce ideas around and see code samples in a sort of evolving discussion. Yet AI starts to show its weaknesses here, even as context windows and model quality have evidently ballooned. This is where real value would exist for me, but progress seems slowest.

When I get an AI to straight up generate code for me I can’t help but be afraid of it. If I knew less I think I’d mostly be excited that working code is materializing out of the ether, but my experience so far has been that this code is not what it appears to be.

The author’s description of ‘dissonant’ code is very apt. This code never quite fits its purpose or context. It’s always slightly off the mark. Some of it is totally wrong or comes with crazy bugs, missed edge cases, etc.

Sure, you can fix this, but it feels a bit too much like using the wrong tool for the job and then correcting it after the fact. Worse still, in the context of learning, you’re getting false-positive signals all the time that X or Y works (the code ran!!), when in reality it’s terrible practice, or not working for the right reasons, or not doing what you think it does.

The silver lining of LLMs and education (for me) is that they demonstrated something to me about how I learn and what I need to do to learn better. Ironically, this does not rely on LLMs at all, but almost the opposite.

orwin 41 minutes ago

For people like me who mostly do backend/network/systems development and who disagree about how helpful LLMs are (basically a waste of time for anything other than rubber ducking, writing test cases, and autocomplete): LLMs can write a working front-end page/component in 10 seconds. Not an especially well-designed one, but good enough. I find they especially shine at the HTML/CSS parts. They cannot write an FSM on their own, so when I write a page I still write the states, actions, and the reducer, but then I can generate the rest, and it is really good.
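
To show the split, here is a minimal sketch of the hand-written FSM core (the FetchState/FetchAction/fetchReducer names and the fetch example are hypothetical, not from a real project); the HTML/CSS rendering each state is the part I'd let the LLM generate:

    // Hand-written FSM core: states, actions, and the reducer.
    type FetchState =
      | { status: "idle" }
      | { status: "loading" }
      | { status: "success"; data: string }
      | { status: "error"; message: string };

    type FetchAction =
      | { type: "FETCH" }
      | { type: "RESOLVE"; data: string }
      | { type: "REJECT"; message: string };

    function fetchReducer(state: FetchState, action: FetchAction): FetchState {
      switch (action.type) {
        case "FETCH":
          return { status: "loading" };
        case "RESOLVE":
          return { status: "success", data: action.data };
        case "REJECT":
          return { status: "error", message: action.message };
      }
    }

    // Example transition; the markup rendering each status is what gets generated.
    const next = fetchReducer({ status: "idle" }, { type: "FETCH" }); // -> { status: "loading" }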

  • dopp0 28 minutes ago

    Which LLM are you using for those frontend use cases? ChatGPT? And do you ask in the prompt for a framework such as Tailwind?

wkirby 9 minutes ago

The reason I am a software engineer — why it keeps me coming back every week — is the satisfying click when something I didn’t understand becomes obvious. I’ve talked to a lot of engineers over the last 15 years of doing this, and most of them possess some version of the same compulsion. What makes good engineers tick is, imo, a tenacity and knack for solving puzzles. LLMs are useful when they let you get to the meat of the problem faster, but as the article says, they’re a hindrance when they are relied on to solve the problem. Knowing the difference is hard; a heuristic I work on with my team is “use an LLM if you already know the code you want to write.” If you don’t already know the right answer, you won’t know if the LLM is giving you garbage.

gwbas1c 13 minutes ago

All the mistakes Ben describes smell like typical noob / incompetent programmer mistakes.

All the LLM is doing is helping people make the same mistakes... faster.

I really doubt that the LLM is the root cause of the mistake, because (pre-LLM) I came across a lot of similar mistakes. The LLM doesn't magically understand the problem; instead, a noob / incompetent programmer misapplies the wrong solution.

  • mdhb 7 minutes ago

    The examples he gives were explicitly called out as mistakes you wouldn’t normally make as a beginner because they are so esoteric, and I don’t disagree with him at all on that one.

xnx an hour ago

"Modern" web development is so convoluted I'm happy to have a tool to help me sort through the BS and make something useful. In the near future (once the thrash of fad frameworks and almost-databases has passed) there may be a sane tech stack worth knowing.

  • lolinder an hour ago

    This exact comment (with subtle phrasing variations) shows up in every article that includes "web" in the title, but I feel like I'm living in an alternate universe from those who write comments like these. Either that or the comments got stuck in the tubes for a decade and are just now making it out.

    My experience is that React is pretty much standard these days. People create new frameworks still because they're not fully satisfied with the standard, but the frontend churn is basically over for anyone who cares for it to be. The tooling is mature, IDE integration is solid, and the coding patterns are established.

    For databases, Postgres. Just Postgres.

    If you want to live in the churn you always can and I enjoy following the new frameworks to see what they're doing differently, but if you're writing this live in 2024 and not stuck in 2014 you can also just... not?

    • zelphirkalt 25 minutes ago

      React, and the frameworks based on it, being used mostly for websites where none of that stuff is needed in the first place is part of what is wrong with frontend development.

      • lolinder 14 minutes ago

        Then write your websites JavaScript-free or with minimal vanilla JS, no frameworks (much less framework churn) needed. That's been possible since the foundation of the web, and is nearly unchanged to this day for backwards compatibility reasons.

  • grey-area an hour ago

    You don't have to use 'Modern Frameworks' (aka an ahistorical mish-mash of JavaScript frameworks) to do web development at all. I'm really puzzled as to why people refer to this as modern web development.

    If you're looking for a sane tech stack there are plenty of languages to use which are not javascript and plenty of previous frameworks to look at.

    Very little JavaScript is needed for a useful and fluid front-end experience, and the back end can be whatever you like.

    • xnx 5 minutes ago

      Absolutely true. All technologies that previously existed (e.g. PHP3 + MySQL) still exist. Unfortunately, if you're looking to make use of other people's libraries, it is very difficult to find them for "obsolete" stacks.

    • zelphirkalt 23 minutes ago

      Well, I wish more developers had your insight and could make it heard at their jobs. Then the web would be in a better state than it is today.

      • lovethevoid 13 minutes ago

        The vast majority of the web's downfalls stem from advertising and tracking. Unless you're proposing a way to remove advertising, the problems will remain no matter what tech the developers opt for.

        • mdhb 4 minutes ago

          You are conflating two entirely different issues. Both are true, but neither is at the expense of the other.

  • mplewis 36 minutes ago

    It’s only been thirty years, but keep waiting. I’m sure that solution is just around the corner for you.

dennisy an hour ago

I feel this idea extends past just learning. I worry that using LLMs to write code is making us all lazy, unfocused thinkers.

I personally have banned myself from using any in-editor assistance where you just copy the code directly over. I do still use ChatGPT, but without copy-pasting any code, more along the lines of how I would use search.

  • steve_adams_86 an hour ago

    I do this as well. I have inline suggestions enabled with Supermaven (I like the tiny, short, fast suggestions it creates), but otherwise I’m really using LLMs to validate ideas, not actually generate code.

    I find Supermaven helps keep me on track because its suggestions are often in line with where I was going, rather than branching off into huge snippets of slightly related boilerplate. That’s extremely distracting.

    • dennisy 43 minutes ago

      Yes! The other point is that it's also just distracting, while you're thinking through a hard problem, to have code popping up that you inevitably end up reading, even if you already know what you planned to write.

      Just had a glance at Supermaven, and I'm not sure why it would be better; the site suggests it's a faster Copilot.

xyst 13 minutes ago

Anybody remember the days of Macromedia? I think it was Dreamweaver that spat out WYSIWYG trash from people who didn’t know better.

For a period of time there was a whole segment of development work devoted to cleaning up this slop or just redoing it entirely.

The AI-generated slop reminds me of that era.

csallen 21 minutes ago

AI is an impediment to learning high-level programming languages. High-level programming languages are an impediment to learning assembly. Assembly is an impediment to learning machine code. Machine code is an impediment to learning binary.

  • lovethevoid 17 minutes ago

    Who needs to learn how to read anyways, isn't everything just audiobooks now amiright?

j45 a minute ago

This article feels out to lunch.

If you use AI to teach you HTML / programming concepts first, then to support you as you use them, that is learning.

Having AI generate an answer that doesn't satisfy you usually means the prompt could use improvement. In that case, the prompter (and perhaps the author) may not know the subject well enough.

BinaryMachine an hour ago

Thank you for this post.

I use LLMs sometimes to understand a step-by-step mathematical process (this can be hard to search for on Google). I believe getting a broad idea by asking someone is the quickest way to understand any sort of business logic related to the project.

I enjoyed your examples, and maybe there should be a dedicated site just for examples of web code where an LLM generated the logic. The web changes constantly, and I wonder how these LLMs will keep up with the specs, specific browsers, frameworks, etc.

manx 8 minutes ago

Humanity was only able to produce one generation who knows how computers work.

Krei-se an hour ago

I like AI helping me fix bugs and look up errors, but I usually architect everything on my own, and I'm glad I can use it for everything I would've handed off to some coworker who does the lookups and works on a view or something that has no connection back to the base system architecture.

So he's not wrong: you still have to ask the right questions. But with later models that think about what they do, this could become a non-issue sooner than some who are breathing a sigh of relief expect.

We are bound to a maximum of around 8 working-memory units in our brains; a machine is not. Once AI builds a structure graph like Wikidata next to the attention vectors, we are so done!

tetha 22 minutes ago

I very much agree with this.

If I have a structured code base and understand the patterns and the errors to look out for, something like Copilot is useful to bang out code faster. Maybe the frameworks suck, or the language could be better and require less code, but eh. A million dollars would be nice to have too.

But I do notice that colleagues use it to get some stuff done without understanding the concepts. Or in my own projects, where I'm trying to learn things, Copilot just generates code all over the place that I don't understand. And that limits my ability to actually work with that engine or code base. Yes, struggling through it takes longer, but it ends in a deeper understanding.

In such situations, I turn off the code generator and at most use the LLM as a rubber duck. For example: I'm looking at different ways to implement something in a framework, and A, B, and C all seem reasonable. Maybe B looks like a dead end, and C seems like overkill. This is where an LLM can offer decent additional input, on top of asking knowledgeable people in that field or other good devs.

ellyagg 27 minutes ago

Or is learning web development an impediment to learning AI?

elicksaur an hour ago

If it’s true for the beginner level, then it’s true for every level, since we’re always learning something.

monacobolid 17 minutes ago

Web development is an impediment to learning web development.

menzoic 44 minutes ago

Learning how to use ̶C̶a̶l̶c̶u̶l̶a̶t̶o̶r̶s̶ LLMs is probably the skill we should be focusing on.

cush an hour ago

I find it particularly ironic when someone who goes to a top university with $70k/yr tuition attempts to gatekeep how learning should be. LLMs are just another tool to use. They're ubiquitously accessible to everyone and are an absolute game-changer for learning.

Folks in an academic setting particularly will sneer at those who don't build everything from first principles. Go back 20 years, and the same article would read "IDEs are an impediment to learning web development"

  • wsintra2022 an hour ago

    Hmm, not so sure. If you don’t know or understand web development fundamentals, having a friend who just writes the code for you, and sometimes makes up wrong code and presents it as right, can definitely be a hindrance to learning rather than a help.

seydor 44 minutes ago

I don't think the thing called 'modern web development' is defensible anyway

meiraleal 36 minutes ago

Code School employee says: AI is an impediment to learning web development

camillomiller 41 minutes ago

> For context, almost all of our developers are learning web development (TypeScript, React, etc) from scratch, and have little prior experience with programming.

To be fair, having non-programmers learn web development like that is even more problematic than using LLMs. What about teaching actual web development, like HTML + CSS + JS, in order to have the fundamentals to control LLMs in the future?

wslh an hour ago

As Python is an impediment to learning assembler?

blackeyeblitzar an hour ago

Almost every student I know now cheats on assignments using ChatGPT. It’s sad.

  • synack an hour ago

    If all the students are using ChatGPT to do the assignments and the TA is using ChatGPT to grade them, maybe it's not cheating, maybe that's just how things are now.

    It's like using a calculator for your math homework. You still need to understand the concepts, but the details can be offloaded to a machine. I think the difference is that the calculator is always correct, whereas ChatGPT... not so much.

    • grey-area an hour ago

      Yes, that's why it's nothing like using a calculator. If the LLM had a concept of right or wrong, or knew when it was wrong, that would be entirely different.

      As it is, you're getting a smeared average of every bit of similar code it was exposed to: likely wrong, inefficient, and certainly not a good tool for learning at present. Hopefully they'll improve somehow.

    • Rocka24 27 minutes ago

      We are now in a world where the common layman can get their hands on a GPT (one predicted to soon be equivalent to a PhD in intelligence), instead of just the person scrolling Hugging Face and churning out custom-built models.

      I think in the future it'll be pretty interesting to see how this changes regular blue-collar or secretarial work. Will the next wave of startups be just fresh grads looking for B2B ideas that eliminate the common person?