iammjm 3 days ago

I feel for people who say that "AI has taken the fun out of programming", but at the same time I think to myself: is it about doing, or is it about getting things done? Like I imagine someone in the past who loved their job walking each night through their city, lighting up the gas-powered street lights. And then one day someone else implemented electric street lights, and the first person lost the job they loved. But in the end, it's about providing light to the city streets, no? For the great majority of work, it is not about fun, but about doing something other people need or want. For me, AI allows me to realize my ideas, and get things done. Some of it might be good, some of it might be bad. I put in at least as much time, attention and effort as the "real" programmers do, but my time goes into thinking and precisely defining what I want, cutting it up into smaller logical modules, testing, identifying and fixing bugs, iterating all the time.

  • al_be_back 3 days ago

    >> doing, or is it about getting things done

    Who says the thing is done? There is a massive danger now, with the sheer amount of complexity and speed brought by AI: it's increasingly hard to verify the work / do proof-of-work.

    >> AI allows me to realize my ideas

    Sure, for a personal/pet project. However, when working for a customer/client, they have their own ideas, needs, and wants, and usually their own users and shareholders to satisfy - they need proof.

    >> lighting up the gas-powered street lights

    OK, this metaphor may well be loved by AI companies, but it doesn't actually work on so many levels. For one, AI (as actually provided) is not electricity or a physical system, a brain, or a mind; it's software (I use it very selectively). Second, the job being done (lighting, or coding) is ultimately to produce the desired outcome for whoever ordered it - a solution to a problem. Failing that, it's just work and wages for the worker but no effective solution (lighting the dark side of the moon, kinda).

    I agree with the OP: as system complexity goes up, so must the ability to keep up.

  • zahlman 3 days ago

    > Like I imagine someone in the past who loved their job walking each night through their city, lighting up the gas-powered street lights. And then one day someone else implemented electric street lights, and the first person lost the job they loved. But in the end, it's about providing light to the city streets, no?

    Lighting or extinguishing a gas lamp does not allow for creative expression.

    Writing a program does.

    The comparison is almost offensive.

    > For the great majority of work, it is not about fun, but about doing something other people need or want.

    Some of us write code for reasons that are not related to employment. The idea that someone else might find the software useful is speculative, and perhaps an expression of hubris; it's not the source of motivation.

    > I put in at least as much time, attention and effort as the "real" programmers do, but my time goes into thinking and precisely defining what I want, cutting it up into smaller logical modules, testing, identifying and fixing bugs, iterating all the time.

    So does the time of the "real programmers".

    • itsdrewmiller 3 days ago

      Ok but then none of this is about you. People still make art even though artists don’t make any money, and that is wonderful. Improving productivity for actual work could let everyone have more time for creative self expression. Hasn’t seemed to work out that way in practice but maybe this time is different!

      • fl0id 2 days ago

        Under any current/capitalist system it won't be different, by design: managers/capital want to increase their gains, which contradicts letting you keep the gains.

    • aetherspawn 3 days ago

      Not everyone needs creative expression to enjoy their job, sometimes it’s about the process (sales people, mechanics, etc)

      • agumonkey 3 days ago

        Somehow mechanics share the problem-solving aspect of programming.

  • glenstein 3 days ago

    >For the great majority of work, it is not about fun, but about doing something other people need or want

    The essence of this, I think, is that a sense of craftsmanship and appreciation for craft often goes hand in hand with the ethos of learning and understanding what you are working with.

    So there is the issue of who rightly deserves to get the satisfaction out of getting things done. But there's also the fact that satisfaction goes hand in hand with craft, with knowledge. And that informs a perspective of being able to do things.

    I finally read Adrift, 76 Days at Sea, a fantastic book about surviving in a life raft while drifting across the ocean. But the difference between life and death was an individual with an abundance of practical survival, sailing, and navigation knowledge. So there's something to the idea of valuing the ability to call on hard-earned deep knowledge, and a relationship to knowledge that doesn't abstract it away.

    Almost paralleling questions of hosting your own data or entrusting it to centralized services.

    • dangus 3 days ago

      Craft is in the eye of the beholder.

      I’ve never even been able to make a mobile app before. My skillset was just a bit too far off, and my background more in the backend.

      Now I have a complete app thanks to AI. And I do feel a sense of accomplishment.

      For some people building furniture from IKEA is an accomplishment. But a woodworker building an IKEA piece isn’t going to feel great about it.

      It sounds like the person who made this repo didn’t need help but used the help anyway and had a bad time.

      • jackdoe 3 days ago

        > It sounds like the person who made this repo didn’t need help but used the help anyway and had a bad time.

        tbh, it would've taken me 10x the time. The docs are not very obvious, the RP2350 is fairly new, and its RISC-V side is not used as much and is an afterthought. If I was writing it for ARM it would've been much easier, as the ARM SWD docs are very clear.

        I am also new to the pico world.

        It is not easy to make myself do something when I know it's going to take 10 times longer and it's going to be 10 times harder, even if I know I will feel 10 times better.

        You know when they say "find what is play for you and work for others"? Well...

        • dangus 3 days ago

          Well, for what it's worth (maybe nothing), I think you can feel relatively good about your accomplishment.

          The technical leader who essentially dictated to me how to build one of my recent deliverables, down to nearly the exact architecture, was basically treating me like an AI. If they didn't have that deep knowledge, I would have also taken 10x longer to arrive at the endpoint. I followed their architecture almost exactly, and due to their much deeper knowledge I encountered very few issues with that development process. Had I been on my own, I would have probably tried multiple things that simply didn't work.

          That person also has to be a little bit willfully ignorant about the code that I am going to produce. They don't know what I'm going to write or if it's going to suck, and maybe they won't even understand it because it's spaghetti. And they won't actually have the time to fix it because they have a zillion management-level priorities and multiple layers of reporting chain below them.

          Is this AI world kind of shitty and scary, in how it might just screw our industry over and be bad for the world? It might be. We might be like the last factory workers before Ford Motor Company goes from 100,000 workers on the line to 10,000 or 1,000.

          But like every cordless drill given to engineers, it's tough not to use it.

      • allenu 3 days ago

        > I’ve never even been able to make a mobile app before. My skillset was just a bit too far off, and my background more in the backend.

        > Now I have a complete app thanks to AI. And I do feel a sense of accomplishment.

        AI is such an existential threat to many of us since we value our unique ability to create things with our skills. In my opinion, this is the source of immediate disgust that a lot of people have.

        A few months ago, I would've bristled at the idea that someone was able to write a mobile app with AI as that is my personal skillset. My immediate reaction when learning about your experience would've been, "Well, you don't really know how to do it. Unlike myself, who has been doing it for many, many years."

        Now that I've used AI a bit more, like yourself, I've been able to do things I wasn't able to do before. That's changed how I look at skills now, including my own. I've recognized that AI is devaluing our unique skillsets. That obviously doesn't feel great, but at the same time I don't know if there's much to be done about it. It's just the way things are now, so the best I can do is lean into the new tools available and improve in other ways.

        • dangus 3 days ago

          It's entirely possible that this will turn us all into much less of a special highly-compensated profession, and that would suck.

          Although when you say "AI is devaluing our unique skillsets," I think it's important to recognize that even without AI, it's not our skillsets that ever held value.

          Code is just a means to translate business logic into an automated process. If we had the same skillset but it couldn't make the business logic do the business, it would have no value.

          Maybe this is a pedantic distinction, but it's essentially saying that the "engineer" part of "software engineer" is the important bit - the fact that we are just using tools in our toolbox to get whatever "thing" needs to get done.

          At least for now, it seems like actually possessing a skillset is helpful and/or critical to using these tools. They can't handle large context, and even if that changes, it still seems to be extremely helpful to be able to articulate on a detailed level what you want the AI to develop for you.

          An analogy: imagine putting your customer directly in front of the AI to tell it what to build, versus putting a staff engineer or experienced product manager in front of it. The AI might be able to complete the project in both cases, but with that experienced person it's going to avoid a lot of pitfalls and work faster/better.

          This analogy reminds me of a real-life instance where I built something that someone above director level had spelled out exactly, essentially dictating the architecture I was to follow. They didn't really see my code, they might even hate my code; I am like an AI to them. And indeed, because they dictated a very good architecture to me, I was able to follow it almost blindly and ran into very few problems.

      • wartywhoa23 2 days ago

        > Now I have a complete app thanks to AI. And I do feel a sense of accomplishment.

        It's the sense of accomplishment of a toddler who sits on his daddy's shoulders while all the aunties around make round eyes and babble about how tall our boy is.

        • dangus 2 days ago

          Maybe my app isn't real enough for you but the payouts I get from Apple and Google seem to be in US dollars.

    • chickensong 3 days ago

      Re: craft vs git 'er dun, I don't think these have to be mutually exclusive. AI-boosted development is definitely different from the old ways, but the craft approach is a mindset and AI is just another tool.

      In some ways, I find that agent-assisted development opens doors to producing even higher quality code. Little OCD nitpicks, patterns that appear later in development, all the nice but not really necessary changes...these time-consuming refactors are now basically automated in 1-shot by an agent.

      People who rush to ship the minimum were writing slop long before LLMs. At least now we have these new tools to refactor the slop faster and easier than ever.

  • denysvitali 3 days ago

    I truly enjoy programming, but the most frustrating part for me was that I had many ideas and too little time to work on everything.

    Thanks to AI I can now work on many side projects at the same time and, most importantly (as you mentioned), just get stuff done quickly, most of the time with good enough (or sometimes excellent) results.

    I'm both amazed and a bit sad, but the reality is that my output has increased significantly - although the quality might have dropped a bit in certain areas.

    Time is limited, and if I can increase my results in the same way as the electric street lights, I can simply look back at the past and smile that I lived in a time where lighting up gas-powered street lights was considered a skill.

    As you perfectly put it, it's not about the process per se, it's about the result. And the result is that right now the lights are only 80% lit. In a few months or years we'll probably reach the threshold where the electric street lights are brighter than the gas-powered ones, and you'd be a fool to still light them up one by one.

    • ta12653421 2 days ago

      THIS

      8h of work, 1 or 2h of commute, then a little bit of self-care etc. - there is not much time to work on side projects, unfortunately. AI is a superbooster here, as it allows you to move forward much quicker than before.

    • shepherdjerred 3 days ago

      I’m in the same bucket. I absolutely love programming. What I love even more is being able to do all of these projects and fast-forward through them.

      • igravious a day ago

        Dunno why you got down-voted for saying this. Why down-vote someone for their personal subjective opinion?

  • jimbokun 2 days ago

    AI Coding has the same problem as "self driving cars".

    Until the car can be completely trusted to drive itself and never need human intervention, the human has to stay in a weird state of not driving the car, but being completely alert and attentive and ready to resume control in an instant. This can be more tiring and stressful than just driving yourself.

    Vibe coding is very similar. The AI can generate code at an astounding rate. But all of it has to be examined carefully for strange errors that a human would be very unlikely to make.

    In both cases, it's very questionable whether there are significant savings in the time or attention of the human still in the loop vs just performing the activity completely by herself.

  • Vegenoid 3 days ago

    Making things is often not just about making the thing right in front of you, but about building the skills to make bigger and better things. When you consider the long view, the struggle that makes it harder to make the thing at hand is well worth it. We have long considered taking shortcuts that don’t build skills to be detrimental in the long term. This pretty much only stops being the case when the thing you are short cutting becomes totally irrelevant. We have yet to see how the atrophying of programming skills will affect our collective ability to make reliable and novel software.

    In my experience, I have not seen much new software that I’m happy about that is the fruit of LLMs. I have experienced web apps that I’ve been using for years getting buggier.

    • teaearlgraycold 3 days ago

      I feel that too much reliance on LLMs will leave engineers with at best a linear increase in skill over time, compared to the exponential returns of accumulated knowledge. For some I fear they will actually get negative returns when using AI.

  • dlisboa 3 days ago

    You like having the painting, you just don't like to paint. You can think of a painting and have it appear before you.

    That's OK, but surely you can see how painters wouldn't enjoy that in the slightest.

    • brandall10 3 days ago

      Historically, many master painters used teams of assistants/apprentices to do most of the work under their guidance, with them only stepping in to do actual painting in the final details.

      Similar with famous architects running large studios, mostly taking on a higher level conceptual role for any commissions they're involved in.

      Traditionally in software (20+ years ago), architects typically wouldn't code much outside of POC work; they worked with systems engineers and generated a ton of UML to be disseminated. So if we go back to that type of role, it somewhat fits in with agentic software dev.

      • gridspy 3 days ago

        Sure, but you currently cannot teach AI models to generate novel art in the same way that you can teach a human apprentice.

        • brandall10 3 days ago

          I was addressing the 'enjoyment' factor, when at the end of the day, esp. at scale, it's a job to produce something someone paid for.

          • dlisboa 3 days ago

            That's where we're at a marked disagreement. "It's just a way to get paid" reduces all human knowledge to a monetary transaction, as if the value of any type of learning were only what someone pays for it.

            Thankfully the people that came before us didn't see it that way, otherwise we wouldn't even have anything to program on.

      • jimbokun 2 days ago

        And how did the master painter learn his craft, without first having been an assistant or apprentice?

      • jimbokun 2 days ago

        > they just worked with systems engineers and generated a ton of UML to be disseminated. So if we go back to that type of role, it somewhat fits in with agentic software dev.

        I've never met one of those UML slingers that added much value.

    • grim_io 3 days ago

      You can still enjoy painting, but there is no guarantee that you will be paid for it.

  • Ekaros 2 days ago

    I fully and absolutely agree the future is bright. Soon we can outsource both the work and the ideas to LLMs. Make a fully automated system to produce complete novels, music, movies, videos, and software. Just prompt the AI to make a movie, book, music, or even a SaaS. No humans involved. An absolutely superior system. Just instruct the LLM to start producing programs and monetizing them. No ideas needed, no effort. No thought.

    You can even source ideas from it. No need to think or have any personal input anymore.

    • krige 2 days ago

      And then we can have a second LLM read and digest the book for us. In fact, we can create a pipeline, where LLM writes, LLM reads, and then LLM leaves reviews and reddit comments, all without any human input or oversight on any of these steps, while you can do the fun stuff, like uhm, washing dishes or something.

    • anthk 2 days ago

      >Superior

      Superior disasters you mean.

  • BrenBarn 2 days ago

    The thing is that a large portion of what people are using AI (and tech in general) to do simply doesn't need to be done. We don't need a "smart" dental floss dispenser, or something that automatically buys toilet paper for you, or little Clippy-the-paper-clip bots popping up everywhere to ask if you need help. A lot of the tech that's coming out is a through-and-through waste of everyone's time and energy --- its users' as well as its makers'.

    • jimbokun 2 days ago

      I've had the same thought.

      What if the real reason for the recent softness in software engineer hiring is that we have almost all the software we really need?

      I feel like it's been a while since I saw some software and thought "oh, I really need that!" vs "here is something we will force you to download and install on your phone in order to do something that previously didn't require software" - like online menus in restaurants, or event tickets, or parking meter apps.

      • BrenBarn 2 days ago

        I think we've had almost all the software we need for years now. Most new software is small variations on existing software. In terms of software that would make you go "oh I really need that" because it's a genuinely novel type of functionality. . . it's hard for me to think what's the most recent software I use that falls into that category. Actually more common is that I use an old, working program but then it stops working for whatever reason (e.g., not compatible with latest upgrades) and then I need to look for "new" software to do the same old thing, which it often doesn't do as well as the old software.

  • allenu 3 days ago

    Programming really is fascinating as a skill because it can bring so much joy to the practitioner on a day-to-day problem-solving level while also providing much value to companies that are using it to generate profit. How many other professions have this luxury?

    As a result, though, I think AI taking over a lot of what we're able to do has the dual effect of making your day-to-day rough both as a personally-enriching experience and as a money-making endeavor.

    I've been reading The Machine That Changed the World recently and it talks about how Ford's mass production assembly line replaced craftsmen building cars by hand. It made me wonder if AI will end up replacing us programmers in a similar way. Craftsmen surely loved the act of building a vehicle, but once assembly lines came along, it no longer made sense to produce cars in that fashion since more unskilled labor could get the job done faster and cheaper. Will we get to a place where AI is "good enough" to replace most developers? You could always argue that craftspeople could generate better code, but I can see a future where that becomes a luxury and unnecessary if tools do most of the work well enough.

  • minimaxir 3 days ago

    How people derive utility varies from person to person and, I suspect, is the root cause of most AI generation pipeline debates, creative and code-wise. There are two camps that are surprisingly mutually exclusive:

    a) People who gain value from the process of creating content.

    b) People who gain value from the end result itself.

    I personally am more of a (b): I did my time learning how to create things with code, but when I create things such as open-source software that people depend on, my personal satisfaction from the process of developing is less relevant. Also, getting frustrated with code configuration and writing boilerplate code is not personally gratifying.

  • TomasBM 3 days ago

    Yeah, this resonates with me.

    As much as I dislike not having a good mental model of all the code that does things for me, ultimately, I have to concede the battle to get things done. This is not that different from importing packages that someone else wrote, or relying on my colleagues' codebases.

    That said, even if I have to temporarily give up on understanding, I don't believe there's any reason to completely surrender control. I'll call a technician when things need fixing right away, but that doesn't mean I shouldn't learn (some of) the fixes myself.

  • yodsanklai 3 days ago

    > is it about doing, or is it about getting things done?

    It's both. When you climb a mountain, the joy is reaching the summit after the hard hike. The hike is hard but also enjoyable in itself, and makes you appreciate reaching the top even more.

    If there's a cable car or a road leading to the summit, the view may still be nice, but I'll go hiking somewhere else.

  • tobyjsullivan 3 days ago

    This reminds me of the debate around Soylent when that came out. Are meals for enjoyment, flavour, and the experience or are they about consuming nutrients and providing energy?

    I’d say that debate was largely philosophical, with proponents on both sides. And really the answer might be that both things are true for different people at different times. Though I also observe that Soylent did not, by and large, end up replacing meals for the vast majority.

  • _betty_ 2 days ago

    Daniel Pink's book "Drive" explains that true motivation comes from intrinsic factors: autonomy, mastery, and purpose. It’s not about external rewards or doing every task yourself, but about having the freedom to direct your work, the drive to improve your skills, and a meaningful purpose behind what you do. In programming, AI can free us from routine tasks, letting us focus on creative problem-solving and realizing our ideas - this aligns perfectly with what Pink calls the deeper, more fulfilling motivation to get things done in a way that matters. So, it’s less about losing fun and more about shifting to meaningful engagement and impact.

  • jimkleiber 2 days ago

    I was reflecting on this yesterday, as I have often hated AI for generating emails and other written text, but am kinda loving it for writing code.

    One realization was what you said about me just wanting the code done so I can use the app.

    The second was that, for me, I care about the output of the code, not the code itself. Whereas with the written word, I care about the word. Perhaps if I used AI to summarize what someone wanted in the email then I would care less about the written word coming from a human, but right now I still want to read what they've written. You can say that there are programmers who want to read the code from someone else, but I don't think there's the equivalent of code abstracted away into a UI that exists for the written word (open to that being challenged).

    The last and maybe biggest realization is that computer language exists as multiple levels of abstraction. Machine language, assembly language, high-level language, etc. I'm not sure human languages have as many layers of abstraction, or if they do, they exist within the same language.

    I'll keep reflecting, just my short two cents for now.

  • dvfjsdhgfv 3 days ago

    The correct analogy would be that half of the lights randomly wouldn't light up, and then you'd have to go out anyway, but in a hurry and only to certain ones, just to discover that you need to go back 20 minutes later because there is another problem with the same light. And your boss would expect you to do everything much faster, and you'd end up even more frustrated.

  • gyomu 3 days ago

    > is it about doing, or is it about getting things done?

    No, this is a false dichotomy and slippery slope dangerous thinking.

    It’s about building a world where we can all live in and find meaning, joy, dignity, and fulfillment, which requires a balance between pursuing the ends and preserving the means as worthwhile human pursuits.

    If I am eating a delicious meal but the people preparing it had a miserable time, or it was prepared entirely by robots controlled by nefarious people using the profits to harm society, I don’t want it.

    Human society and civilization are for the benefit of humans, not for filling checkboxes above all else.

    • RamblingCTO 2 days ago

      The amount of pushback against "civilization and life are for the benefit of humans, not for filling checkboxes" is bewildering. Drones who never question what cancerous thinking they carry. Life is for living, not for the economy or whatever technocratic utility function you think you're optimizing for. They are all tools, not the destiny.

      • igleria 2 days ago

        username checks out.

        I'm sadly unsurprised, but me ranting about Silicon Valley mentality on HN feels like yelling at a cloud. Best we can do is keep trying to make people's lives better, even if it is not in the best interest of the shareholders :)

        • RamblingCTO a day ago

          Trying my best ;)

          Funnily enough, the OG Silicon Valley vibe included "let's make this world a better place to live", the hippie stuff imho. Nowadays it's more like "let's maximize shareholder value and extract what we can", and that's lazy. Bring back the old school SV and take more acid!

    • protocolture 3 days ago

      >If I am eating a delicious meal but the people preparing it had a miserable time, or it was prepared entirely by robots controlled by nefarious people using the profits to harm society, I don’t want it.

      So much infrastructure is built by people having a less than good time.

      An engineer might get the jollies designing a bridge, but the workers who work on it don't.

      The goal is to give lots of people happiness from not having to drive 100km out of their way.

      If we solve a lot of problems for a lot of people and all it costs is the happiness of a few software engineers, well I am not convinced they were happy to begin with. Fund it.

      • com2kid 3 days ago

        > An engineer might get the jollies designing a bridge, but the workers who work on it don't.

        My father was a laborer and he was incredibly proud of every project he worked on to build the city he loved.

        He was exceptionally proud the day I drove to my first day of work at Microsoft, along a road he helped pave.

        We should all aim to build things we can take pride in.

        • theturtlemoves 2 days ago

          Same here. My father was a bricklayer. Backbreaking work. On weekends he drove us to the houses he had worked on. We didn't appreciate it as young children, but he was definitely very proud of what he built.

      • vikramkr 2 days ago

        And why should the workers who work on the bridge be denied happiness and satisfaction from their work? Building and creating physical stuff is incredibly rewarding in concept for many people, especially in a culture that values and glorifies physical and manual labor, like parts of the US. I mean, Bob the Builder is a popular kids' show, and "all boys are fascinated by big trucks and construction projects" is both an incredibly common stereotype and, to a significant extent, just a true statement.

        • protocolture 2 days ago

          They shouldn't, but their happiness should also not prevent the bridge.

      • mzajc 2 days ago

        > So much infrastructure is built by people having a less than good time.

        If the "work only as means to an end" line of thinking continues, us programmers will be there in no time.

      • intended 2 days ago

        Many people get immense satisfaction by looking at a physical object they helped create and saying - “I helped build that”.

      • cheeseface 2 days ago

        > An engineer might get the jollies designing a bridge, but the workers who work on it don't.

        What is this viewpoint based on?

        The majority of blue-collar workers I know get plenty of pleasure from their work and would absolutely hate sitting at a computer 8 hours per day.

      • jimbokun 2 days ago

        > An engineer might get the jollies designing a bridge, but the workers who work on it don't.

        I think work becoming more abstract and not seeing anything concrete like a bridge or a road or a building after the work is complete is the source of a lot of mental illness, melancholy, and even suicidal ideation in modern society.

    • hiAndrewQuinn 2 days ago

      How do I know this very comment wasn't written by someone who was having a bad time, though? The tone is frustrated and critical. I'd put the odds at maybe 1 in 5.

      Where do we draw the line where we have to delete our own grouchiness from the Internet for fear of letting others consume something we created in anger?

      • 8n4vidtmkvmk 2 days ago

        On Reddit, I delete it daily. Partly for that reason and partly because the Internet is scary.

        The line, though, is probably when you put more harm out into the world than good. That's probably a good place to draw it.

    • potato3732842 2 days ago

      >If I am eating a delicious meal but the people preparing it had a miserable time, or it was prepared entirely by robots controlled by nefarious people using the profits to harm society, I don’t want it.

      What if the people, miserable or not, getting paid to make the meal have a worldview fundamentally opposed to yours, and will use some amount of their wealth to try and enforce it on you in roundabout ways?

      Because I assure you, some guy in a warehouse in Des Moines filling out some bullshit web form just so he can swipe his employee key card and start his forklift, doesn't want to enrich you, or me, or just about anyone else on HN. And his boss who felt compelled to buy the crap just to save a buck on insurance probably feels about the same.

    • prmph 3 days ago

      I was a bit disappointed by your response because, from the way you started it, I was expecting a stronger argument. I do agree with your point, but I think a key aspect of the false dichotomy is that there is evidence that AI is not actually "getting things done".

      • linsomniac 3 days ago

        >there is evidence that AI is not actually "getting things done"

        But there is also evidence that AI is actually getting things done, right?

        Most of the evidence I've seen that AI can't get things done tends to be along the lines of asking it to do a large job and it struggling. It seems like a lot of people stop there and don't investigate problems where it might be a better fit.

        • aleph_minus_one 3 days ago

          > It seems like a lot of people stop there and don't investigate problems where it might be a better fit.

          The AI sceptics do think deeply about where AI might be a better fit. But for every hypothetical use case they could come up with, they had to conclude that:

          - AI has to become much much more reliable to be suitable for this use case

          - the current AI architectures (as "the thing that bigtech markets") will likely, in principle, never be able to achieve this kind of reliability

          This is exactly why these people got so sceptical about AI, and also why they got so vocal about their opinions.

        • queenkjuul 3 days ago

          I've put far more time than I should have into trying to get AI to successfully complete tasks of varying sizes in our codebase at work. It simply cannot do things reliably and adequately when working in a large codebase. It lacks sufficient context, it ignores established conventions, and, worst of all, it often ignores instructions (endless unnecessary comments being my personal biggest peeve).

          So I think I have, in fact, tried my best to use it.

          It's great for little tiny things. Give me a one-off script to transform some command's output, translate some code from Python to TypeScript, write individual unit tests for individual functions. But none of that is transforming how I do my job; it's saving me minutes, not hours.

          Nobody at my company is getting serious quantities of programming done with AI either, and not for lack of trying. I've never been one to claim it's useless, just that its usefulness (i.e. "how much is getting done") is drastically overblown.

          • linsomniac 3 days ago

            I think we're largely in agreement here, though I wouldn't go so far as to say it's limited to "little tiny" things, but I guess that's a matter of scale. I use it for a lot of tooling, which is typically in the 500-5,000 line range, and it works really well for these sorts of things. A lot of them it will just one-shot and not break a sweat.

            I have cases where it saves hours for sure, but they are fewer and further between. Last week we used it to fix 600+ linting warnings in 25-year-old code, which probably saved me the better part of a day. It did a fantastic job of converting %-format strings to f-strings. I created a skill telling it how to test a %-to-f conversion in isolation, and it was able to use that skill to flawlessly convert all of our strings to modern usage.
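
            For anyone unfamiliar, the conversions look roughly like this (a made-up Python illustration, not the actual code from that job):

              # Hypothetical example of the %-to-f conversion described above.
              name, count = "world", 3
              old = "hello %s, you have %d items" % (name, count)  # legacy %-formatting
              new = f"hello {name}, you have {count} items"        # modern f-string
              assert old == new  # the kind of isolated equivalence check a "skill" could run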

        • lmm 2 days ago

          > But there is also evidence that AI is actually getting things done, right?

          Is there? I haven't seen a single AI success story that rang true, that wasn't coming from someone with a massive financial interest in it being successful. A few people subjectively think AI is making them more productive, but there's no real objective validation of that; they're not producing super-impressive products, and there was that recent study that had senior developers thinking they were being more productive using AI when in fact it was the opposite.

          • linsomniac 2 days ago

            You seem to be setting a high bar (AI success stories don't ring true), while taking the study as fact. This feels like a cognitive bias.

            I believe you are talking about the study: Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. It is an interesting data point, but it's far from conclusive. It studied 16 developers working on large (1MLOC+) codebases, and the AI tooling struggles with large codebases (new tools like Brokk are attempting to improve performance there). The authors acknowledge that participants dropped hard AI-disallowed issues "reducing the average AI-disallowed difficulty". Some of the selected developers seem to have been inexperienced at AI use.

            Smaller tools and codebases and unfamiliar code are sweet spots for the AI tools. When I need a tool to help me do my job, the AIs can take a couple sentence description and turn it into working code, often on the first go. Monitoring plugins, automation tools, programs in the few thousand lines of code, writing tests, these are all things the AIs are REALLY good at. Also: asking questions about the code.

            A few examples: Last night I had Claude Code implement a change to the Helix editor so that if you go back into a file you previously edited, it takes you back to the spot you were at in your last editing session. I don't know the Helix code, nor Rust, at all. Claude was able to implement this in the background while I was working on another task and then watching TV in the evening. A few weeks ago I used Claude Code to fix 600+ linting errors in 20-year-old code, in an evening while watching TV; these easily would have taken a day to do manually. A few months ago Claude built me an "rsync but for block devices" program; I did this one as a comparison of writing it manually vs vibe coding it with Claude, and Claude had significant advantages.

            But, I'm guessing these will fall into the "does not ring true" category, probably also "no real objective validation". But to me, personally, there is absolutely evidence that AI is actually getting things done.

            • lmm 2 days ago

              > You seem to be setting a high bar (AI success stories don't ring true), while taking the study as fact. This feels like a cognitive bias.

              I think it's interesting that you jump to that. I consider a study, even a small one, to be better evidence than subjective anecdotes; isn't that the normal position that one should take on any issue? I'm not taking that study as gospel, but I think it's grounds to be even more skeptical of anecdotal evaluations than normal.

              > Some of the selected developers seem to have been inexperienced at AI use.

              This seems to be a constant no-true-Scotsman argument from AI advocates. AI didn't work in a given setting? Clearly the people trying it were inexperienced, or the AI they were testing was an old one that doesn't reflect the current state of the art, or they didn't use this advocate's super-awesome prompt that solves all the problems. I never hear these objections before someone tries to evaluate AI, only after they've done it and got a bad result.

              > But, I'm guessing these will fall into the "does not ring true" category, probably also "no real objective validation".

              Well, yes. Duh. When the best available evidence shows little objective effectiveness from AI, and suggests that people who use AI are biased to think it's more effective than it was, I'm going to go with that, unless and until better evidence comes along.

              • linsomniac 15 hours ago

                >I consider a study, even a small one, to be better evidence than subjective anecdotes

                We're coming at it from very different places is the thing. The GenAI tooling is allowing me to do things that I otherwise wouldn't have time to do, which objectively to me is a clear win. So, I'm going to look at a study like that and pick it apart, because it doesn't match my objective observations. You are coming from a different angle.

                • lmm 12 hours ago

                  > The GenAI tooling is allowing me to do things that I otherwise wouldn't have time to do, which objectively to me is a clear win. So, I'm going to look at a study like that and pick it apart, because it doesn't match my objective observations.

                  What do you think the word "objectively" means?

                  • linsomniac 11 hours ago

                    Ahh, we've reached the point in the discussion where you're arguing semantics...

                    "With a basis in observable facts". I am observing that I am getting things done with GenAI that I wouldn't be able to otherwise, due to lack of time.

                    While you were typing your message above, Claude was modifying a 100KLOC software project in a language I'm unfamiliar with to add a feature that'll make the software have one less rough edge for me. At the same time, I was doing a release of our software for work.

                    Feels pretty objective from my perspective. Yes, I realize from your perspective it is subjective.

              • keeda a day ago

                > When the best available evidence shows little objective effectiveness from AI, and suggests that people who use AI are biased to think it's more effective than it was, I'm going to go with that, unless and until better evidence comes along.

                Well you're in luck, a ton of better evidence across much larger empirical studies has been available for a while now! Somehow they just didn't get the same amount of airtime around here. You can find a few studies linked here: https://news.ycombinator.com/item?id=45379452

                But if you want to verify that's a representative sample, do a simple Google Scholar search and just read the abstracts of any random sample of the results.

            • jimbokun 2 days ago

              You can't be serious?

              No, subjective experience is not reliable; that is the whole reason humanity invented the scientific method, to have a more reliable way of ascertaining truth.

              • linsomniac 2 days ago

                Yeah, but there are also 3 types of lies: lies, damn lies, and statistics.

        • rsynnott 2 days ago

          There's not much in the way of what you'd call strong positive evidence. Lots of user testimonials, which are, as always, kinda useless.

          The few serious studies attempting to actually measure it (vs asking people "do you think this helped you"; again, that's not useful evidence of anything) seem to have come out anywhere from "a wash" to "mildly detrimental".

        • mat_b 3 days ago

          Exactly. Break the problem down and interactively plan before generating code.

          I'm getting a lot more done than I could have without AI by using this approach with the agent.

      • brazukadev 3 days ago

        Your argument is much weaker, still about filling checkboxes above all else.

    • rustystump 3 days ago

      I guess you are a vegan too, right? I get this take, but it is naive. Not everything must pass the morality purity test.

      Did mass processed food production stop people from cooking or enjoying human-made food? No, it did not. The same is true in almost all domains where a form of industrialization happens.

      • nullgeo 3 days ago

        > Did mass processed food production stop people from cooking or enjoying human made food?

        Yeah, but what if I'm getting pitted against coworkers who are vibe coding and getting things done faster than I am? Some people write code with pride because it's their brainchild. AI completely ruins the fun for those people when they have to compete against their coworkers for productivity.

        I'm not in disagreement with you or the GP comment, but it is super hard to make nuanced comments about GenAI.

        • rustystump 3 days ago

          That is an issue that exists regardless of AI, but I do get it. Most furniture is not handmade. But that doesn't preclude people from enjoying buying or making handmade furniture.

          The fact that I think people need to get over is that you are blessed beyond measure to have a fun job that gives you creative joy and satisfaction. Losing that because of AI/a new tool is not some unprecedented event signaling the end of creativity. A job is a job.

          What amuses me is that I have just as much fun clacking away with some AI help as I did before. But then again, I like the problem-solving process more than writing the same code in one specific programming language.

        • generativenoise 3 days ago

          They are wrong, however. To take the food example: the existence of processed food production creates artifacts like food deserts. If you are privileged, these things don't affect you as much, since you have more agency.

          Just the existence of quick-to-eat, quick-to-prepare foods is going to put limits on how long you are given for lunch and dinner. Even if you wanted to prepare fresh food, the system is going to make it difficult, since it becomes an unsupported activity in terms of time allowances and market access.

          • rustystump 3 days ago

            That processed quick food is bad, or lesser than the alternative, is a subjective take. It is all about new things that optimize the old.

            To be clear, when I say processed I don't mean TV dinners, but prepared food that a human didn't cook.

            • generativenoise 2 days ago

              I made no judgement about the quality of processed food, or where the different options rank in terms of access to calories and nutrition, or what is actually feasible. It was simply about how changes can become mildly to severely obligatory for certain populations in our economic system.

      • jimbokun 2 days ago

        No, but it certainly made the nutritional content of the food we eat worse.

        It saved countless people from starvation while introducing the disease of obesity on a massive scale. I suppose that's a reasonable tradeoff.

    • xnx 2 days ago

      > If I am eating a delicious meal but the people preparing it had a miserable time ... I don’t want it.

      Vegetarian I presume?

    • pembrook 2 days ago

      > slippery slope dangerous thinking

      > If I am eating a delicious meal but the people preparing it had a miserable time, or it was prepared entirely by robots controlled by nefarious people using the profits to harm society, I don’t want it.

      So every restaurant you go to, you head to the back to run a purity test on the political beliefs and “happiness” of the people making your food to make sure they line up exactly with what you believe?

      This just screams luxury beliefs to me, and historically, Utopianism like this has been the actual dangerous slippery slope. Like…tens of millions of people starving dangerous.

      I just don’t think this fully automated luxury communism thing you are fantasizing about will make you happy. Seeking pleasure 24/7 is pathological and means you stop feeling it, and doing things for the benefit of others instead of yourself is miserable…but ultimately more fulfilling.

      • bonobo 2 days ago

        > So every restaurant you go to, you head to the back to run a purity test on the political beliefs and “happiness” of the people making your food to make sure they line up exactly with what you believe?

        Does their argument get invalidated if they don't verify *every* restaurant ever? Nobody has the time or the resources to follow their moral standards with 100% precision, but if we're doing our best, I'd argue we can still take that moral stance.

        Recently a slave labour scheme was dismantled in my country in which some wineries were keeping slaves to produce grape juice. The companies were on the news, and although I do love some grape juice I will never ever buy from them again. Do I check *every* single source of the products I consume? Of course no. Can they eventually do some marketing tricks and fool me into buying from them again? Maybe. But I do my best and I feel like this is sufficient to claim this is a good moral stance nonetheless.

    • yawnxyz 2 days ago

      Good analogy; personally I hate microwaving food but can't shame people for doing it.

    • HDThoreaun 3 days ago

      This is dumb. Every fast food meal you eat was prepared by someone having a miserable time. Guess what: they'd be more miserable without the job. Getting stuff done is what benefits humans, not feel-good jobs.

      • feisty0630 3 days ago

        It's not "dumb", you're just presenting a steelman that directly contradicts what the person you're replying to wrote.

        You might indeed be shocked to find that not everyone consumes fast food.

        • CSSer 3 days ago

          What strikes me about this exchange is no one is talking about the money. In the past, you could do either and no one had to care except you. Now a lot of jobs that people could find fulfilling aren't because the economy is so distorted, so how are we supposed to honestly look at this? I guess let's walk these people off the plank and get this over with...

        • JustExAWS 3 days ago

          Did the people who worked on the farm to grow and harvest your food enjoy it?

          • feisty0630 3 days ago

            Are you seriously and earnestly arguing that harm-minimisation is useless and we should all just open the human-suffering throttle, or did you just not think that far ahead?

            I am hoping the latter. Being foolish is far more temporary a condition than being cruel.

            • JustExAWS 3 days ago

              How are you "minimizing harm" by pearl-clutching about not eating fast food? The front-line people you interact with at the fast food restaurant or the grocery store have it easiest in the chain of events it takes for food to get to you. Do you think that fast food workers have it harder than the people at the grocery store?

              • feisty0630 3 days ago

                0.8*harm < 0.81*harm - hope this helps!

                Also, the core point is about people being able to find meaning in their work. That you've decided to laser in on this specific point to go on a tangent of whattaboutism is largely irrelevant.

                Have a nice day.

                • raw_anon_1111 3 days ago

                  The fact is that most of the 3-4 billion+ working people on earth don't "find meaning in their work"; they only work because they haven't overcome their addiction to food and shelter. If the point was irrelevant to your argument, why make it?

                  • feisty0630 3 days ago

                    I didn't actually make the point initially. I was challenging the reply's point that:

                    a) just because some people are miserable at work, doesn't mean we shouldn't care that other people might become miserable at work

                    b) Someone saying they prefer their food to be made without suffering is clearly a hypocrite in all cases because... there are miserable people in fast food jobs?

                    I mean... really. Come on now.

                    • raw_anon_1111 3 days ago

                      People who work in fast food may not be "passionate" about their job. But they aren't "suffering". You aren't relieving anyone's "suffering" by not eating fast food, or even if there were no fast food. They aren't "suffering" any more than people working at the grocery store.

                      Cry me a river for software developers (I've been delivering code professionally for 30 years, and before that as a hobbyist) because now we have something that makes us more efficient.

                      • feisty0630 2 days ago

                        I don't know if you're intentionally being obtuse or you just failed third grade reading comprehension, but can you please go argue with the people actually making these points (rather than me, a random person who has replied to them)?

                        • raw_anon_1111 2 days ago

                          So exactly what point are you trying to make? That software developers - at least the employed ones - “are suffering” because of AI? That you don’t eat fast food because you believe the employees are being exploited? What exactly is your point?

            • HDThoreaun 2 days ago

              Increasing productivity is how we minimize harm. Many people hate their job but are happy to have it because it allows them to consume things. More production = less suffering.

      • cgriswald 3 days ago

        Have you worked fast food? I have and I loved it.

        • Sabinus 2 days ago

          What percentage of fast food workers would you say are loving it?

          • cgriswald 2 days ago

            More than 0%.

            • ribosometronome 2 days ago

              More than 1%?

              • cgriswald 2 days ago

                Here is the original claim:

                > Every fast food meal you eat was prepared by someone having a miserable time.

                If you want more data, then maybe ask the one making the claim for his data instead and subtract it from 100%?

      • zeofig 3 days ago

        One of the reasons I try to avoid fast food and indeed everything.

        • HDThoreaun 2 days ago

          I really don't see how this helps the fast food workers. When fewer people eat their food, they lose jobs and become even more miserable. Sure, if you hire them as a private chef you're helping them out, but if you just cook yourself you haven't done a thing to improve their life.

    • KronisLV 2 days ago

      > If I am eating a delicious meal but the people preparing it had a miserable time, or it was prepared entirely by robots controlled by nefarious people using the profits to harm society, I don’t want it.

      Unfortunately I feel compelled to express the doomer take here, but I don't think most people care how their fast fashion or iPhones are made. And very few find it practically doable to boycott a company like Nestle. People trying to go full Stallman (sans the problematic stuff, rather along the lines of FSF) also find it just difficult.

      Most people are just happy that the boot is on the other foot or someone else's back and that they have enough convenience and isolation from the rest of the world not to care. Or honestly it's hard to get by for them as well and all of those trinkets and unethically made products help them get through the day.

      > Human society and civilization is for the benefit of humans, not for filling checkboxes above all else.

      I really wish that was the case, instead of for the extraction of what little wealth we have by corpos and the oligarchs (call them whatever you want), to push us more towards a rat race of sorts where we get by just barely enough to keep consuming but not enough to effect meaningful change most of the time. Then again, could be better, could be worse - it's cool to see passionate people choosing to make something just for the sake of the experience and creating something unique, not always with a profit in mind.

      In regards to software development in particular, I'm reminded of a few specific paragraphs in https://www.stilldrinking.org/programming-sucks

        Every programmer occasionally, when nobody’s home, turns off the lights, pours a glass of scotch, puts on some light German electronica, and opens up a file on their computer. It’s a different file for every programmer. Sometimes they wrote it, sometimes they found it and knew they had to save it. They read over the lines, and weep at their beauty, then the tears turn bitter as they remember the rest of the files and the inevitable collapse of all that is good and true in the world.
        
        This file is Good Code. It has sensible and consistent names for functions and variables. It’s concise. It doesn’t do anything obviously stupid. It has never had to live in the wild, or answer to a sales team. It does exactly one, mundane, specific thing, and it does it well. It was written by a single person, and never touched by another. It reads like poetry written by someone over thirty.
        
        Every programmer starts out writing some perfect little snowflake like this. Then they’re told on Friday they need to have six hundred snowflakes written by Tuesday, so they cheat a bit here and there and maybe copy a few snowflakes and try to stick them together or they have to ask a coworker to work on one who melts it and then all the programmers’ snowflakes get dumped together in some inscrutable shape and somebody leans a Picasso on it because nobody wants to see the cat urine soaking into all your broken snowflakes melting in the light of day. Next week, everybody shovels more snow on it to keep the Picasso from falling over.
      
      You don't really get that Good Code with AI that much, or at least I haven't felt that way looking at it. Then again, I could say that about most code written by other people, so I'm not sure what that means. Maybe my taste in code is just odd enough that so little of it seems pleasant.
    • macinjosh 3 days ago

      Yet you use your toilet. Do you think the sewer workers love their job?

      • feisty0630 2 days ago

        Are you from the middle ages, or are you so out of touch with blue-collar work that you're under the impression the average sewer worker has to manually handle waste?

  • tjr 3 days ago

    I am reminded of Dijkstra's remark on Lisp, that it "has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts."

    (I imagine that this is not limited to Lisp, though some languages may yield more or less results.)

    If we consider programming entirely as a means to an end, with the end being all that matters, we may lose out on insights obtained while doing the work. Whether those insights are of practical value, or economic value, or of no value at all, is another question, but I feel there is more likely to be something gained by actually doing the programming than by actually lighting the street lamps.

    (Of course, what you are programming matters too. Many were quick to turn to AI for "boilerplate"; I doubt many insights are found in such code.)

  • sanjayjc 2 days ago

    > is it about doing, or is it about getting things done?

    For me it is getting things done while also understanding the whole building, from its foundation up. Only with such a comprehensive mental model can I predict how my code will behave in unanticipated situations. I've only ever achieved this mental model by doing.

    Succinctly, "it is about doing" to guarantee I'm "getting things really done".

    > my time goes into thinking and precisely defining what I want

    I'm reminded of the famous quote "Programs must be written for people to read, and only incidentally for machines to execute." [1]

    A programming language is exactly the medium that lets me precisely define my thoughts! I think the only way to achieve equivalent precision using human language is to write them in legalese, just as a lawyer does when poring over the words and punctuation in a legal contract (and that depends upon so much case law to make the words really precise).

    > For me, AI allows me to realize my ideas, and get things done.

    More power to you! Bringing our ideas to life is what we're all after.

    [1] https://web.archive.org/web/20180427140749/https://mitpress....

  • Loxicon 2 days ago

    I agree with your point that it's sometimes about getting things done, but your example is flawed. Your example about gas-powered street lights argues for technological evolution. But the people who say "AI have take the fun out of programming" are fighting for craftsmanship and love.

    Nobody ever found craftsmanship or pleasure in lighting up gas-powered street lights. But there are a lot of programmers who value "doing" programming because it's their craft or art form.

    I have never had a programming job. But I program all day to serve my customers for the products I created. Because it's my art-form. I love "doing" it (my way!).

    It will get done. I just want to be the person to do it.

  • FloorEgg 3 days ago

    I agree wholeheartedly that it's about getting things done; that is what the universe cares about. As individuals we enjoy being in flow, and when the nature of the work changes we may lose our flow and shake our fists in frustration...

    Change can be painful, but that's because it takes energy.

    From particles to atoms to cells to people to civilizations, it seems like the whole point is to get more stuff done. Why? Probably because getting stuff done is more interesting than the alternative.

  • boesboes 2 days ago

    > Some of it might be good, some of it might be bad.

    None of it will be good, all of it will be bad. mmw

  • wartywhoa23 2 days ago

    > is it about doing, or is it about getting things done?

    Forgive me some frivolousness and let me reduce this ad the inevitable absurdum:

    Is life about living, or is it about getting life ended?

  • beefnugs 3 days ago

    Once again no one is capable of coming up with a good analogy. The analogy here would be: someone comes up with occasionally-exploding electric lights that sometimes create black holes which suck up all the surrounding light for a block, and that work as intended under 60% of the time. But the city rushes to implement them as recklessly and quickly as possible, because promises and lies. Also, the whole time this is happening, they keep saying not a single gas-lighter will lose their job... because the black holes need to be fed human flesh sometimes, so we will get them to do that.

  • antoniuschan99 2 days ago

    It's about productivity and increasing it to be more competitive against other nations. Look at South Korea: no natural resources, so the post-war plan was to base the future on human capital. It's why the density and the workforce are concentrated in Seoul.

  • agumonkey 3 days ago

    so far the economy is not built on getting things done alone

    the promise of progress is that not having to do chores will make us happier; it's partly true, and partly false

    people hate doing too much of a thing, or too harmful a thing; beside that, if you need me to redo your shelves or help you get milk in the morning, i'm happy to oblige

    but back to the point of getting things done and the march of progress: we're entering a potential kurzweil runaway, where computers understand and operate on the world faster, better and longer than us, leaving us with nothing to do. so we'll see, but i'm not betting a lot on that; it's gonna be toxic (the big 4 becoming our main dependency, instability, and a potential depression frenzy)

    look at how often people say "i wanna do something that matters", "i wanna help others".. it's a bit strange to say, because we spend our lives maintaining the world to be comfortable, but having everything done for you all the time might not be heaven on earth (even in the ideal best case)

  • danb1974 2 days ago

    It's about writing good, easy-to-read, maintainable code, not "yet another piece of garbage that almost works... for me..."

    10k lines that nobody, including you, understands? That's a liability anywhere other than a home project that only you use.

    Use AI as a hyper-documentation system and a "show me how to" guide, but not to spew code that neither it nor you understands.

  • kevinsync 3 days ago

    I'm a bit late to the conversation but I'm on month 4 (?) of building a (greenfield) desktop app with Claude Code + Codex. I've been coding since Pulp Fiction hit theaters, and I'm confident I could have just written this thing from scratch without LLMs with a lot fewer headaches, but I really wanted to get my hands dirty with new tools and see what they are and aren't capable of.

    Some brief takeaways:

    1. I'm on probably the 10th complete-restart iteration; I had a strong vision for what it was going to be, with a very weak grasp on how to technically achieve it, as well as a tenuous-at-best grasp on some of what turned out to be the most difficult parts (clever memory management, optimizations for speed, wrangling huge datasets, algorithms, etc) -- I started with a CLI-only prototype thinking I could get it all together reasonably quickly and then move onto a hand-crafted visual UI that I'd go over with a fine-toothed comb.

    I'm still working on the fundamentals LOL with a janky UI that I'll get to when the foundation is solid.

    2. By iteration 4 or 5, I realized I wanted to implement stuff that was incompatible with the less-complicated foundations already laid; this becomes a big issue when you vibe code and have it write docs, and then change your mind / discover a better way to do it. The amount of sprawl and "overgrowth" in the codebase becomes a second job when you need to pivot -- you become a glorified hedge trimmer trying to excise both code AND documentation that will very confidently poison the agents moving forward if you don't.

    3. Speaking of overconfidence, I keep finding myself in situations where the LLMs (due to not being able to contextualize the entire codebase at any single time) offer solutions/approaches/algorithms that work (and work well!) until you push more data at it. For validation purposes, I started with very limited datasets, so I could hand-check results and audit the database. By the time you're at a million rows, spot-checking becomes really hard, shit starts crashing because you didn't foresee architectural problems due to lack of domain experience, etc. You start asking for alternative solutions and approaches, you get them, but the LLM (not incorrectly) also wants to preserve what's already there, so a whole new logic path gets cut, and the codebase grows like a jungle. The docs get stale without getting pruned. There's conflicting context. Switch to a different LLM and sometimes naming conventions mysteriously shift like it's speaking a different dialect. On and on.

    Are the tools worth it? Depends. For me, for this one, on the whole, yes; it has taken an extremely long time (in comparison to the promises of 10x productivity) to get to where I've been able to try out a dozen approaches that I was unfamiliar with, see first-hand what works and what doesn't, and get a real working grasp of how off-the-rails agentic coding can take you if you're just exploring.

    I am now left with some really good, relevant code to reference, a BUNCH of really misguided code to flush down the shitter, a strong mental map of how to achieve what I'm building + where things are supposed to go, and now I'm starting yet another fresh iteration where I can scaffold and piece together the whole thing with refactored / reformatted / readable code. And then actually implement the UI I've been designing lol.

    I get the whole "just bully the LLM until it seems like it works, then ship it" mentality; objectively that's not much different than "just bully the developer until it seems like it works, then ship it" mentality of a product manager. But as amazing as these tools are for conjuring something into existence from thin air, I really think the devil is truly in the details, and if you're making something you hope to ever be able to build upon and expand and maintain, you have to go far beyond "vibes" alone.

cnity 3 days ago

Sometimes I read something on the internet and I think: finally someone has articulated something the way that I think about it. And it is very validating. And it cuts through a bunch of noise about how "oh you should be tuning and tweaking this prompt and that" and really speaks to the human experience. Thanks for this.

  • all2 3 days ago

    Same. After using AI for too long I get the same mental feeling as I do when scrolling endlessly on YouTube: a listless, empty, purposeless feeling that I find difficult to break out of without a whole night's rest.

    • jackdoe 3 days ago

      Programming was a very meditative and fulfilling experience for me, "building something" whatever it is; now I can see it slipping through my fingers.

      You know the feeling of starting a new MMORPG? The first time you enter a new world, you don't know what to do or where to go; there is no "optimal" way to play it, there are no guides, you just try things and explore and play and have fun. Every new project I start, I have this feeling.

      A few years later the game is a chore: you have daily quests, guides, optimal strategies and simulations, and if you don't play what elitistjerks say, you are doing it wrong.

      With AI it feels the game is never new.

      • all2 3 days ago

        > Programming was a very meditative and fulfilling experience for me, "building something" whatever it is; now I can see it slipping through my fingers.

        I've been characterizing it to others as the difference between hand-carving a post for a bed frame vs. letting a CNC mill do it. The artistry-labor is lost, and time-savings are realized. In the process, the meditation of the artist, the labor and blood, sweat, and tears are all lost.

        It isn't 'bad', but it has this dulling effect on my mind. There's something about being involved at a deep level that is satisfying and uplifting to my mind. When I cede that to a machine, I have lost that satisfaction.

        Some years ago, I noticed this same issue just looking at typing vs. hand-writing things. I _think_ very differently on paper than I do typing at a terminal. My mind is slow and methodical with a pen, as if I actually have time to think. At a keyboard, I am less patient, more prone to typing before I think.

        • CooCooCaCha 3 days ago

          I’m the opposite. I’d rather spend more time in a flow-like state where I’m dreaming of possibilities and my thoughts come to life quickly and effortlessly.

          I often find tools frustrating because they are imperfect and even with the best tools you inevitably have to break from your flow sometimes to do stuff in a more manual way.

          If a tool could take care of building while I remain in flow I’d be in heaven.

      • CooCooCaCha 3 days ago

        That’s interesting because i love computers and parts of programming. Algorithms are fascinating and I get a deep sense of satisfaction when my program works.

        But at the same time I find programming to be a frustrating experience because I want to spend as much time as possible thinking about what I’m trying to build.

        In other words I’d rather spend time in the dream-like space of possibilities, and iterating on my thoughts quickly than “dropping down” to reality and thinking through how I’m actually going to build it, what algorithms to use, how to organize code, etc.

        Because of that I’ve found vibe coding to be enjoyable even if it’s not perfect.

        • mfro 3 days ago

          Love of the process vs the product

          • all2 3 days ago

            These are intertwined, though, and rather tightly in some cases. Game dev is an excellent example of this.

            • CooCooCaCha 3 days ago

              Perhaps you're confusing enjoyment with necessity. Iteration is necessary to build a good game, but I want to minimize iteration time as much as possible so I can finish the game.

              In that sense, the process is the enemy. A long, laborious process kills games.

    • mentalgear 3 days ago

      Maybe this is Doom-Coding (like Instagram's empty doomscrolling).

      • Izkata 3 days ago

        A while ago I suggested "doom prompting", also from "doom scrolling", but it was for a slightly different mental effect: "It's so close, just one more and it might be exactly right".

    • afc 3 days ago

      Wonder if you've tried spec driven development (as opposed to just prompting)?

      I used to create requirement-oriented prompts and I felt something similar to what you describe. However, I've switched to generating parts of my source code from my specs in a semi-automated way and it made the process much more pleasant (and efficient, I think).

      I wrote a bit about my current state here: https://alejo.ch/3hi - for my Duende project I generate 8821 lines of code (2940 of implementation, 5881 of tests) from 1553 lines in specifications.

      • all2 a day ago

        I have. I get into the same head space, either way.

  • cyanydeez 3 days ago

    Some think current AI is like Excel: you just need to know the hotkeys and formulas.

    Others see it as mostly a slot machine that more often than not gives you almost-right answers.

    Knowing the psychology of gambling-machine design may be what separates these two groups.

csallen 3 days ago

> After about 3-4k lines of code I completely lost track of what is going on... Overall I would say it was a horrible experience, even though it took 10 hours to write close to 10000 lines of code

It's hard to take very much away from somebody else's experiences in this area. Because if you've been doing a substantial amount of AI coding this year, you know that the experience is highly dependent on your approach.

How do you structure your prompts? How much planning do you do? How do you do that planning? How much review do you do, and how do you do it? Just how hands-on or hands-off are you? What's in your AGENTS.md or equivalent? What other context do you include, when, why, and how? What's your approach to testing, if any? Do you break down big projects into smaller chunks, and if so, how? How fast vs slow are you going, i.e. how many lines of code are you letting the AI write in any given time period? Etc.

The answers to these questions vary extremely wildly from person to person.

But I suspect a ton of developers who are having terrible experiences with AI coding are quite new to it, have minimal systems in place, and are trying "vibe coding" in the original sense of the phrase, which is to rapidly prompt the LLM with minimal guidance and blindly trust its code. In which case, yeah, that's not going to give you great results.

  • verdverm 3 days ago

    I spent considerable time trying to coax the agentic systems into decent coding capabilities. The thing that struck me most is how creative they are at finding new ways to fail and make me adjust my prompt.

    It got tiring, so I'm on a break from AI coding until I have the bandwidth to build my own agent. I don't think this is something we should be outsourcing to the likes of OpenAI, Microsoft, Anthropic, Google, Cursor, et al. Big Tech has shown that its priorities lie somewhere other than our success and well-being.

    • prmph 3 days ago

      Exactly my experience too. I'm now using AI like 25% of the time or less. I always get to a point where I see that agentic coding is making me not want to actually think. There's no way anyone can convince me that that is a superior approach, because every time I took days off the agents to actually think, I came up with a far superior architecture and code that rendered much of what the agents were hammering away at moot.

      Agentic coding is like a drug or a slot machine: it slowly draws you in with the implicit promise of getting much for little. The only way it is useful to me now is for very focused tasks where I have spent a lot of time defining the architecture down to the last detail, and the agents are used to fill in the blanks, as it were.

      I also think I could write a better agent, and why the big corps have not done so is baffling to me. Even getting the current agents to obey the guidelines in the agent .md files is a struggle. They forget pretty much everything two prompts down the line. Why can't the CLI systematically prompt them to check every time, etc.?

      Something tells me the future is about domain-aware agents that help users wring better performance out of the models, based on domain-specific deterministic guardrails.

      • csallen 3 days ago

        I've had experiences like this before, but if that's the ONLY experience you've had, or if you have that experience 75% of the time, I think you're doing something wrong. Or perhaps you're just in a very different coding domain than mine (web dev, HTML/CSS/JS), one where the AI happens to suck.

        The biggest mistakes imo are:

        1. Underplanning. Trying to do huge projects in one go, rather than breaking them down into small projects, and breaking those small projects down into well thought out plans.

        2. Too much focus on prompting rather than context. Prompt engineering is obsessing over the perfect way to say or phrase something, whereas context engineering is putting relevant information into the LLM's working memory, which requires you to go out and gather that info (or use the LLM to get it).

        • verdverm 3 days ago

          I've had my share of good and bad experiences; one section of an existing project is more than 90% AI-created. How you say things is just as important as the context you provide, partly because the agents will start trying to decide what is and is not good context, which they are unreliable at doing, even after you give them the limited context and tell them not to edit other files or bring in more context. For example, if you use a lot of colloquial phrases, you activate that area of the network, taking away from using other parts (MoE activation, and at lower levels too).

          They are not good readers (see research results around context collapse and context poisoning)

      • jcattle 2 days ago

        If we take Elon Musk's approach to challenging engineering problems, which in this exact order is:

        1. Question every requirement

        2. Delete any part of the process you can

        3. Simplify and optimize

        4. Accelerate cycle time

        5. Automate

        In my experience coding agents at the moment are really good at 4 and 5, and they absolutely suck at 1 and 2.

        They are okay at 3 if prompted well.

        Humans are okay at 1 and 2 IF they understand the system well and critically question requirements. With LLM-generated codebases this system understanding is often missing, so you can't even start with 1.

    • LeafItAlone 3 days ago

      >The thing that struck me most is how creative they are at finding new ways to fail

      Wow, they are really going for that human-like behavior aren’t they?

      • verdverm 3 days ago

        If we're talking about emulating users, sure, but this is supposed to be a tool that helps me get my job done.

        If (e.g.) you dig into how something like Copilot works, they do dumb things like ask^ the LLM to do glob matching after a file read (to pull in more instructions)... just use a damn glob library instead of a non-deterministic method that's known to be unreliable (see the sketch below).

        ^ it's just a table in the overall context, so "asking" is a bit anthropomorphizing
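
        To illustrate (my own sketch, not Copilot's actual code; the applyTo pattern and file list are made up): deterministic glob matching is a stdlib one-liner, with no LLM round trip:

          import fnmatch

          # Hypothetical metadata from an instructions file, declaring which
          # source files its rules apply to (illustrative, not Copilot's schema):
          apply_to = "src/*.py"  # note: fnmatch's "*" also matches "/" in paths
          files_read = ["src/app/main.py", "docs/readme.md", "src/util.py"]

          # Deterministic matching via the standard library:
          matches = [f for f in files_read if fnmatch.fnmatch(f, apply_to)]
          print(matches)  # ['src/app/main.py', 'src/util.py']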

        • cluckindan 3 days ago

          I would consider a bunch of "dumb/power user" agents more useful than coding agents. The more they fail to use my software, the better!

        • zahlman 3 days ago

          > ^ it's just a table in the overall context, so "asking" is a bit anthropomorphizing

          I interpreted GP as just saying that you are already anthropomorphizing too much by supposing that the models "find" new ways to fail (as if trying to defy you).

          • verdverm 3 days ago

            most humans do not seek out ways to defy after a certain age

            I did not mean to imply active choice by "find"; more that they are reliably non-deterministic and have a hard time sticking to (or an easy time ignoring) the instructions I did write.

  • fhennig 3 days ago

    I think you're making a fair comment, but it still irks me that you're quite light on details about what the "correct" approach is supposed to be, and it irks me also because it seems to be a pattern in the discussion now.

    Someone gives a detailed-ish account of what they did, and that it didn't work for them, and then there are always people in the comments saying that you were doing it wrong. Fair! But at this point, I haven't seen any good posts here on how to do it _right_.

    I remember this post which got a lot of traction: https://steipete.me/posts/just-talk-to-it 8 agents in parallel and so on, but light on the details.

    • getnormality 3 days ago

      This dynamic reminds me of an experience I had a year ago, when I went down a Reddit rabbit hole related to vitamins and supplements. Every individual in a supplement discussion has a completely different supplement cocktail that they swear by. No consensus ever seems to be reached about what treatment works for what problem, or how any given individual can know what's right for them. You're just supposed to keep trying different stuff until something supposedly works. One must exquisitely adjust not only the supplements themselves, but the dosage and frequency, and a bit of B might be needed to cancel out a side effect of A, except when you feel this way you should do this other thing, etc etc etc.

      I eventually wrote the whole thing off as mostly one giant choose-your-own-adventure placebo effect. There is no end to the epicycles you can add to "perfect" your personal system.

    • makk 3 days ago

      Try using Spec Kit. Codex 5 high for planning; Claude Code Sonnet 4.5 for implementation; Codex 5 high for checking the implementation; back to Claude Code for addressing feedback from Codex; ask Claude Code to create a PR; read the PR description to ensure it tracks your expectations.

      There’s more you’ll get a feel for when you do all that. But it’s a place to start.

    • enraged_camel 2 days ago

      Speaking as someone for whom AI works wonderfully, I'll be honest: the reason I've kept things to myself is that I don't want to be attacked and ridiculed by the haters. I do want to share what I've learned, but I know that everything I write will be picked apart with a fine-toothed comb, and I have no interest in exposing myself to the toxicity that comes with such behavior.

    • csallen 3 days ago

      Relentlessly break things down. Never give the LLM a massive, complex project. You should be subdividing big projects into smaller projects, or into phases.

      Planning is 80% of the battle. If you have a well-defined plan, that defines the architecture well, then your LLM is going to stick to that plan and architecture. Every time my LLM makes mistakes, it's because there were gaps in my plan, and my plan was wrong.

      Use the LLM for planning. It can do research. It can brainstorm and then evaluate different architectural approaches. It can pick the best approach. And then it can distill this into a multi-phased plan. And it can do this all way faster than you.

      Store plans in Markdown files. Store progress (task lists) in these same Markdown files. Ensure the LLM updates the task lists as you go with relevant information. You can @-mention these files when you run out of context and need to start a new chat.

      When implementing a new feature, part of the plan/research should almost always be to first search the codebase for similar things and take note of the patterns used. If you skip this step, your LLM is likely to unnecessarily reinvent the wheel.

      Learn the plan yourself, especially if it's an ambitious one. I generally know what my LLM is going to do before it does it, because I read the plan. Reading the plan is tedious, I know, so I generally ask the LLM to summarize it for me. Depending on how long the plan is, I tell it to give me a 10-paragraph or 20-paragraph or 30-paragraph summary, with one sentence per paragraph, and blank lines in between paragraphs. This makes the summary very easy to skim. Then I reply with questions I have, or requests for it to make changes to the plan.

      When the LLM finishes a project, ask it to walk you through the code, just like you asked it to walk you through the plan ahead of time. I like to say, "List each of the relevant code execution paths, then walk me through each one one step at a time." Or, "Walk me through all the changes you made. Use concentric circles of explanation, that go from broad to specific."

      Put your repeated instructions into Markdown files. If you're prompting the LLM to do something repeatedly, e.g. asking the LLM to make a plan, to review its work, to make a git commit, etc., then put those instructions in prompt Markdown files and just @-mention it when you need it, instead of typing it out every time. You should have dozens of these over time. They're composable, too, as they can link to each other. When the LLM makes mistakes, go tweak your prompt files. They'll get better over time.

      Organize your code by feature not by function. Instead of putting all your controllers in one folder, all your templates in another, etc., make your folders hold everything related to a particular feature.

      When your codebase gets large enough, and you have more complex features that touch more parts of the code, have the LLM write doc files on them. Then @-mention those doc files whenever working on these features or related features. They'll help the LLM be more accurate at finding what it needs, etc.

      I could go on.

      If you're using these tools daily, you'll have a similar list before long.

      • fhennig 2 days ago

        Thanks! I got some useful things out of your suggestions (generate plan into actual files, have it explain code execution paths), and noted that I already was doing a few of those things (asking it to look for similar features in the code).

        Thanks!

      • nightshift1 2 days ago

        This is a good list. Once the plan is in good shape, I clear the context and ask the LLM to evaluate the plan against the codebase and find the flaws and oversights. It will always find something to say but it will become less and less relevant.

    • dudeinhawaii 2 days ago

      I think it's hard because it's quite artistic and individualistic, as silly as that may sound.

      I've built "large projects" with AI, which is 10k-30k lines of algorithmic code and 50k-100k+ lines of UI/Interface.

      I've found a few things to be true (that aren't true for everyone).

      1. The choice of model (strengths and weaknesses) and OS, dramatically affect how you must approach problems.

      2. Being a skilled programmer/engineer yourself will allow you to slice things along areas of responsibility, domains, or other directions that make sense (for code size, context preservation, and being able to wrap your head around it).

      3. For anything where you have a doubt, ask 3 or more models -- have them write their findings down in a file each -- and then have 3 models review the findings with respect to the code. More often than not, you march towards consensus and a good solution.

      4. GPT-5-Codex via the OpenAI Codex CLI on Linux/WSL was, for me, the most capable model for coding, while Claude is the most capable for quick fixes and UI.

      5. Tooling and ways to measure "success" are imperative. If you can't define the task in a way where success is easy to determine, neither a human nor an AI will complete it satisfactorily. You'll find that most engineering tasks are laid out in a very "hand-wavy" way -- particularly UI tasks. Either lay it out cleanly or expect to iterate.

      6. AI does not understand the physical/visual world. It will fail hard on things which have an implied understanding. For instance, it will not automatically intuit the implication of 50 parallel threads trying to read from an SSD -- unless you guide it. Ditto for many other optimizations and usage patterns where code meets real-world. These will often be unique and interesting bugs or performance areas that a good engineer would know straight out.

      7. It's useful to have non-agentic tools that can perform massive codebase analysis for tough problems. Even at 400k tokens of context, a large codebase can quickly become unwieldy. I have built custom Python tools (pretty easy) to do things like "get all files of a type recursively and generate a context document to submit with my query" (see the sketch after this list). You then query GPT-5-high, Claude Opus, Gemini 2.5 Pro and cross-check.

      8. Make judicious use of GIT. The pattern doesn't matter, just have one. My pattern is commit after every working agentic run (let's say feature). If it's a fail and taking more than a few turns to get working -- I scrap the whole thing and re-assess my query or how I might approach or break down the task.

      9. It's up to you to guide the agent on the most thoughtful approaches -- this is the human aspect. If you're using Cloud Provider X and they provide cheap queues then it's on you to guide your agent to use queues for the solution rather than let's say a SQL db -- and it's on you to understand the tradeoffs. AI will perhaps help explain them but it will never truly understand your business case and requirements for reliability, redundancy, etc. Perhaps you can craft queries for this but this is an area where AI meets real world and those tend to fail.
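
      For what it's worth, here is a minimal sketch of the kind of context tool point 7 describes; the section-header format and command-line arguments are my own illustrative choices:

        import sys
        from pathlib import Path

        def build_context(root: str, ext: str, out: str) -> None:
            """Concatenate all files under `root` with suffix `ext` into one
            context document, each section prefixed with its relative path."""
            root_path = Path(root)
            parts = []
            for path in sorted(root_path.rglob(f"*{ext}")):
                rel = path.relative_to(root_path)
                parts.append(f"===== {rel} =====\n{path.read_text(errors='replace')}")
            Path(out).write_text("\n\n".join(parts))

        if __name__ == "__main__":
            # e.g. python build_context.py src .py context.txt
            build_context(sys.argv[1], sys.argv[2], sys.argv[3])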

      One more thing I'd add is that you should make an attempt to fix bugs in your 'new' codebase on occasion. You'll get an understanding for how things work and also how maintainable it truly is. You'll also keep your own troubleshooting skills from atrophying.

  • tinfoilhatter 3 days ago

    Still waiting to see that large, impressive, complex, open-source project that was created through vibe coding / vibe engineering / whatever gimmicky phrase they come up with next!

    • rstuart4133 3 days ago

      If "large and impressive" means "has grown to that size via many contributions from lots of random developers", then I'd agree.

      I don't think there is much doubt AI can spit out a lot of code that mostly works. It's not too hard to imagine that one day an AI can produce so much code that it's considered a "large, complex project". A single mind dedicated to a task can do remarkable things, be it human or silicon. Another mind reading what they have done, and understanding it, is another thing entirely.

      All long-term, large projects I'm familiar with have been developed over a long time by many contributors, and as a consequence there has been far more reading and understanding going on than writing of new code. This almost becomes self-evident when you look at large open source projects, because the code quality is so high. Everything is split into modules a single mind can pick up relatively quickly and work on in isolation. Hell, even compiler error messages become self-explanatory essays over time.

      Or to put it another way, no open source project is a ball of mud. Balls of mud can only be maintained by the person who wrote them, who gets away with it because they have most of the details stored in their context window, courtesy of writing it. Balls of mud are common in proprietary code (I've worked on a few). They are produced by a small group who were paid to labour away for years at one task. And now, if this post is to be believed, AI vibe-coded projects are also a source of balls of mud. Given current AIs are notoriously bad at modifying even well-structured projects, they won't be maintainable by anyone.

  • nemomarx 3 days ago

    After all of that effort, is it faster than coding the stuff yourself? This feels like getting into project management because you don't want to learn a new library.

    • beezlewax 3 days ago

      All that effort and the writing of very specific prompts in very specific ways in order to create a deterministic output just feels like a bad version of a programming language.

      If we're not telling the computer exactly what to do, then we're leaving the LLM to make (wrong) assumptions.

      If we are telling the computer exactly what to do via natural language, then it is as complicated as normal programming, if not more so.

      At least that's how I feel about it.

      • lazide 3 days ago

        Have you ever used a WYSIWYG editor?

        One of the most frustrating (but common) things is you do v1. It looks good enough.

        Then you go to tweak it a little (say move one box 10-15 pixels over, or change some text sizing or whatever), and it loses its mind.

        So then you spend the next several days trying every possible combination of random things to get it to actually move the way you want. It ends up breaking a bunch of other things in the process.

        Eventually, you get it right, and then never ever want to touch it ever again.

      • MattRix 3 days ago

        These are all just stopgaps; this tech is still in its infancy. If it keeps improving, it will reach a point where it can implement complex things from simple prompts, the way that talented programmers can.

        I’m not a fan of a lot of this AI stuff, but there is no reason to expect it won’t get to that level.

        • beezlewax 2 days ago

          Talented programmers often don't get things right though. They can make the wrong assumptions about what a product person wants or what a client wants.

          And that normally stems from lack of information or communication problems.

        • hitarpetar 2 days ago

          > there is no reason to expect it won’t get to that level.

          that's not an argument. that's just magical thinking

          • MattRix a day ago

            It’s magical thinking to think it won’t! We already have one example of it being possible (in humans). Unless you think humans have a “soul” or some other intangible element, then there’s no reason they can’t be emulated.

            • hitarpetar 19 hours ago

              lots and lots of things are theoretically possible

    • tetha 3 days ago

      Personally, I find it faster if I use LLMs for the use cases I've found them to work well.

      One example is just laborious, typing-heavy stuff. I recently needed a table converted to an enumeration. Five years ago I'd have spent half a day figuring out a way to sed/awk/perl that transformation. Now I can entertain an AI for half an hour or so to either do the transformation (which is easy to verify) or to set up a transformation script (something like the sketch below).
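
      As a toy illustration of that kind of transformation script (the table contents and the enum name are made up, not the actual table in question): turning a two-column CSV into an enum definition is a few lines of Python:

        import csv
        import io

        # Toy stand-in for the real table: name,value pairs.
        table = io.StringIO("RED,1\nGREEN,2\nBLUE,4\n")

        lines = ["from enum import Enum", "", "", "class Color(Enum):"]
        for name, value in csv.reader(table):
            lines.append(f"    {name} = {value}")

        # Emits a ready-to-paste Enum definition, easy to eyeball-verify.
        print("\n".join(lines))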

      Or I enjoy that I can give an LLM a problem and 2-3 solution approaches I'd see, and get back 4-5 examples of how that code would look under those approaches, and some more. Again, this would take me 1-2 days, and I might not see some of the more creative approaches. Those approaches might also be utter nonsense, mind you.

      But generating large amounts of code just won't be a good, time-efficient idea long-term if you have to support and change it. A lot of our code base is rather simple python, but it carries a lot of reasoning and thought behind it. Writing that code is not a bottleneck at all.

    • csallen 3 days ago

      Yes, it often is much faster, and significantly so.

      There are also times where it isn't.

      Developing the judgment for when it is and isn't faster, and when it is or isn't likely to do a good job, is pretty important. But how good a job it does is often a skill issue, too. IMO the most important and overlooked skill is having the foresight and the patience to give it the context it needs to do a good job.

      • sodapopcan 3 days ago

        > There are also times where it isn't.

        Should this have the "Significantly so" qualifier as well?

        • csallen 3 days ago

          I'm not sure. I think it's asymmetric: high upside potential, but low downside.

          Because when the AI isn't cutting it, you always have the option to pull the plug and just do it manually. So the downside is bounded. In that way it's similar to the Mitch Hedberg joke: "I like an escalator, because an escalator can never break. It can only become stairs."

          The absolute worst-case scenario is the situation where you think the AI is going to figure it out, so you keep prompting it, far past the time when you should have changed your approach or given up and done it manually.

          • Dzugaru 2 days ago

            This is so far from an absolute worst-case scenario.

            You could have a codebase subtly broken on so many levels that you cannot fix it without starting from scratch - losing months.

            You could slowly lose your ability to think and judge.

          • sodapopcan 3 days ago

            Ha, great answer! Of course there are a lot of nuances to that but I don't want to get into beating dead horses.

    • HDThoreaun 3 days ago

      The thing is, it's not actually that much effort. A day of work to watch some videos and set things up, and then the only issue is that it's another thing to remember. But we developers remember thousands of arcane incantations. This isn't any harder than any of the other ones, and when applied correctly it writes code very, very quickly.

  • seattle_spring 3 days ago

    Has the definition of "vibe coding" changed to represent all LLM-assisted coding? Because from how I understand it, what you're talking about is not "vibe coding."

  • BeetleB 3 days ago

    > How do you structure your prompts? How much planning do you do? How do you do that planning? How much review do you do, and how do you do it? Just how hands-on or hands-off are you? What's in your AGENTS.md or equivalent? What other context do you include, when, why, and how? What's your approach to testing, if any? Do you break down big projects into smaller chunks, and if so, how? How fast vs slow are you going, i.e. how many lines of code are you letting the AI write in any given time period? Etc.

    It wouldn't be vibe coding if one did all that ;-)

    The whole point of vibe coding is letting the LLM run loose, with minimal checks on quality.

    Original definition (paraphrased):

    "Vibe coding describes a chatbot-based approach to creating software where the developer describes a project or task to a large language model (LLM), which generates code based on the prompt. The developer does not review or edit the code, but solely uses tools and execution results to evaluate it and asks the LLM for improvements. Unlike traditional AI-assisted coding or pair programming, the human developer avoids examination of the code, accepts AI-suggested completions without human review, and focuses more on iterative experimentation than code correctness or structure."

    OK. I guess strictly speaking, you could do most of what you're suggesting and still call it vibe coding.

  • consumer451 2 days ago

    I used agentic LLM dev tools to build the core of my webapp. It took months, I had a QA person at the beginning, and I looked at every line of committed code. It was a revelatory experience and resulted in a very reliable webapp.

    Last month I thought: "OK, I have all kinds of rules, guardrails, and I am relatively excellent at managing context. Let's try to 'vibe code' some new features."

    It has been a total disaster and worse than a waste of time. I keep finding entirely new weird bugs it created. This is just a React/Vite/Supabase app, nothing nuts. The worst part is that I showed these vibed features to stakeholders, and they loved it. Now I have to explain why recreating these features is going to take much longer.

    I knew better, as the magic of vibe coding is to explore the MVP space, and I still fell for it.

  • baxtr 3 days ago

    The way you describe it, vibe coding results are a proxy for a person's ability to plan.

    Since vibe coding is so chaotic, rigorous planning is required, which not every developer had to do before.

    You could simply "vibe" code yourself, roam, explore, fix.

    Is that a fair description of your comment?

    • riskable 3 days ago

      Coding with an AI is an amplifier. It'll amplify your poor planning just as much as it amplifies your speed at getting some coding task done.

      An unplanned problem becomes 10-100x worse than if you had coded things slowly, by hand. That's when the AI starts driving you into Complexity Corner™ (LOL) to work around the lack of planning.

      If all you're ever doing is using prompts like, `write a function to do...` or `write a class...` you're never going to run into the sorts of super fucking annoying problems that people using AI complain the most about.

      It's soooooo tempting to just tell the AI to make complete implementations of things and say to yourself, "I'll clean that up later." You make so much progress so fast this way it's unreal! Then you hit Complexity Corner™ where the problem is beyond the (current) LLM's capabilities.

      Coding with AI takes discipline! Not just knowledge and experience.

      • danielbln 3 days ago

        I agree, but would maybe argue that the level of instructions can be slightly higher level than "write function" or "write class" without ending up in a veritable cluster fuck, especially if guard rails are in place.

    • verdverm 3 days ago

      It's far more than planning. You have to "get to know your LLM" and its quirks so you know how to write for it, and when they release new updates (cutoff date or version), you have to do it again. Same for the agentic frameworks, when they change their system prompts and such.

      It's a giant, non-deterministic, let's-see-what-works-based-on-vibes mess of an approach right now. Even within the models, architecturally, there are recent results indicating that people are trying out weird things to see if they work; it's unclear whether these come from first-principles intuition and hypothesis formation, or from just throwing things at the wall to see what sticks.

    • aleph_minus_one 3 days ago

      > Since vibe coding is so chaotic, rigorous planning is required, which not every developer had to do before.

      I do believe the problem is different:

      I think I am pretty good at planning, but I have a tendency (in particular for private projects) to work on things where the correctness requirements are very high.

      While I can describe very exactly what the code is supposed to do, even small errors can make the code useless. If the foundations are not right, it will be complicated to detect errors in the higher levels (to solve this issue, I implement lots of test cases).

      Also, I often have a very specific architecture for my code in mind. If the AI tries to do things differently, the code can easily become much less useful. In other words: concerning this point, exactly because I plan things carefully (as you claimed), the AI becomes much less useful if it "does its own thing" instead of following my plan.

  • brandall10 3 days ago

    > ...it took 10 hours to write close to 10000 lines of code...

    So there couldn't have been much in the way of planning, process, review, etc.

    • csallen 3 days ago

      Yeah that's my read. True vibe coding, minimal guidance, ofc it was a mess.

    • oplicktis 3 days ago

      Ah yes, the eternal llm refrain, "You're holding it wrong!"

  • mronetwo 3 days ago

    This sounds dreadful and boring. Like who's interested in writing AGENTS.md...?

    • HDThoreaun 3 days ago

      Seeing an AI competently summarize my codebase is one of the least boring things I do as a developer.

    • thomasfromcdnjs 3 days ago

      Find a codebase that you wrote that you enjoy, ask Claude to analyse it and write an agents.md based off of it.

      • bootsmann 2 days ago

        Yes of course, have the AI write the agents.md file which the AI can then use to make changes to the project. This of course works better than just having the AI write changes to the project directly.

    • danielbln 3 days ago

      Do you not write documentation for what you build? Or guidelines for others on how to build it?

      • verdverm 3 days ago

        Is writing docs for humans who use what you built anything like writing docs for what you want an AI to build?

        Do you need to write long-ass, hyper-detailed instructions for your coworkers?

        • danielbln 3 days ago

          I do not, but I don't do that for LLMs either. Conventions and documentation I write and present are as succinct or lengthy as they need to be, no matter if the recipient is human or machine.

    • csallen 3 days ago

      I mean I had my LLM generate most of my AGENTS.md, and I tweak it maybe once every week or two. It's minimal investment, and it's a gift that keeps on giving.

  • h4ck_th3_pl4n3t 3 days ago

    How about sharing your working prompts then, so that others can learn from it?

    • danielbln 3 days ago

      There is no "working prompt". There is context that is highly dependent on the task at hand. Here are some general tips:

      - tell it to ask you clarifying questions, repeatedly. it will uncover holes and faulty assumptions and focus the implementation once it gets going

      - small features, plan them, implement them in stages, commit, PR, review, new session

      - have conventions in place, coding style, best practices, what you want to see and don't want to see in a codebase. we have conventions for python code, for frontend code, for data engineering etc.

      - make subagents work for you, to look at a problem from a different angle (and/or from within a different LLM altogether)

      - be always critical and dig deeper if you have the feeling that something is off or doesn't make sense

      - good documentation helps the machine as well as the human

      And the list goes on.

  • energy123 3 days ago

    > that's not going to give you great results.

    I'm not sure that's what OP is saying. The results per se might be fine, but it was not a fun experience.

  • cactusplant7374 3 days ago

    I suspect you've gotten lucky. I do a lot of planning and prompt editing and have plenty of outrageous failures that don't make any sense given the context.

  • beefnugs 3 days ago

    Hmm, you would think that if there were a proper way to do it, they would write up a nice concise manual for everyone to follow.

  • hotpaper75 3 days ago

    I completely agree with this approach. I just finished an intensive coding session with Cursor, and my workflow has evolved significantly. Previously, I'd simply ask the AI to implement entire features and copy-paste code until something worked.

    Now I take a much more structured approach: I scope changes at the component level, have the agent map out dependencies (state hooks, etc.), and sometimes even use a separate agent to prototype the UI before determining the necessary architecture changes. When tackling unfamiliar territory, I pause and build a small toy example myself first before bringing Cursor into the picture.

    This shift has been transformative. I used to abandon projects once they hit 5K lines because I'd get lost in complexity. Now, even though I don't know every quirk of my codebase, I have a clear mental model of the architecture and understand the key aspects well enough to dive in, debug, and make meaningful progress across different parts of the application.

    What's interesting is that I started very deliberately: slowly mapping out the architecture, deciding which libraries to use or avoid, documenting everything in an agent.md file. Once I had that foundation in place, my velocity increased dramatically. It feels like building a castle one LEGO brick at a time, with Cursor as my construction partner.

kpil 3 days ago

I think that the important conclusion to draw from this is that publicly available code is not created or even curated by humans anymore, and it will be fed back into data sets for training.

It's not clear what the consequences are. Maybe not much, but there's not that much actual emergent intelligence in LLMs, so without culling by running the code there seems to be a risk that the end result is a world full of even more nonsense than today.

This already happened a couple of years ago for research on word frequency in published texts. I think the consensus is that there's no point in collecting anymore, since all available material is tainted by machine-generated content and doesn't reflect human communication.

  • johnnyApplePRNG 3 days ago

    I think we'll be fine. AIs definitely generate a lot of garbage, but then they have us monkeys sifting through it, looking for gems, and occasionally they do drop some.

    My point is, AI generated code still has a human directing it the majority of the time (I would hope!). It's not all bad.

    But yeah, if you're 12 and just type "yolo 3d game now" into Claude Code... I'd say I'd be worried about that, but then I immediately realize: no, that'd be awesome.

    So yea, I think we'll be fine.

  • hackncheese 3 days ago

    This is a really interesting point, I wonder if this will have a similar effect to model poisoning

ipaddr 3 days ago

I have felt similar thoughts. You start off with a mental model of how to develop an app based on experience. You can quickly get the pieces working and wire them up.

What gets lost is this: when you develop an app over days in the normal way, you build a mental model as you go, and you carry it with you throughout the day. In the shower you may connect some dots and reimagine the pieces in a more compelling way. When the project is done you have a mental model of all of the different pieces: thoughts of where to expand, and fears of where you know the project will bottleneck, with a mental note to circle back when you can.

When you vibe code you don't get the same highs and lows. You don't mentally map each piece. It's no surprise that opening up and reading the code is the most painful thing, whereas reading my own code is always a joy.

  • danielbln 3 days ago

    I feel I still get that, just not at the code level but at the systems level. I know which systems exist, how they connect, how the data flows. The lower-level code and implementation details stay foggy, because I didn't write them, but I did design and/or spec the involved systems and data models.

yodsanklai 3 days ago

Pretty much my experience: LLMs have taken the fun out of programming for me. My coding sessions are:

1. write prompt

2. slack a few minutes

3. go to 1

4. send code for review

I know what the code is doing and how I want it to look eventually, and my commits are small and self-contained, but I don't understand my code as deeply because I didn't spend as much time manipulating it. Often I spend more time in my loops than if I were writing the code myself.

I'm sure that with the right discipline, it's possible to tame the LLM, but I've not been able to reach that stage yet.

  • vorticalbox 3 days ago

    I've stopped getting the LLM to write code and instead use it to spitball ideas, solutions, etc. for the issue at hand.

    This lets you get a solution plan done, with all the files, and then you get to write the code.

    Where I do let it code is in tests.

    I write a first "good" passing test, then ask it to create all the others (bad input, etc.); see the sketch below. It saves a bunch of time, and it can copy and paste faster than I can.
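
    Roughly like this (a toy sketch; pytest and the parse_age function are illustrative stand-ins, not my actual code): the happy-path test is hand-written, and the parametrized bad-input cases are the part the model fills in:

      import pytest

      from myapp import parse_age  # hypothetical function under test

      # The hand-written "good" passing test:
      def test_parse_age_happy_path():
          assert parse_age("42") == 42

      # The bad-input cases the LLM can cheaply enumerate afterwards,
      # assuming parse_age is specified to raise ValueError on bad input:
      @pytest.mark.parametrize("bad", ["", "abc", "-1", "4.2"])
      def test_parse_age_rejects_bad_input(bad):
          with pytest.raises(ValueError):
              parse_age(bad)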

    • MikeNotThePope 2 days ago

      I'm experimenting with how to code w/ LLMs. I used an AI assistant for about a month w/ a React app, prompting it to do this & that, and I learned almost nothing in that month about React itself. Then I prompted it to tell me what to do, but I did the typing, and I learned quite a bit in a short period of time.

  • jrochkind1 2 days ago

    Why are you doing it? Direction from management? You think it's better code, even though it's (as you say) less fun, and you're not sure whether it's faster or not? Something else?

  • dmix 3 days ago

    At a minimum I write my own automated tests for LLM code (including browser automation) and think them through carefully. That always exposes some limitations in Claude's solutions, uncovers errors, and lets you revisit things so you fully understand what you're generating.
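
    For instance, a browser-level smoke test can be as small as this (a rough sketch assuming Playwright's Python API; the URL and selectors are placeholders):

      from playwright.sync_api import sync_playwright

      # Hand-written check for an LLM-generated login feature: drive the
      # real UI and assert on observable behavior, not on the generated code.
      with sync_playwright() as p:
          browser = p.chromium.launch()
          page = browser.new_page()
          page.goto("http://localhost:3000/login")  # placeholder URL
          page.fill("#email", "test@example.com")   # placeholder selectors
          page.fill("#password", "correct horse battery staple")
          page.click("button[type=submit]")
          page.wait_for_selector("#dashboard")      # fails loudly on timeout
          browser.close()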

    Mostly, LLMs do the first pass and I rewrite a lot of it with a much better, higher-level systems approach and an eye toward "will the other devs on the team understand / reuse this?".

    I'd still prefer deciphering a lot of default overly-verbose LLM code to some of the crazy stuff that past devs have created by trying to be clever.

  • M4v3R 2 days ago

    Have you tried Composer 1 from Cursor? It enables a totally different way of AI coding - instead of giving the LLM a long prompt and waiting minutes for it to finish, you give it a shorter prompt to just write one small thing and it finishes in seconds. There’s no interruption, you stay in the flow, and in control of what you’re building.

cadamsdotcom 3 days ago

The solution is to ground your model.

In code, one way I’ve found to ground the model and make its output trustworthy is test-driven development.

Make it write the tests first. Make it watch the tests fail. Make it assert to itself that they fail for the RIGHT reason. Make it write the code. Make it watch the tests pass. Learn how to provide it these instructions and then take yourself out of the loop.

When you’re done you’ve created an artefact of documentation at a microscopic level of how the code should behave, which forms a reference for yourself and future agents for the life of the codebase.
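
A minimal sketch of the artefact that loop leaves behind, in Python with pytest (the slugify example is hypothetical, not from this thread):

    # test_slug.py -- written first, and watched to fail for the RIGHT
    # reason (missing behaviour, not an import error or a typo).
    from slug import slugify

    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Rust, Go & C!") == "rust-go-c"

    # slug.py -- written only after the failing run, then watched to pass.
    import re

    def slugify(text: str) -> str:
        text = re.sub(r"[^a-z0-9]+", "-", text.lower())  # collapse non-alphanumeric runs
        return text.strip("-")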

  • paul_h 2 days ago

    Agree. I'm having lots of fun with gen-AI, and I still have insights it does not. Test-at-the-same-time-as-prod-code is also doable with gen-AI. All the ones I tried are shit at testability by default, in my experience. And every now and again they forget that tests are important.

mcalus3 a day ago

Vibe coding is the cursed gold from the first 'Pirates of the Caribbean' movie.

> "For too long I've been parched of thirst and unable to quench it. Too long I've been starving to death and haven't died. I feel nothing. Not the wind on my face nor the spray of the sea. Nor the warmth of a woman's flesh." [steps into moonlight becoming a skeleton]

anonymousiam 3 days ago

Reading through to the end of the README.md on the GitHub page, I noticed that he's claiming copyright on the code, even though he admits that 3/4 of it is machine generated, and he doesn't understand it all.

It reminded me of the legal challenges for copyright of content that was not created by a human. In every case that I'm aware of so far, courts have ruled that content that wasn't created by a person cannot be copyrighted.

  • boxedemp 2 days ago

    Still a good business move, in case things change.

abathologist 3 days ago

A key -- perhaps THE key -- remark here, IMO is the following:

> I do want to make things, and many times I dont want to know something, but I want to use it

This confesses the desire to make, to use, and to make use of, without ANY substantive understanding.

Of course this seems attractive for some reasons, but it is a wrong, degenerative way to be in the world. Thinking and being belong together. Knowing and using are two dimensions of the same activity.

The way of these tools is a making without understanding, a using without learning, a way of being that is thoughtless.

There's nothing preventing us from thoughtful, rigorous, enriching use of generative ML, except that the systems we live and work in don't want us to be thoughtful and enriched and rigorous. They want us pliant and reactive and automated and sloppy.

We don't have to bend to their wants tho.

  • glenstein 3 days ago

    >Of course this seems attractive for some reasons, but it is a wrong, degenerative way to be in the world.

    I share your sense that there's something psychologically vivid and valuable in that passage, but it's part of an implicit bargain that's uncontroversial in other respects - I don't have to be an electrician to want a working light switch. I don't personally inspect elevators or planes or, in many cases, food. It's the basic bargain of modernity.

    I suppose, to your point, the important distinction here is that I wouldn't call myself an electrician if my relationship to the subject matter doesn't extend beyond the desire to flip a switch.

    • abathologist 3 days ago

      I'd argue that you understand what a light switch does well enough to use it effectively for its purpose.

      When we move from merely making use of something to using something to make with, that is when we should have a deeper understanding, I think.

      Does that sound right?

      > the important distinction here is that I wouldn't call myself an electrician if my relationship to the subject matter doesn't extend beyond the desire to flip a switch.

      Yeah, that seems right to me!

raphman 3 days ago

Given the code has been completely vibe-coded, what does this mean in practice?:

> Copyright (c) 2025

Whose copyright? IIRC, it is consensus that AI cannot create copyrightable works. If the author does not own the copyright, can they add a legally binding license? If not, does this have any legal meaning?:

> IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY

agjmills 2 days ago

Whilst I broadly agree with the sentiment of the linked page, my personal experience is very different; my suspicion is that this is due to the different technologies at play.

I enjoy building little SaaS side hustles that one day (I can dream) might make me a couple of grand, but I don’t enjoy writing 20+ CRUD controllers, with matching validation, and HTML forms. I’m probably a bit neurospicy, and I have a young family, but before LLMs came along I might “finish” one SaaS every couple of years. I’ve been able to complete 3 so far this year. It’s a wild uptick in productivity.

I’m well aware of the dangers that come with it too, but having been in the mines churning out this code for the last couple of decades I feel well versed in what to prompt for, just as I would with a keen yet naive junior engineer. I’d also argue that LLMs are much better at enforcing a particular style on the code base. I feel strongly that with an opinionated framework, in a relatively simple language, solving repetitive simple problems - you’ll have a great time with LLMs and you’ll be more productive than ever.

The problems arise when we delegate jobs like writing READMEs or tests (the boring stuff, right?) without really getting into the weeds.

  • notarobot123 2 days ago

    I think it's fair to say that CRUD apps are in a different category from low-level systems programming. The former are never particularly difficult to reason about unless they are written poorly; the latter are rarely easy to understand off the bat unless they are extremely well written.

mronetwo 3 days ago

> After about 3-4k lines of code I completely lost track of what is going on, and I woudn't consider this code that I have written, but adding more and more tests felt "nice", or at least reassuring.

> There was a some gaslighting, particularly when it misunderstood dap_read_mem32 thinking it is reading from ram and not MEM-AP TAR/DRW/RDBUFF protocol, which lead to incredible amount of nonsense.

> Overall I would say it was a horrible experience, even though it took 10 hours to write close to 10000 lines of code, I don't consider this my project, and I have no sense of acomplishment or growth.

Ah yes, we can now mass produce faulty code, we feel even more alienated from our work, the sense of achievement gets taken away, no ownership, barely any skill growth. Wonderful technology. What a time to bring value to the shareholders!

  • lazide 3 days ago

    Just wait until you see how it’s being used for robot boyfriends/girlfriends/porn. Just…. Wow.

1gn15 3 days ago

That was a bit overdramatic, I think. But it does mesh with my experience, though as a robot of course I say this with a lot less emotion:

Use LLMs for "compressing and understanding large amounts of existing code", autocomplete, and "vibe coding prototypes, especially for non-programmers". Do not use LLMs for "vibe coding production projects".

stpedgwdgfhgdd 2 days ago

Vibe coding sucks at this moment in time. On the other hand, when was the last time you looked at assembler code and thought: hmm, I do not like the style? Or: there is room for optimization here; if I had written this myself, it would be way faster.

l'histoire se répète (history repeats itself)

mentalgear 3 days ago

This reflects my experience as well: use LLMs for semantic search; do not trust them with your code.

> Overall I would say it was a horrible experience, even though it took 10 hours to write close to 10000 lines of code, I don't consider this my project, and I have no sense of acomplishment or growth.

> In contrast, using AI to read all the docs (which are thousands of pages) and write helpful scripts to decode the oscilloscope data, create packed C structs from docs and etc, was very nice, and I did feel good after.
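
That docs-to-helper-script workflow is where the payoff is. A minimal sketch of the kind of decoder it produces, assuming a hypothetical 16-byte packed record layout (not the actual format from the post):

    import struct

    # Hypothetical packed record: uint32 timestamp, uint16 channel,
    # uint16 flags, float64 sample -- little-endian, no padding.
    RECORD = struct.Struct("<IHHd")

    def decode(blob: bytes):
        """Yield one dict per 16-byte record in the capture."""
        for ts, chan, flags, sample in RECORD.iter_unpack(blob):
            yield {"ts": ts, "channel": chan, "flags": flags, "sample": sample}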

alienbaby 2 days ago

This was my exact first experience trying to build anything of significant size.

Several projects later, I can now work with the AI to produce several thousand lines of code; I understand it, it is what I asked for, it has the architecture I wanted, and it is neither bloated nor buggy.

Spend time planning, figuring out the architecture you want, refining, breaking things down into smaller implementable features with implementation plans for each feature. Review these things aggressively. Make sure they align with your expectations.

Then get it to start writing code, writing tests for everything, and including detailed runtime logging. Read what it produces, understand it, and redo it if it isn't what you expected. Your experience will be vastly superior to that first attempt.

  • weq a day ago

    Greenfield projects where this is possible only make up a tiny fraction of the stuff devs work on. Good devs find it easier, quicker, and more reliable to write the code themselves than to explain the context of the situation to someone.

    At some point, the idea of coupling your business to a set of tokens from an AI company that can't make a profit on the tokens it sells you is going to crash. AI companies have an incentive to generate LoC, because that's how they get paid: LoC = context. Your greenfield project turns into an unmaintainable, bloated legacy codebase sooner than you expect. Like every other tech innovation in the last 10 years, after the honeymoon period ends, so do the perks you enjoy today. Just imagine that in 5 years your AI will only recommend libraries it can profit from, and will charge an even more premium price than you can imagine today for the benefits of ad-free code generation. By then your skills could be replicated by AI, because you keep training their AIs, and they will be worthless. Only those devs who maintain their understanding of code will be worth anything, and they will be brought in to clean up the mess.

    </end musings>

onion2k 3 days ago

Is this what programming is now?

No.

Vibe coding, in the sense of handing all responsibility and accountability for the code in a change request over to AI and then claiming the bad code is the fault of AI, is not a thing. It's still your change request regardless of how you created it. If you write every line, it's yours. If you copy it from SO into your editor and commit it, that was your choice, and therefore it's your code. If you prompted an LLM to write something, you are responsible for that.

If there is AI slop in your codebase it is only because you put it there.

  • causal 3 days ago

    IMO this is why Claude Sonnet is better than ChatGPT: Sonnet is so much better at clarifying, drawing diagrams, writing documentation. It TRIES really hard to keep you in the loop, but of course you can choose to ignore everything it writes and just say "do more" without understanding anything.

    • hitarpetar 2 days ago

      no. read the code before you check it in. don't rely on your chatbot to explain it for you

      • causal 2 days ago

        Yes that's part of what I meant by read, but understanding a codebase goes a lot faster when you begin with some higher level understanding.

        • hitarpetar 2 days ago

          I'm sure it feels that way to you!

  • edfletcher_t137 2 days ago

    > If there is AI slop in your codebase it is only because you put it there.

    Nailed it, came here to say this.

    If anything, this entire post should just be titled "AI PEBKAC".

    Don't blame the tool because you're using it wrong.

alexpotato 3 days ago

Many years ago (early 2000s), I had to write a tool to scrape Yahoo message boards. The business was that folks were running "pump and dump" scams on the finance boards. The companies whose stock was being "pumped" hired law firms who, in turn, hired the company I worked for.

I was VERY new to Perl and didn't realize that LWP::Simple already existed. I therefore ended up writing my own library using TCP socket handling and sending GET requests "by hand".

It was a great learning experience and taught me a lot about how message boards, TCP and HTTP work. At the same time, it was slow, took a lot of time and had limited features and very little error handling.

I now use Python's requests module all the time and have never, not ever, thought "I should go peek inside the library to see how it actually works under the hood".

My point in this story is that LLMs will probably move us more and more towards "AI as library". Sure, if you are writing super-high-performance code that ties tightly to hardware, you might still dig down into the details.

Most of us will probably just use the next generation "library".
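
For contrast, here is roughly what the two eras look like side by side; a minimal sketch in Python, with the hand-rolled version standing in for the old Perl socket code:

    import socket
    import requests

    # The hand-rolled era: speak HTTP yourself over a raw TCP socket.
    def fetch_by_hand(host: str, path: str = "/") -> bytes:
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        with socket.create_connection((host, 80)) as sock:
            sock.sendall(request.encode())
            chunks = []
            while chunk := sock.recv(4096):
                chunks.append(chunk)
        return b"".join(chunks)  # status line, headers, and body, unparsed

    # The "library" era: the details are someone else's problem.
    body = requests.get("http://example.com/").text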

  • QuadrupleA 2 days ago

    Tricky, though, because this generation of LLMs involves random sampling, so unlike computer code and libraries the output is non-deterministic and inherently unreliable. Not sure if autoregressive random token sampling will ever be the winning paradigm.
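
    A minimal sketch of why two identical prompts can diverge: each token is a weighted random draw over the model's output distribution (illustrative logits, not any real model's):

        import math
        import random

        def sample_token(logits: dict[str, float], temperature: float = 0.8) -> str:
            # Softmax over temperature-scaled logits, then a weighted random draw.
            weights = {tok: math.exp(v / temperature) for tok, v in logits.items()}
            r = random.uniform(0, sum(weights.values()))
            for token, weight in weights.items():
                r -= weight
                if r <= 0:
                    return token
            return token  # floating-point edge case: fall back to the last token

        # Two calls with identical input can yield different continuations:
        print(sample_token({"foo": 2.0, "bar": 1.5, "baz": 0.1}))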

mbesto 3 days ago

> I don't consider this my project, and I have no sense of acomplishment or growth.

Trigger warning incoming... if you are in a for-profit company, does the business really care whether you feel accomplished, as long as you are producing code? As an analogy: the assembly-line worker on a highly automated Tesla assembly line is essentially a replaceable commodity at this point.

> The main issue is taste, when I write code I feel if its good or bad, as I am writing it, I know if its wrong, but using claude code I get desensitized very quickly and I just can't tell, it "reads" OK, but I don't know how it feels. In this case it happened when the code grew about 4x, from 1k to 4k lines. And worse of all, my mental model of the code is completely gone, and with it my ownership.

Does the code work? If so, why does any of this matter?

In an age of automated manufacturing, I've noticed more and more independent wood workers. This is okay - but you aren't going to supply the world's furniture needs with thousands or hundreds of thousands of artisan wood workers.

  • DauntingPear7 3 days ago

    But a chair cannot be copied from one home to another. Code uniquely can be. Good (perhaps artisanal) code is useful and better for everyone. The foundational improvements by a single person can get magnified throughout a project, while with other crafts the quality of their output does not have the same effect.

    • mbesto 3 days ago

      > Good (perhaps artisanal) code is useful and better for everyone.

      How so? We don't actually have any agreed characteristics of what makes code "good", nor do we have any quantifiable metrics that say "said good code is better".

      I may think your chair is more comfortable or that it lasts longer under load, but that doesn't always mean it is "better". A cheaper chair could be considered better because it's cheaper to acquire, even if it's not as comfortable or doesn't last as long.

    • aleph_minus_one 3 days ago

      > But a chair cannot be copied from one home to another.

      Some 3D printing enthusiast: "Hold my beer ..." :-)

  • daliusd 2 days ago

    I really feel the same about LLM-generated code: the LLM does not know what is good; it does not have `taste`. When I explain what's wrong with LLM output I use the same word, `taste`, and it is kind of amusing that someone has the same feeling.

    So why does this matter if the code works? If you will never look at it again, then no problem at all; but that is the definition of dead code, code that needs no support and will not be used anymore. I have projects that I have not supported for years, yet I can return to them anytime and work on them, because they age well, like wine.

JustExAWS 3 days ago

I don’t enjoy coding. I haven’t enjoyed it since the first six years after I started: from 6th grade in 1986, in a combination of BASIC and assembly, through 1992, when I graduated high school.

After that it was to get my degree, and after that it was to exchange labor for money to fund my life. There hasn’t been a time since 1996 that I haven’t had some outside interest I would rather be pursuing than sitting down at a computer after work.

While my official title hasn’t been “software engineer” since 2020, and the last time I had to get a job based solely on my coding ability was 2012, I am still, at 51, expected to know how to spit out production-level code as part of my job. If AI can help me create code that gets me paid faster, so be it. At 51, no one hires me based on my coding skills or even interviews me on them.

krainboltgreene 3 days ago

"Our products would be so many mirrors in which we saw reflected our essential nature."

All the way from 1844.

lux_sprwhk 3 days ago

I don't get it. Any time I step into a human-dev project, I feel exactly the same. Whenever a program gets large enough to be useful, it's too complex for anyone to understand without putting some work into it.

It's like spaghetti code only existed after 2022.

  • ares623 2 days ago

    It’s like the people who hate the job the most are now driving the change. People who hated working with others so much that those who enjoy, or don’t mind, working with others now have to change with them. I don’t know, but it just doesn’t feel right to me.

internet_points 2 days ago

Looking forward to when we have compiler code written by LLMs. How hard would it be for a rogue LLM provider or employee to channel the spirit of Ken Thompson and inject little trusting-trust attacks, if no one is reading the code any longer? (How likely is it to go off on such a tangent on its own, after writing thousands of mind-numbingly boring lines of assembly code, given the many such proofs of concept in its training set and the copious examples of temporary LLM insanity?)

rkerno 2 days ago

I personally find AI-generated code to be pretty average. I might get AI to write a function, then rework it. I use it a lot for reviews, which helps. And also as a sounding board for research, which is by far the most valuable use case and saves a ton of time. Or get it to write tests similar to what you have: just tell it what you want tested, and let it suggest cases.

I definitely don't trust the code it writes, especially for anything remotely complicated.

cmpalmer52 3 days ago

I haven’t done any serious web coding in years, so when I needed a little web page dashboard, I thought I’d do it 100% vibe coded.

Problem statement: We have four major repos spanning two different Azure DevOps servers/instances/top-level accounts. To check the status of pull requests required a lot of clicks and windows and sometimes re-logging in. So we wanted a dashboard customized to our needs that puts all active pull requests on each repo into a single page, links them to YouTrack, links them to the Azure DevOps pages, auto-refreshes, and flags them by needing attention for approval, merge conflicts, and unresolved comments. And it would use PATs for access that are only stored locally and not in the code or repo.

AI used: I began by describing the project goals to ChatGPT 5 and having it suggest a basic architecture. Then I used the Junie agent in JetBrain’s WebStorm to develop it. I gave it the ChatGPT output and told it to create a Readme and the project guidelines. Then I implemented it step by step (basic page layout, fill with dummy data, add Azure API calls, integrate with YouTrack, add features).

By following this step by step iteration, almost every step was a one-shot success - only once that I remember did it do something “wrong” - but sometimes I caught it being repetitive or inconsistent, so I added a “maximize code reuse and put all configuration in one place” step.

After about 3 hours, some of which was spent asking it to code to my standards or to change the look and feel, I had a very full-featured application. Three different views: the big picture, PRs that need my attention, and active PRs grouped by YouTrack items. I gave it to the team, they loved it and suggested a few new features. Another hour with the Junie agent and I had incorporated all the suggestions. Now we all use it every day.

I purposefully didn’t hand-edit a single line of code. I did read the code and suggest improvements, but other than that, I think a user with no programming experience could have done it (particularly if they asked ChatGPT on the side, “Now what?”). And it looked a helluva lot better than it would have if I had coded it, because I’m rusty and lazy.

Overall, it was my biggest AI coding success story. We’ve been experimenting with AI bug triage, creating utility functions, and adding tests to our primary apps (all .NET MAUI), but with a huge code base it often misses things or makes bad assumptions.

But this level of project was a near-perfect match of capability to execution. I don’t know how much my skills helped me manage the project, but I know that I didn’t write the code. And it was kinda fun.
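
For the curious, the core of a dashboard like this reduces to one REST call per repo. A minimal sketch against the Azure DevOps REST API; the org/project/repo names and the PAT are placeholders, and the real tool fans out across two servers and four repos:

    import requests

    ORG, PROJECT, REPO = "contoso", "web", "frontend"   # hypothetical names
    PAT = "<personal-access-token>"                     # stored locally, never committed

    url = (f"https://dev.azure.com/{ORG}/{PROJECT}"
           f"/_apis/git/repositories/{REPO}/pullrequests")
    resp = requests.get(
        url,
        params={"searchCriteria.status": "active", "api-version": "7.0"},
        auth=("", PAT),  # PATs go in the password slot of HTTP basic auth
    )
    for pr in resp.json()["value"]:
        print(pr["pullRequestId"], pr["status"], pr["title"])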

binaryfuel 2 days ago

No one other than devs has ever cared about the quality or readability of code. Devs are a necessary evil to the business.

Now that a ‘machine’ can write code, no one gives a crap.

Does it work? Then it’s done.

Why would anyone need devs to wrangle some ‘human-capable’ but highly syntactically specific programming instructions that get compiled down into machine code anyhow?

The new programming language is English

  • cmpalmer52 2 days ago

    Only if the goal is to run the result and never have to update it or add features. Several of the good test projects I’ve made from scratch with AI (my title at work needs to be “Speaker to Silicon” because I’m usually tasked with experimenting with AI tools) have worked and looked great. Then someone wants a new feature. No problem, it adds it. Then you say, add that feature to this other part of the program, and it does it, but if you don’t look at the code, you realize it re-implemented it, so if you go back in a month and request a change, it only gets applied to the first place it finds. I had to constantly say “DRY! Don’t implement it twice, share the code!”

    I mean, it’ll get better, but it ain’t there yet.

sandos 2 days ago

This pretty much sums up my vibe-coding experience as well. I have been doing many small pet tool/util projects at work, and after a few thousand LoC I am always very detached and have a hard time seeing whether the LLM is off track or not. At this point I often try to get it to refactor the code aggressively, and especially to find duplicated things.

  • ta12653421 2 days ago

    just do a Ctrl-A in the one LLM, copy it over to the other, and see what it says.

esafak 3 days ago

Suppose it was 10,000 lines of solid code. At a few hundred lines per digestible PR, that would still require dozens of PRs, and the attendant time to review them. Our attention is the bottleneck now.

tppiotrowski 3 days ago

I think what Claude-generated code is still missing is the feeling of "I learned something from reading this code; reading this code made me a better programmer". We're not there yet. That's why we have forgiveness for the scripts/tests/etc. that Claude writes - those are purely for utility. However, reading Claude code must make us feel like we're getting smarter, not dumber.

  • dmix 3 days ago

    I learn new framework/language features all the time from Claude generated code. It's hard for anyone to keep up with every new version or evolution in best practices.

jackpepsi 3 days ago

I resonate with what the author said about losing track of the mental model. I think that's the key to enjoying the process or not; i.e., building up and utilising that mental model (my own understanding) is the key to finding software development joyful.

Specifically:

"Easy but boring project" case: For projects where I am already familiar with a strong and sensible architecture then I find AI enjoyable to work with as a simple speed boost. I know exactly what I'm asking AI to do at every stage and can judge it's results well. It's not that interesting to me to code these components myself because I've done it before several times. My mental model of the problem space and a good solution is complete. I get some satisfaction from using my mental model.

"Challenging but interesting project" case: For projects where I don't yet understand the best architecture then I will inevitably ask AI to connect Component A to Component B without yet understanding that there should be a Component C. Because I don't have the understanding of the problem space. The thing is before AI I may have made this mistake myself, I just would have had the satisfaction of learning at the same time.

Given the time with these type of projects I basically write them twice: First pass making it work but as a huge mess, but building a mental model of the real problem space along the way. Second pass refactoring and getting it right, creating now a mental model of a good solution. Only after two passes would it be a project I would feel is done correctly and be happy (joyful) to publish it.

I have found AI enables you to get the first pass working much quicker, but without the learning along the way of the mental model to inform how to make the second pass properly. So If I want the challenging project to be joyful I still need to invest the time to learn from the first pass.

And that specific learning task I enjoy more if I do it iteratively as the AI and I build together, it's less enjoyable if I sit down afterwards and only inspect the code.

SO if I want a challenging project to be joyful I have to continue investing the time in the first phase to do the learning. AI just gives the opportuntity to produce a messy working prototype without learning anything, which may or may not make sense for the business side of things.

stevage 3 days ago

This is reassuring. I started vibe coding a side project and quickly got repulsed by the feeling of disconnection and lack of ownership. I put it on the shelf for a bit, then came back and started over, writing all the code myself (but with a bit of VS Code autocomplete and a lot of assistance from ChatGPT). Super satisfying.

Havoc 2 days ago

Some of this angst feels identical to what you heard in artist circles after genAI came for their craft.

AI is cool, but worrying times are ahead, especially for those who get part of their purpose/identity from their work.

bobbyblackstone 3 days ago

this is why, if you want to use the machine to code, you need to plan, build guardrails, provide scope and desirables, and test, retest, and cross-reference everything.

the machine codes, then stops and checks the rules, backtests, and then continues.

as with all progression, structure matters most.

also, spaghetti code is the future. adapt or die tbh.

"huhuhu look at his spaghetti code, muppet " .... "but it works and is 3 months ahead of schedule ... ." ... "oh" ... "and there is documentation"

hackmack10 3 days ago

This is very well said. I thought I was just burned out over the past several months. Truth is, I'm just reviewing AI code slop all day and I fucking hate it. It's exhausting.

noduerme 2 days ago

Disenchantment with having something else write code is coming hot and fast on HN this week.

Also, yeah, the beach is boring.

_pdp_ 3 days ago

Let me play the devil's advocate here for a brief moment. I suspect that developers will adapt to the new norms.

  • wartywhoa23 2 days ago

    To hell with these new norms which aren't normal at all. And no amount of building back better will change that.

bn-l 3 days ago

> And worse of all, my mental model of the code is completely gone, and with it my ownership.

EXACTLY my feelings also.

jrochkind1 2 days ago

> The tests are quite comprehensive test suite

Also Claude-written, presumably?

ta12653421 2 days ago

Vibe coding is absolutely OK... IF you know what you are doing and if you understand what the result should be.

Example: most systems have non-critical components which are nice to have, but which nobody wants to code, because "we-can-help-ourself-without-X-by-doing-this-manual-step-but-it-will-be-better-if-someone-does-X-somewhen-in-the-future-but-for-now-we-can-work-without-it".

E.g.: I have polished tons of reports thanks to AI usage. Financial reporting usually ends up somewhere with "large loops collecting data and copying it over to the report"; those things are usually minimum-implemented, and nobody cares, since "it's good enough to do Y". It's mundane and frustrating to work on those code snippets.

Output streams in general are much easier now: lately I had to put together a component to display some numbers in a Windows app, which comes along with things like scaling the data/window correctly, etc.; I would otherwise have had to write the Windows drawing calls myself. With a few prompts, I got a perfect rendering component of 1500+ LOC which does exactly what I want, and the result is perfect (even better than in the reference app that we use to compare data output).

There are so many things where LLMs are just a boon!

rzzzt 3 days ago

You can ask for Mermaid syntax and receive nicely formatted block diagrams.
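
For instance, a prompt like "draw the data flow of this service" can come back as a few lines of Mermaid (hypothetical components shown) that render directly on GitHub and in many Markdown viewers:

    graph LR
        UI[Dashboard UI] --> API[REST API]
        API --> DB[(Database)]
        API --> Queue[[Job queue]]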

Luker88 2 days ago

Sorry about the OT, but I see this more and more often:

Project License: MIT

Readme.md: "Project is about 80% Vibecoded.

80% of your project is public domain.

I'm not saying don't use AI. But at least shut up about how much.

This project uses MIT (which IMHO is already a small problem because it has no patent grant), but I see people vibecoding almost all of a project and then using AGPL.

Doesn't work like that. AI code can't be copyrighted. No copyright => public domain.

Please don't sleepwalk only to wake up to some company closing your projects and laughing at you.

andrewstuart 3 days ago

>> I fucking hate this.

>> And I can not help, but feel dusgust and shame. Is this what programming is now?

I love it. LLM assisted programming lets me do things I would never have been able to do on my own.

Never a greater leap in programming than the LLM.

No doubt the process is messy and uncertain and all about wild goose chases but that’s improving with every new release of the LLMs.

If you understand everything the LLM wrote then you’re holding it wrong.

I don’t hear developers disowning their work because they didn’t write the machine code that the compiler and linker output. LLM assisted programming is no different.

I’m excited about it and can’t wait to see where it all goes.

  • adamddev1 2 days ago

    Compiler and linker output is totally different. It comes from deterministic, hard-coded logic that you can trust and build on. It's in a totally different category.

    Compiler and linker output is like trusting well-proven math theorems written in standardized symbols, published in peer-reviewed books. LLM output is like asking random people on the street for random ideas, in ambiguous language, based on who knows what.

    The jump to LLMs is in no way analogous to the jump to a higher level programming language.

  • reeredfdfdf 2 days ago

    "If you understand everything the LLM wrote then you’re holding it wrong."

    If you're using an LLM to build production code, then yes, you should very much understand everything the LLM wrote.

    • ares623 2 days ago

      Lucky I'm not an SRE!

  • igravious a day ago

    I've successfully built projects that I had previously abandoned because the amount of tangential learning required felt overwhelming. With AI assistance, I'm now reaching goals that were once just beyond my grasp, which is an incredible feeling.

    I think those who dismiss 'vibe coding' haven't tried to use it for something truly beyond their current skill set. There's also an implicit sneer in the criticism—a kind of 'Oh, so you need an AI to help you code?'—that misses the point entirely.

    [btw: I used Deepseek to reword my original reply :)]

tcdent 3 days ago

It's to be expected that HN would have a contrarian take, but I find it ironic how common criticism of technological innovation is in an industry rooted fundamentally in technological innovation.

  • xomiachuna 3 days ago

    The stakes are higher than with any previous controversial technology or bikeshed. This is not "React bad", but rather "pushing us to trust a black box with engineering is bad".

  • immibis 3 days ago

    The saying is "don't get high on your own supply" for a reason.

    • tcdent 3 days ago

      That saying applies to one thing: drugs. It's not something you can extrapolate across industries.

      What is a programming language in the first place, if not a programmer satiating their own need for a better tool?

      • immibis 2 days ago

        I hear that cocaine makes you feel very successful despite the fact you're actually sitting in a back room with white powder on your nose doing nothing meaningful whatsoever. This is also what AI does, but not what programming languages do (except possibly Rust for some reason).

scuff3d 2 days ago

> I fucking hate this.

Very nicely summarizes vibe coding in four words. Well done.

jeingham a day ago

I'm an OFG, born in 1952 (... an okay boomer ...). I hit the job market in 1971; it was bad, but there were always openings for machinists. A tough craft with a long learning curve, but it paid reasonably well in my view, at the time at least. Long story short (TL;DR): evolving into CAD/CAM/CNC work stabilized my career long term. I know there are nuances and differences here for you programmers and the AI revolution, but I think you will all be best served by embracing it fully or finding another path. BTW, the only reason I'm reading HN

  • jeingham a day ago

    ... is because I parlayed my CAD CAM and CNC knowledge into my follow on career as an web applications manager, server admin (of sorts), DBA and general web wonk midwitz.

    I'll add that I am still, in my state of semi-blissful retirement, clattering away on my several PCs and really having fun with AI, Cursor, HN, and other geeky (can we say that anymore?) stuff.

    I guess I'll end by advising that fighting evolution is a bad bet.

gfgcjxg 2 days ago

i want to know hacking

wessorh 3 days ago

After reading this I purchased fuck-ai.com and decided to write a little website to accumulate writings such as the OP. Alas, the AI-written code isn't done yet. Gotta say, I feel similar to what the author experienced.

igravious a day ago

fta: I fucking hate this.

to which i respond: i find this fucking magical.

mmaunder 3 days ago

[flagged]

  • filoeleven 2 days ago

    Show some code, then. There are so many spurious claims made by AI backers about how wonderful their apps are, and yet nobody's willing to show anything they've actually produced with it. That's what drives the rancor for me: hype and anecdotes with no evidence.

  • saturatedfat 3 days ago

    for the love of the mf game. most fun i’ve had with computers in my whole life

  • hitarpetar 2 days ago

    this has the makings of a quality copypasta

bdangubic 3 days ago

> After about 3-4k lines of code I completely lost track of what is going on

full stop here, there is nothing you can write after this…

slowp_ke 2 days ago

I love seeing comments in here mulling over whether, if the developer had written an AGENTS.md or done some sort of appropriate planning, the vibe-coded application wouldn't have been horrible.

I'm sorry, but OpenAI, Anthropic, Lovable, et al. certainly don't advertise it as needed. And I certainly doubt the people making purchasing decisions think it's necessary.

alganet 3 days ago

What I'm doing a lot is vibe coding and stashing. Not even a public branch; I just git stash the whole thing the LLM writes.

Also, I stack the stash. When I vibe code, I pop it, let the LLM work on its own mess, then I stash it again.

One project has almost 13,000 lines of vibe mess, all stashed.

One good thing is that the stash builds. It's just that I don't want to release more code than I can read. It's a long review queue that is pre-merged, somehow.

Once in a while I pick something from there, review it, and integrate it into the codebase more seriously. I don't have the throughput to review it all, and not all projects can be yolo'd.

  • afarviral 3 days ago

    How can you maintain that much stashed code between commits? I assume you refer to it and manually code using the "mess" as inspo? I don't know how stash works much beyond stashing things I might need later so I can pull from remote.

    • alganet 3 days ago

      It works quite well for me.

      I don't use it as inspiration. It's like I said: code that is not reviewed yet.

      It takes the idea of 50 juniors working for you one step further. I manage the workflow so that the code they wrote already merges and builds before I review it. When it doesn't, I delete it from the stash.

      I could keep a branch for this. Or go even deeper into the temptation and keep multiple branches. But that's more of my throughput I'd have to spend on merging and ensuring things build after merging. It's only me: one branch, plus an extra "WIP". Stash is perfect for that.

      Also, it's one level of stashing. It's stacked in the sense that it keeps growing, but it's not several `git stash pop`s that I do.

      One thing that helps is that I already used this approach to keep stuff like automation for repos that I maintain: stuff the owner doesn't want, or that isn't good enough to be reused. Sometimes it was hundreds of lines; now it's thousands.

  • verdverm 3 days ago

    I just force push the same commits, so you won't know if it was me or the ai that wrote various parts /s

    I actually lead my commit messages with (human) or (agent) now

    You could try using a git worktree that never gets pushed

    • alganet 3 days ago

      I prefer working with the one commit per PR philosophy, linear history and every commit buildable, so I always force push (to the PR branch, but never to master). Been doing it for ages. Bisecting this kind of history is a powerful tool.

      • verdverm 3 days ago

        yup, this is my preferred method as well, but I will wait to squash the commits at PR merging time, depending on project / code host

        I have one client where force push and rebase are not allowed, knots of history are preferred for regulatory compliance, so I'm told. Bisecting is not something I've heard done there before

        • alganet 3 days ago

          Squashing works great for bisecting.

          I like rebasing! It works great for bisecting, reverting (squashing messes that up), almost everything. It just doesn't play well with micro-commits (which unfortunately have become the norm).

          The force-pushing to the PR branch is mostly a consequence of that rebase choice, in order not to pollute the main branch. Each change in main/master must be meaningful and atomic. Feature branches are another way to achieve this, but there are lots of steps involved.

      • hatthew 3 days ago

        why not make many local commits and then squash before rebase/push/merge?

        • alganet 3 days ago

          You mean generally? Yes, sure. As long as sloppy concatenated micro-messages don't end up in the main branch, I'm game.