I recently discovered that some of the Raspberry Pi models support the Linux kernel's "Gadget Mode". This lets you configure the Pi to appear as some type of device when plugged into a USB port, e.g. a mass-storage device/USB stick, a network card, etc. Very nifty for turning a Pi Zero into various kinds of utilities.
When I realized this was possible, I wanted to set up a project that would let me use the Pi as a bridge from my document scanner (which can scan to an attached USB drive) to an SMB share on my network that acts as the ingest point for a Paperless-NGX instance.
Scanner -> USB "drive" -> some of my code running on the Pi -> the SMB share -> Paperless.
I described my scenario in a reasonable degree of detail to Claude and asked it to write the code to glue all of this together. What it produced didn't work, but was close enough that I only needed to tweak a few things.
While none of this was particularly complex, it's a bit obscure, and would have easily taken a few days of tinkering the way I have for most of my life. Instead it took a few hours, and I finished a project.
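For anyone curious what the glue code roughly looks like, here's a minimal sketch of the approach. The paths, polling interval, and file types are my assumptions, not details from the project: the Pi exports a backing image over USB via the g_mass_storage gadget, periodically loop-mounts that image read-only to get a fresh view of what the scanner wrote, and copies any new documents to a CIFS-mounted SMB share that Paperless-NGX consumes.

```python
#!/usr/bin/env python3
"""Minimal sketch: copy new scans from the USB gadget's backing image
to an SMB share that Paperless-NGX watches. Paths, polling interval,
and file types are assumptions, not details from the original post."""
import shutil
import subprocess
import time
from pathlib import Path

BACKING_IMAGE = Path("/piusb.bin")           # image exported via g_mass_storage
GADGET_MOUNT = Path("/mnt/usb_gadget")       # local loopback mount point
INGEST_DIR = Path("/mnt/paperless-ingest")   # CIFS-mounted SMB consume dir
SCAN_TYPES = {".pdf", ".jpg", ".jpeg", ".tif", ".tiff"}
seen: set[str] = set()

def copy_new_scans() -> None:
    # The scanner (the USB host) writes straight into the image, so remount
    # it read-only on every pass to get a fresh view of the filesystem.
    subprocess.run(
        ["mount", "-o", "loop,ro", str(BACKING_IMAGE), str(GADGET_MOUNT)],
        check=True,
    )
    try:
        for f in GADGET_MOUNT.rglob("*"):
            if f.is_file() and f.suffix.lower() in SCAN_TYPES and f.name not in seen:
                shutil.copy2(f, INGEST_DIR / f.name)  # hand off to Paperless
                seen.add(f.name)
    finally:
        subprocess.run(["umount", str(GADGET_MOUNT)], check=True)

if __name__ == "__main__":
    GADGET_MOUNT.mkdir(parents=True, exist_ok=True)
    while True:          # mount/umount need root
        copy_new_scans()
        time.sleep(30)
```

In practice you'd run something like this as a systemd service, mount the SMB share via /etc/fstab, and enable the gadget itself with the dwc2 overlay plus g_mass_storage pointing at the backing image; the guide linked a few comments down covers that part.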
I, too, have started to think differently about the projects I take on. Projects that were previously relegated to "I should do that some day when I actually have time to dive deeper" now feel a lot more realistic.
What will truly change the game for me is when it's reasonable to run GPT-4o level models locally.
Please, I would be delighted if you published that code... Just yesterday I was thinking that a two-faced Samba share/USB Mass Storage dongle Pi would save me a lot of shuttling audio samples between my desktop and my Akai MPC.
I've been thinking about writing up a blog post about it. Might have to do a Show HN when time allows.
This guide was a huge help: https://github.com/thagrol/Guides/blob/main/mass-storage-gad...
Please do! I think this is a great example of how AI can be helpful.
We see so many stories about how terrible AI coding is. We need more practical stories of how it can help.
The tool itself would be of a lot of use in school science and design labs, where a bunch of older kit lands from universities and such. I used to put a lot of floppy-to-USB converters on things like old IR spectrometers that were still good enough for school use.
Yep!
I’m teaching kids in Bayview how to code using AI tools. I’m trying to figure out the best way to do it without losing anything in between.
With my pilot students I've found the ones I gave Cursor are outperforming the ones who aren't using AI.
Not just with deliverables, but with fundamental knowledge (what is a function?).
Small sample size so I don’t want to make proclamations… but I think a generation learning how to code with these tools is going to be unstoppable.
I was also writing a SANE-to-Paperless bridge to run on an RPi recently, but ran into issues getting it to detect my ix500. Would love to see the code!
Well, R1 is runnable locally for under $2500; so I guess you could pool money and share the cost with other people that think they need that much power, rather than a quantized model with fewer parameters (or a distil).
would you have paid someone to do it over solving the challenge yourself?
As a mostly LLM-skeptic I reluctantly agree this is something AI actually does well. When approaching unfamiliar territory, LLMs (1) use simple language (an improvement over academia, but also over much professional, intentionally obfuscated literature), (2) use the right abstraction (they seem good at "zooming out" to the big picture), and (3) let you move laterally between topics and "zoom in" quickly. Another way of putting it is "picking the brain" of an expert in order to build a rough mental model.
Its downsides, such as hallucinations and lack of reasoning (yeah), aren't very problematic here. Once you're familiar enough, you can switch to better tools and know what to look for.
My experience is instead that LLMs (those I used) can be helpful where solutions are quite well known (e.g. a standard task in some technology used by many), and terrible where the problem has not been tackled much by the public.
About language (point (1)), I get a lot of "hypnotism for salesmen to non-technical managers" and roundabout comments (e.g. "Which wire should I cut? I have a red one and a blue one" // "It is mission critical to cut the right wire; in order to decide which wire to cut, we must first get acquainted with the idea that cutting the wrong wire will make the device explode..." // "Yes, which one?" // "Cutting the wrong one can have critical consequences...")
> and terrible where the problem has not been tackled much by the public
Very much so (I should have added this as a downside in the original comment). Before I even ask a question I ask myself "does it have training data on this?". Also, having a bad answer is only one failure mode. More commonly, I find that it drifts towards the "center of gravity", i.e. the mainstream or most popular school of thought, which is like talking to someone with a strong status-quo bias. However, before you've familiarized yourself with a new domain, the "current state of things" is a pretty good bargain to learn fast, at least for my brain.
> My experience is instead that LLMs (those I used) can be helpful there where solutions are quite well known
Yes, that's a necessary condition. If there isn't some well known solution, LLMs won't give you anything useful.
The point, though, is that the solution was not well known to the GP. That's where LLMs shine: they "understand" what you are trying to say and give you the answer you need, even when you don't know the applicable jargon.
Yes. LLMs are the perfect learning assistant.
You can now do literally anything. Literally.
It's going to take a while for everyone to figure this out, but they will, given time.
In practice, not so much. Not in my experience. I have a drive littered with failed AI projects.
And by that I mean projects where I have diligently tried to work with the AI (ChatGPT, mostly in my case) to get something accomplished, and after hours of work spread over days, the projects don't work. I shelve them and treat them like cryogenically frozen heads. "Sometime in the future I'll try again."
It’s most successful with “stuff I don’t want to RTFM over”. How to git. How to curl. A working example for a library more specific to my needs.
But higher than that, no, I’ve not had success with it.
It’s also nice as a general purpose wizard code generator. But that’s just rote work.
YMMV
First, rote work is the kind I hate most and so having AI do it is a huge win. It’s also really good for finding bugs, albeit with guidance. It follows complicated logic like a boss.
Maybe you are running into the problem I did early on. I told it what I wanted. Now I tell it what I want done. I use Claude Code and have it do its things one at a time, and for each, I tell it the goal and then the steps I want it to take. I treat it as if it were a high-level programming language. Since I became more procedural with it, I get pretty good results.
I hope that helps.
They seem pretty good with human language learning. I used ChatGPT to practice reading and writing responses in French. After a few weeks I felt pretty comfortable reading a lot of common written French. My grammar is awful but that was never my goal.
You just aren’t delving deep enough.
For every problem that stops you, ask the LLM. With enough context it’ll give you at least a mediocre way to get around your problem.
It's still a lot of hard work. But the only person that can stop you is you. (Which it looks like you've done.)
List the reasons you’ve stopped below and I’ll give you prompts to get around them.
It's true that once you have learned enough to tell the LLM exactly what answer you want, it can repeat it back to you verbatim. The question is how far short of that you should stop because the LLM is no longer an efficient way to make progress.
LLMs don't reason the way we do, but there are similarities at the cognitive pre-conscious level.
I issued a challenge to various lawyers and the Stanford Codex (no one has taken the bait yet) to find critical mistakes in the "reasoning" of our Legal AI. One former attorney general told us that he likes how it balances the intent of the law. Sample output (scroll and click on the stats and the donuts on the second slide):
Samples: https://labs.sunami.ai/feed
I built the AI using an inference-time-scaling approach that I evolved over a year's time. It is based on Llama for now, but could be replaced with any major foundation model.
Presentation: https://prezi.com/view/g2CZCqnn56NAKKbyO3P5/
8-minute video: https://www.youtube.com/watch?v=3rib4gU1HW8&t=233s
info sunami ai
The sensitivity can be turned up or down. It's why we are asking for input. If you're talking about the Disney EULA, it has the context that it is a browsewrap agreement. The setting for material omission is very greedy right now, and we could find a happy middle.
A former attorney general is taking it for a spin, and has said great things about it so far. One of the top 100 lawyers in the US. HN has turned into a pit of hate. WTF all this hate for? People just really angry at AI, it seems. JFC, Grow up.
> invested
Very probably not somebody who blindly picked a position; more likely somebody who is quite wary of the downsides of the current state of the technology, as already expressed explicitly in the post:
> Its downsides, such as hallucinations and lack of reasoning
I know you’re being disparaging by using language like “bake into their identity” but everyone is “something” about “something”.
I’m “indifferent” about “roller coasters” and “passionate” about “board games”.
To answer the question (but I'm not OP), I'm skeptical about LLMs. "These words are often near each other" far exceeds my expectations in how convincing it is that the machine "knows" something, but it's dangerously confident when it's hilariously incorrect.
Whatever we call the next technological leap, where there's actual knowledge (not just "word statistics"), I'll be less skeptical about.
Your framing is extrapolative and mendacious, and it adds what could charitably be called your interpersonal problems to a statement that is perfectly neutral, intended as an admission against general inclination to lend credibility to the observation that follows.
Someone uncharitable would say things about your cognitive abilities and character that are likely true but not useful.
They didn’t say that they were invested in it.
Probably all the hype and bs.
I wrote something similar about this effect almost two years ago: https://simonwillison.net/2023/Mar/27/ai-enhanced-developmen... "AI-enhanced development makes me more ambitious with my projects"
With an extra 23 months of experience under my belt since then, I'm comfortable saying that the effect has stayed steady for me over time, and even increased a bit.
Around that time you highlighted the threat of prompt injection attacks on AI assistants. Have you also been able to make progress in this area?
100% agree with this. Sometimes I feel I'm becoming too reliant on it, but then I step back and see how much more ambitious the projects I take on are, and how quickly I still finish them, thanks to it.
The exciting thing about AI is that it lets you go back to any project or idea you've ever had; they are now possibly doable, even if they seemed impossible or too much work back then. Some of the key missing pieces have become trivial, and even if you don't know how to do something, AI will help you figure it out, or just let you come up with a solution that may seem dirty but actually works, whereas before it was impossible without expert systems and grinding out so much code. It's opened so many doors. It's hard to remember the ideas you had written off before; there are so many blind spots that are now opportunities.
It doesn't do that for things rarely done before, though. And it's poisoned with opinions from the internet. E.g. you can convince it that we have to remove the bullshit layers from programming and make it straightforward. It will even print a few pages of vague bullet points about it. But when you ask it to code, it will dump a React form.
I'm not trying to invalidate the experiences in this thread, because I have a similar one. But it feels futile, as we are stuck with our pre-AI bloated and convoluted ways of doing things^W^W making lots of money and securing jobs by writing crap nobody understands the reason for, and there's no way to undo this or to teach AI to generalize.
I think this novelty is just blindness to how bad things are in the areas you know little about. For example, you may think it solves the job when you ask it to create a button and a route. And it does. But the job wasn't to create a route, load and validate data, and render it on screen across a few pages and files. The job was to take a query and get it on screen in a couple of lines. Yes, it helps write pages of our nonsense, but it's still nonsense. It works, but it feels like we have fooled ourselves twice now. It also feels like people will soon create AI playbooks for structuring and layering their output, because the ability to code-review it will deteriorate in just a few years, with fewer seniors and many more barely-coders getting into it now.
Found the same thing. I was toying with a Discord bot a few weeks ago that involved setting up and running a Node server, deployed to Fly via Docker. A bunch of stuff a bit out of my wheelhouse. All of it turned out to be totally straightforward with LLM assistance.
Thinking bigger is a practice to hone.
Can you describe how you used LLMs for deployment? I'm actually doing this exact thing, but I'm feeling annoyed by the DevOps and codebase setup work. I wonder if I'm just being too particular about which tools I'm using rather than just going with the flow.
This article is strangely timed for me. About a year ago a company reached out to me about doing an ERP migration. I turned it away because I thought it’d just be way, way too much work.
This weekend, I called my colleague and asked him to call them back and see if they’re still trying to migrate. AI definitely has changed my calculus around what I can take on.
For me, it isn't just about complexity, but about customization.
I can have the LLMs build me custom bash scripts or make me my own Obsidian plugins.
They're all little cogs in my own workflow. None of these individual components are complex, but putting all of them together would have taken me ages previously.
Now I can just drop all of them into the conversation and ask it for a new script that works with them to do X.
Here's an example where I built a custom screenshot hosting tool for my blog:
https://sampatt.com/blog/2025-02-11-jsDelivr
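For a sense of what this kind of glue script can look like, here's a hypothetical sketch of a screenshot-hosting helper in the same spirit. The repo name, branch, and directory below are placeholders, not details from the linked post; the one real piece is jsDelivr's public URL pattern for mirroring GitHub repos.

```python
#!/usr/bin/env python3
"""Hypothetical sketch: drop a screenshot into a public GitHub repo and
print the jsDelivr CDN URL that serves it. Repo/branch/paths are made-up
placeholders, not details from the linked blog post."""
import shutil
import subprocess
import sys
from datetime import datetime, timezone
from pathlib import Path

REPO_DIR = Path.home() / "blog-assets"        # local clone of a public repo
GH_USER, GH_REPO, BRANCH = "youruser", "blog-assets", "main"

def publish(screenshot: Path) -> str:
    # Timestamped filename so repeated screenshots never collide.
    name = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S") + screenshot.suffix
    dest = REPO_DIR / "screenshots" / name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(screenshot, dest)
    # Commit and push so the file becomes publicly reachable.
    subprocess.run(["git", "-C", str(REPO_DIR), "add", str(dest)], check=True)
    subprocess.run(["git", "-C", str(REPO_DIR), "commit", "-m", f"add {name}"], check=True)
    subprocess.run(["git", "-C", str(REPO_DIR), "push"], check=True)
    # jsDelivr mirrors public GitHub repos at this URL pattern.
    return f"https://cdn.jsdelivr.net/gh/{GH_USER}/{GH_REPO}@{BRANCH}/screenshots/{name}"

if __name__ == "__main__":
    print(publish(Path(sys.argv[1])))
```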
Insufficient Storage
The method could not be performed on the resource because the server is unable to store the representation needed to successfully complete the request. There is insufficient free space left in your storage allocation.
Additionally, a 507 Insufficient Storage error was encountered while trying to use an ErrorDocument to handle the request.
Bugger! More than two visitors to my web site and it falls apart. I might fork out the $10 for the better CPU and more memory option before I post something in future.
Accidentally thought too big
https://imgs.xkcd.com/comics/is_it_worth_the_time.png
Yes -- LLMs can write a lot of code, and after some reviewing it can also go to prod -- but I have not seen nearly enough application of LLMs in the post-prod phase: dealing with evolving requirements, ensuring security as zero-days get discovered, etc.
Would love to hear folks' experience around "managing" all this new code.
> I am now at a real impasse, towards the end of my career and knowing I could happily start it all again with a new insight and much bigger visions for what I could take on. It feels like winning the lottery two weeks before you die
I envy this optimism. I am not the opposite (I'm a senior engineer with more than 15 years of experience), but I am scared about my future. I invested so much time in learning concepts and theory and getting a Master's degree, and in a few years all of my knowledge could be useless in the market.
I could not disagree more. Those concepts, theories, and all that knowledge are what make it so powerful. I feel successful with AI because I know what to do (I'm older than you by a lot). I talk to younger people and they don't know how to think about a big system or how to communicate their strategies. You do. I'm 72 and was bored. Now that Claude will do the drudgery, I am inspired.
I understand your point of view and I do agree that with the current state of affairs I am kind of OK. It's useful for me, and I am still needed.
But seeing the progress and adoption, I wonder what will happen when that valuable skill (how to think about a big system, etc.) is also replicated by AI. And then, poof.
IT is never static. I have had to take several forks in my career with languages and technologies, often leading to dead ends and re-training. It is amazing how much of what you learn doing one thing directly translates to another, and it can keep you from developing a specific/narrow mindset too.
Having an LLM next to you means there is never a stupid question. I ask the AI the same stupid questions repeatedly until I get it; that is not really possible with a smart human. Even if they have the patience, you are often afraid to look dumb in their eyes.
I'm worried about being replaced by an LLM. If it keeps evolving to the point where a CTO can ask an LLM to do something and deploy it, why would he pay for a team of engineers?
Forking to different technologies and languages is one thing (I've been there; I started with PHP and I haven't touched it for almost a decade now), but being replaced by a new tech is something different. I don't see how I could pivot to still be useful.
I see it more as “if an LLM can do that, why would I need an employer?”
This coin has two sides. If a CTO can live without you, you can live without an expensive buffer between you and your clients. He’s now just a guy like you, and adds little value compared to everyone else.
I know what you mean, but I don't see it as positive either. If each engineer is now a startup, it will be extra complicated to make money.
It's like saying since all of us know how to write, we all can sell books.
Where in reality can a CTO talk to a human and deploy it? It takes engineers to understand the requirements and to iterate with the CTO. The CTO has better things to do with their time than wrestle with an LLM all day.
I use Cursor to write Python programs to solve tasks in my daily work that need to be completed with programming. It's very convenient, and I no longer need to ask the company's programmers for help. Large language models are truly revolutionary productivity tools.
This is much like other advances in computing.
Being able to write code that compiled into assembly, instead of directly writing assembly, meant you could do more. Which soon meant you had to do more, because now everyone was expecting it.
The internet meant you could take advantage of open source to build more complex software. Now, you have to.
Cloud meant you could orchestrate complicated apps. Now you can't not know how it works.
LLMs will be the same. At the moment people are still mostly playing with it, but pretty soon it will be "hey why are you writing our REST API consumer by hand? LLM can do that for you!"
And they won't be wrong, if you can get the lower level components of a system done easily by LLM, you need to be looking at a higher level.
> LLMs will be the same. At the moment people are still mostly playing with it, but pretty soon it will be "hey why are you writing our REST API consumer by hand? LLM can do that for you!"
Not everyone wants to be a "prompt engineer", or let their skills rust and be replaced with a dependency on a proprietary service. Not to mention the potentially detrimental cognitive effects of relegating all your thinking to LLMs in the long term.
I agree that not everyone wants to be. I think OP's point, though, is that the market will make "not being a prompt engineer" a niche, like being a COBOL programmer in 2025.
I'm not sure I entirely agree, but I do think the paradigm is shifting enough that I feel bad for my coworkers who intentionally don't use AI. I can see a new skill developing in myself that augments my ability to perform, while they are still taking ages doing the same old thing. Frankly, now is the sweet spot, because the expectation hasn't risen enough to meet the output, so you can either squeeze out time to tackle that tech debt or find time to kick up your feet until the industry catches up.
What psychedelic/mind-altering/weird coaching prompts do you use?
I have separate system prompts for taboo-teaching, excessive-pedanticism, excessive-toxicity, excessive-praise, et cetera.
My general rule is anything and everything humans would never ever do, but that would somehow allow me to explore altered states of consciousness, ways of thinking, my mind, and the world better. Something to make me smarter from the experience of the chat.
I definitely felt this. I thought, "If I have to redo this in Flutter or in Swift, I can." I don't know either one, so I have to move with some caution (but it's all very exciting).
At first glance, it seems like _porting_ code to another language/API is probably a good sweet spot for these code LLMs...
...and even automating the testing to check that results match, coming up with edge cases, etc.
...and if that's true, then they could be useful for _optimizing_ by porting to other APIs/algorithms, checking same-ness of behavior, then comparing performance.
The whole "vibe coding" thing doesn't grab me, as I feel the bottleneck is my creativity and understanding rather than generating viable code; using a productive, expressive language like JavaScript or Lisp and working on a small code base helps with that.
E.g. I would like to be able to take an algorithm and port it to run on a GPU, without having to absorb too many arcane API quirks. JAX looks like a nice target, but I've held off for now.
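As a rough illustration of that port-and-verify workflow (my own sketch, not anything from the comment): write the algorithm once in plain NumPy as the reference, port it to jax.numpy with jit so JAX can run it on a GPU if one is present, and assert the two agree before comparing performance.

```python
import numpy as np
import jax
import jax.numpy as jnp

# Reference implementation: pairwise squared distances, plain NumPy.
def pairwise_sq_dists_np(x: np.ndarray) -> np.ndarray:
    diff = x[:, None, :] - x[None, :, :]
    return (diff ** 2).sum(-1)

# Port: the same algorithm in jax.numpy, JIT-compiled so it runs on
# whatever accelerator JAX finds (GPU/TPU), or the CPU otherwise.
@jax.jit
def pairwise_sq_dists_jax(x: jnp.ndarray) -> jnp.ndarray:
    diff = x[:, None, :] - x[None, :, :]
    return (diff ** 2).sum(-1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(256, 8)).astype(np.float32)
    ref = pairwise_sq_dists_np(x)
    out = np.asarray(pairwise_sq_dists_jax(jnp.asarray(x)))
    # "Same-ness of behavior": compare the port against the reference
    # within floating-point tolerance before trusting any speedups.
    assert np.allclose(ref, out, atol=1e-4), "port diverges from reference"
    print("port matches reference")
```

The equivalence check is the part an LLM can also help generate (edge cases, tolerances), which is what makes porting a comparatively safe use case.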
Agree, let's build a direct democratic simulated multiverse.
Or at least make a digital backup of Earth.
Or at least represent an LLM as a green field with objects, where humans are the only agents:
You stand near a monkey and see a chewing mouth nearby, so you go there (your prompt is now "monkey chews"); close by you see an arrow pointing at a banana, farther away an arrow points at an apple, and very far away at the horizon an arrow points at a tire (monkeys rarely chew tires).
So things close by are more likely tokens, things far away are less likely, and you see all of them at once (maybe you're on top of a hill to see farther). This way we can make a form of static place AI, where humans are the only agents.
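A toy sketch of that mapping, purely illustrative (the candidate tokens and probabilities are made up): place each candidate next token at a distance that grows as its probability shrinks, so likely continuations sit close by and unlikely ones sit at the horizon.

```python
import numpy as np

# Made-up next-token probabilities for the "monkey chews ..." scene above.
candidates = {"banana": 0.55, "apple": 0.30, "leaves": 0.14, "tire": 0.01}

def distance(prob: float) -> float:
    # -log(p): a probability of 0.55 lands nearby, 0.01 lands near the horizon.
    return -np.log(prob)

for token, p in candidates.items():
    print(f"{token:>7}: p={p:.2f} -> distance {distance(p):.2f}")
```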
A simulation in a simulation... Neat!
Thanks, I wrote extensively about it. If interested, you can find a link in my profile or ask anything here.
Thanks, I will take my time reading it; you certainly could never be criticized for thinking too small.
It's interesting: it feels less like needing to think much bigger and more like we're now able to accept that the much bigger ideas we've been thinking about are far more feasible.
That's so cool. All those grand ideas that felt so far away are right here, ready to grasp and realize.
Probably right at the end of your career is where this tool would be the most useful.
An undergrad using the hottest tech right off the bat? Cooked.
It's like giving the world 128 GB of RAM and 64 bits in 1970; we would have just maxed it out by 1972.
What if I told you there is an undergrad who just flunked a class and is depressed and crying about it? Who is considering changing their major? This is pre-AI. We have a chance that undergrads will never feel that way again. Not intimidated by anything.
> Cooked
People using this phrase should probably stop; it's become extremely tiresome as a cliché.
I'm guessing "cooked" means the opposite of "based"?
but I could be a Feynman radian out on those vectors in leet space.
Cooked means "you're finished, it's over, no return"; it's a really definitive term, which I'm philosophically opposed to.
Using AI has changed everything for me and made my ambition swell. I despise looking up the details of frameworks and APIs, transmitting a change of strategy through a system, and typing the usual stupid loops and processes that are the foundation of all programs. Hell, the amount of time I save on typing errors alone is worth it.
I have plans for many things I didn’t have the energy for in the past.