So let me get this straight. You’re writing tens of thousands of lines of code that will presumably go into a public GitHub repository and/or be served from some location. Even if it only runs locally on your own machine, at some point you’ll presumably give that code network access. And that code is being developed (without much review) by an agent that, in our threat model, has been fully subverted by prompt injection?
Sandboxing the agent hardly seems like a sufficient defense here.
What is your worst case scenario from this?
The sandbox idea seems nice; it's just a question of how annoying it is in practice. For example, the "Claude Code on the web" sandbox appears to prevent you from loading `https://api.github.com/repos/.../releases/latest`. Presumably that's to prevent you from doing dangerous GitHub API operations with escalated privileges, which is good, but it's currently breaking some of my setup scripts....
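For concreteness, the call that gets blocked is just a plain GET against the releases endpoint. A minimal sketch of the kind of setup-script fetch involved (OWNER/REPO are hypothetical placeholders, since the real repo path is elided above):

```python
# Minimal sketch of the kind of setup-script fetch the sandbox blocks.
# OWNER/REPO are hypothetical placeholders, not a real repo.
import json
import urllib.request

url = "https://api.github.com/repos/OWNER/REPO/releases/latest"
with urllib.request.urlopen(url) as resp:  # denied under the default network allowlist
    release = json.load(resp)

print(release["tag_name"])  # e.g. to pin the latest release for an install step
```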
Is that with their default environment?
I have been running a bunch of stuff in there with a custom environment that allows `"*"`.
My approach is to ask Claude to plan anything beyond a trivial change; I review the plan, then let it run unsupervised to execute it. But I guess this still leaves me vulnerable to prompt injection if part of the plan involves accessing external content.
Claude Code offers sandboxing now: https://www.anthropic.com/engineering/claude-code-sandboxing
It's discussed in the linked post.
Telling Claude to solve a problem and walking away isn't a problem you solved. You weren't in the loop. You didn't complete any side quests or do anything of note, you merely watched an AGI work.
Here's one I did even less work for: https://tools.simonwillison.net/terminal-to-html - prompt and video here: https://simonwillison.net/2025/Oct/23/claude-code-for-web-vi...