
It's really hard to follow them; it feels like the Spider-Man meme.


This is a great example of why network restrictions on an application are not sufficient.

In my opinion, having a container is currently the best trade-off in terms of performance and maintainability of the setup.

Looks interesting. How does this compare to a container?


It uses Linux kernel namespaces instead of chroot (containers are just fancy Linux chroots).

Ackually, “containers” on Linux are usually implemented using kernel namespaces, not chroot.
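You can see these namespaces from any Linux process: each one appears as a symlink under /proc/self/ns, and container runtimes create fresh namespace instances for the contained process rather than just swapping its root directory the way chroot does. A quick Linux-only sketch:

```python
import os

# Every Linux process belongs to a set of kernel namespaces, each
# exposed as a symlink in /proc/self/ns. Container runtimes give the
# contained process new instances of these (mnt, pid, net, ...);
# chroot, by contrast, only changes the apparent root directory.
ns_dir = "/proc/self/ns"
for name in sorted(os.listdir(ns_dir)):
    # Each link target looks like "mnt:[4026531840]" - the inode
    # number identifies which namespace instance the process is in.
    print(name, "->", os.readlink(os.path.join(ns_dir, name)))
```

Two processes in the same container share these inode numbers; a containerized process sees different ones from the host.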

The isolation pattern is a good starting point.

I believe the detection pattern may not be the best choice in this situation, as a single miss could result in significant damage.

I built https://github.com/nezhar/claude-container for exactly this reason - it's easy to make mistakes with these agents even for technical users, especially in yolo mode.


Author here. This release adds two features I've found useful for understanding Claude Code behavior:

1. API Proxy - Transparently logs all interactions with the Anthropic API. Every request/response is captured without modifying Claude Code itself.

2. Datasette Integration - Lets you query and visualize the captured API data with SQL. Useful for tracking token usage, analyzing prompt patterns, or debugging unexpected behavior.
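Since Datasette serves SQLite databases, the captured data can also be queried directly with Python's sqlite3 module. A sketch of the kind of token-usage query you'd run; note the table name, column names, and model names here are illustrative assumptions, not the project's actual schema:

```python
import sqlite3

# Hypothetical schema for captured API traffic; the real table and
# column names in claude-container may differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE api_requests (
        id INTEGER PRIMARY KEY,
        model TEXT,
        input_tokens INTEGER,
        output_tokens INTEGER
    )
""")
conn.executemany(
    "INSERT INTO api_requests (model, input_tokens, output_tokens) "
    "VALUES (?, ?, ?)",
    [
        ("claude-sonnet", 1200, 300),
        ("claude-sonnet", 800, 150),
        ("claude-haiku", 400, 90),
    ],
)

# Token usage per model - the kind of aggregate you could also run
# straight from Datasette's SQL view in the browser.
for model, total_in, total_out in conn.execute("""
    SELECT model, SUM(input_tokens), SUM(output_tokens)
    FROM api_requests
    GROUP BY model
    ORDER BY SUM(input_tokens) DESC
"""):
    print(model, total_in, total_out)
```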

The container itself provides isolation from the host system while maintaining persistent credentials and workspace access via bind mounts.

Happy to answer questions about the implementation or use cases.


You still need to connect to Anthropic and obtain an authorization token.

The isolation here refers to the workspace. Since you run the CLI in a container, the process can only access what you have mapped inside. This is helpful if you want to avoid issues like this: https://hackaday.com/2025/07/23/vibe-coding-goes-wrong-as-ai...


OK, thanks for the clarification. It's still a good project, and many people like to use online services.

I prefer local models. Everything I use (and have used) a local model for could just as well run on an online service; there are no secrets involved. The speed is more than acceptable on a low-end CPU+GPU.

I still use Perplexity sometimes for more complex questions.

