I've gone so far as to become frustrated with what I found in the open source options like Copilot and have been building my own custom extension, which is now better with gemini-3-flash than Copilot is with any model. Their prompt/context engineering is trash, and their tools are not great.
Very cool, always love seeing adventures in codegen. If you want to see what something like this looks like after many years of development, after you discover all those day-2 issues in code gen at scale...
you'll likely want to move from printf to text/template, then put a schema on the input data, then store the templates as files on disk so you can iterate faster without needing to recompile the binary to adjust the code that comes out
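A minimal sketch of that progression (the `Field`/`Model` schema and template here are illustrative, not from the parent's project): a typed struct defines the input data, and a text/template renders the generated code. In a real setup the template string would live in a file on disk and be loaded with `template.ParseFiles`, so you can tweak the output without recompiling the generator.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Field and Model form a small schema for the input data, so the
// template can rely on a known shape instead of ad-hoc printf args.
type Field struct {
	Name string
	Type string
}

type Model struct {
	Name   string
	Fields []Field
}

// In practice this would be a file on disk, loaded via
// template.ParseFiles, so edits don't require a rebuild.
const structTmpl = `type {{.Name}} struct {
{{- range .Fields}}
	{{.Name}} {{.Type}}
{{- end}}
}
`

// render executes the template against one model and returns the
// generated Go source as a string.
func render(m Model) (string, error) {
	t, err := template.New("struct").Parse(structTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, m); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := render(Model{
		Name:   "User",
		Fields: []Field{{"ID", "int64"}, {"Email", "string"}},
	})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```

The `{{-` trim markers keep the generated code free of blank lines, which matters once the output goes through gofmt or review.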
btw, you committed the built binary to your git repo
How is a one person fork of Go in any way going to ever be more secure than the original which is developed by many people? Why should I trust your changes? Is this actually an adversarial project that will hide and rug pull down the road?
2. "How is a one person fork of Go in any way going to ever be more secure than the original which is developed by many people?" - Read the README.
3. "Why should I trust your changes?" - You don't have to. For the same reasons you don't have to trust the GitHub project you're cloning.
4. "Is this actually an adversarial project that will hide and rug pull down the road?" - Read the code.
Sarcasm aside, the objective is "helping to find bugs in Go codebases via built-in security implementations". That's mainly useful for fuzzing and testing. Don't deploy a binary built with that compiler to production.
Yes, I'm using Dagger, and it has great secret support: it obfuscates secrets so that even if the agent, for example, cats the contents of a key file, it will never be able to read or print the secret value itself
tl;dr there are a lot of ways to keep secret contents away from your agent, some without actually having to keep them "physically" separate
I want to like Anthropic, they have such a great knowledge sharing culture and their published content is second to none, but then they keep pulling stuff like this... I just can't bring myself to trust their leadership's values or ethics.
I would disagree on the knowledge sharing. They're the only major AI company that's released zero open weight models. Nor do they share any research regarding safety training, even though that's supposedly the whole reason for their existence.
I agree with you on your examples, but would point out there are some places they have contributed excellent content.
In building my custom replacement for Copilot in VS Code, Anthropic's knowledge sharing about what they're doing to make Claude Code better has been invaluable.