
Maybe the code and task were super simple. Maybe the whole rewrite was already (more or less) in its training data in some way.


Naturally it wasn't breaking new ground in computer science :)

It was a "script" (application? the line is blurry) that reads my Obsidian movie/anime/TV watchlist markdown files from my vault, grabs the title, and searches for that title on Themoviedb.

If there are multiple matches, it displays a dialog to pick the correct one.

Then it fills in the relevant front matter, grabs a cover image for Obsidian Bases, and adds some extra info to the page for that item.

The Python version worked just fine, but I wanted to share the tool, and I just can't be arsed to figure out packaging and distribution with Python. With Go it's a single binary.

Without LLM assistance I could've easily spent a few nights doing this, and most likely would've just not done it.

Now it took only the effort of giving the LLM a git worktree to safely go wild in (I specifically said it could freely delete anything to clean up old crap), and that's what it did.

And this isn't the first Python -> Go transition I've done; I did the same for a bunch of small utility scripts back when GPT-3/3.5 was the new hotness. It wasn't as smooth then (many a library and API was hallucinated), but still markedly faster than doing it by hand.


Now, that's actually a very reasonable usage, because it isn't coding, it's translation. The counter-argument is that many code-translation programs already exist, and such a tool can be built in a way that gives guarantees about security, proper practices, etc., which you can't guarantee with an LLM; a specialized program is also orders of magnitude less resource-intensive.

Of course it's nice as a hobbyist end user to do exactly what you did for a simple script, and that's to the credit of the LLM. The overarching issue is that this extremely inefficient process is only possible thanks to subsidization by venture capital.


I personally prefer Go with LLMs because it has a relatively large ecosystem of analyzers and other tooling for statically checking that there are no major issues with the code.

Also, the compiler being a stickler about unused code etc. keeps agentic models in check; they can't YOLO stuff as hard as in, say, Python.
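For a concrete picture of the kind of checks being alluded to, these are real, commonly used Go static-analysis commands (which specific tools the commenter had in mind is not stated):

```shell
# Built into the toolchain: catches suspicious constructs
# (unreachable code, printf mismatches, etc.)
go vet ./...

# honnef.co/go/tools: a much larger set of correctness
# and simplification checks
staticcheck ./...

# Aggregates many linters behind one command and config file
golangci-lint run
```

Wiring these into an agent's feedback loop (run after every edit, feed the diagnostics back) is what makes Go's strictness useful here: unused imports and variables are hard compile errors, so sloppy generated code fails fast instead of silently running.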



