
Another theory: you have some spec in your mind, write down most of it, and expect the LLM to implement it according to that spec. The result will objectively deviate from the spec.

Some developers will either retrospectively change the spec in their head or simply be fine with the slight deviation. Other developers will be disappointed, because the LLM didn't deliver on the spec they clearly hold in their head.

It's a bit like a psychological false memory effect where you misremember what you asked for, and/or some people are more flexible in their expectations and accept "close enough" while others won't.

At least, I noticed both behaviors in myself.



This is true. But, it's also true of assigning tasks to junior developers. You'll get back something which is a bit like what you asked for, but not done exactly how you would have done it.

Both situations need an iterative process to fix and polish before the task is done.

The notable thing for me was, we crossed a line about six months ago where I now spend less time polishing the LLM output than I used to spend working with junior developers. (Disclaimer: at my current place of work we don't have any junior developers, so I'm not comparing like-with-like on the same task, so I may have some false memories there too.)

But I think this is why some developers have good experiences with LLM-based tools. They're not asking "can this replace me?" they're asking "can this replace those other people?"


> They're not asking "can this replace me?" they're asking "can this replace those other people?"

People in general underestimate other people, so this is the wrong way to think about it. If it can't replace you, then typically it can't replace other people either.


But a junior developer can learn and improve based on the specific feedback you give them.

GPT5 will, at least to a first approximation, always be exactly as good or as bad as it is today.


> They're not asking "can this replace me?" they're asking "can this replace those other people?"

In other words, this whole thing is a misanthropic fever dream


Yeah, I see quite a lot of misanthropy in the rhetoric people sometimes use to advance AI. I'll say something like "most people are able to learn from their mistakes, whereas an LLM won't" and then some smartass will reply "you think too highly of most people" -- as if this simple capability is just beyond a mere mortal's abilities.


> misanthropic

I see what you did there


This is a really short sighted way to look at things. Juniors become seniors. LLMs just keep hallucinating.


This implies that it executes the spec correctly, just not in a way that's expected. But if you actually look at how these things operate, that's flat out not true.

Mitchell Hashimoto just did a write-up about his process for shipping a new feature for Ghostty using AI. He clearly knows what he's doing and follows all the AI "best practices" as far as I could tell. And while he very clearly enjoyed the process and thinks it made him more productive, the post is also a laundry list of this thing just shitting the bed. It gets confused, can't complete tasks, and architects the code in ways that don't make sense. He clearly had to watch it closely, step in regularly, and in some cases throw the code out entirely and write it himself.

The amount of work I've seen people describe to get "decent" results is absurd, and a lot of people just aren't going to do that. For my money it's far better as a research assistant and something to bounce ideas off of. Or if it is going to write something it needs to be highly structured input with highly structured output and a very narrow scope.



