
LLMs literally read all that commentary in training, so they're taking it into account, not regurgitating the top-voted answer from SO. They're arguably better at this than junior devs.


While LLMs are better at reading the surrounding context, I am not convinced they are particularly good at taking it on board (compared to an adult human, that is; they're obviously fantastic compared to any previous NLP system).

The biggest failure mode I experience with LLMs is a very human-like pattern: it looks like corresponding with an interlocutor who absolutely does not understand a core point you raised five messages earlier and have re-emphasised after each incorrect response:

--

>>> x

>> y

> not y, x

oh, right, I see… y

--

etc.


That isn’t how they work, and that’s probably why you’re being downvoted.



