
People don't understand that LLMs aren't humans. When humans communicate, a lot of context is implicit; LLMs don't share that context.

They do have biases: if you tell them to do something with data, they'll most likely reach for Python as the tool.

And different models have different biases and styles. You can try to steer them toward your specific style with prompts, but it doesn't always work, depending on how esoteric your personal style is.



It helps me to imagine the tool as a college intern. It has no idea how the real world works. It blindly follows things it previously found online. It's great at very common boilerplate coding tasks. But it's super naive and will need hand-holding, or you'll have to provide a huge amount of context before it can operate on its own.

I’m still very much learning how to give it good instructions to accomplish tasks. Different tasks require different types and methods of instruction. It’s extremely interesting to me. I’m far from an expert.


I imagine LLMs as an endless stream of consultants, each of whom can work only one day (one context window).

Every day you need to bring them up to speed (prompt, accessible documentation) and give them the task of the day. If it looks like they can't finish the task (the context runs out), you need to tell them to write down where they left off (store the context in some memory; a markdown file is fine) and kick them out the door.

Then GOTO 10, get the next one in.
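
A minimal sketch of that loop in Python, assuming a hypothetical ask_model() wrapper around whatever LLM API you use and a NOTES.md file as the shared memory between "consultants":

    from pathlib import Path

    NOTES = Path("NOTES.md")   # handoff notes shared between consultants
    TASK = "Refactor the billing module and keep the tests green."

    def onboard() -> list[dict]:
        """Build the briefing a fresh consultant gets every morning."""
        handoff = NOTES.read_text() if NOTES.exists() else "No notes yet."
        return [
            {"role": "system",
             "content": "You are a consultant with one day (one context window)."},
            {"role": "user",
             "content": f"Task: {TASK}\n\nNotes from your predecessor:\n{handoff}"},
        ]

    while True:                              # "GOTO 10"
        messages = onboard()
        reply = ask_model(messages)          # hypothetical LLM call
        if "TASK COMPLETE" in reply:         # agreed-upon completion marker
            break
        # Day is over / context nearly full: ask for a handoff note, store it.
        messages += [
            {"role": "assistant", "content": reply},
            {"role": "user",
             "content": "Write down where you left off for tomorrow's consultant."},
        ]
        NOTES.write_text(ask_model(messages))  # the next consultant reads this

The completion marker and the ask_model()/onboard() names are placeholders, not any particular vendor's API; the point is only the shape of the loop: brief, work, hand off, repeat.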



