Hacker News

> The comment about parrots and dogs is made in bad faith

Not necessarily. (Some aphonic, adactyl downvoters seem to have tried to nudge you into noticing that your idea above goes against the implied spirit of the guidelines.)

The poster may have meant that, for the use natural to him, the results feel about as useful as a discussion with a good animal. "Clarifying one's prompts" may be effective in some cases, but it's probably not what others seek. It is possible that many want the good old combination of "informative" and "insightful": in practice there may be issues with both.



> "Clarifying one's prompts" may be effective in some cases but it's probably not what others seek

It's not even that. Can the LLM walk away, stop the conversation, or even say no? It's like your boss "talking" to you about a task without giving you a chance to respond. Is that a talk? It's one-way.

E.g. ask the LLM who invented Wikipedia. It will respond with "facts". If I ask a friend, the reply might be "look it up yourself". That is a real conversation. Until an LLM can do that, it isn't one.

Even parrots and dogs can respond in ways other than a forced reply shaped exactly to what you asked for.


True - but LLMs can do this.

A German Onion-like satire magazine has a wrapper around ChatGPT, called „DeppGPT“ (IdiotGPT), that behaves exactly like that, likely implemented with a decent system prompt.


If you have something to say, just say it directly and clearly.

And the poster clearly did not mean what you say he "may have meant".




