
You exemplify a big problem with LLMs well: people see accurate-enough output on some test question and take it as evidence that they can trust the output, to any extent, in areas they haven't mastered themselves.


No, that's not really the case. I don't think you should trust LLM output at all, but in general I think it's closer to Wikipedia's level of reliability than it is to useless bullshit.

Which is to say that it's useful, but you shouldn't trust it without double-checking.



