It's frustrating to see so much attention given to how these language models can be tricked into saying silly things.

They are an amazing achievement. We finally have a practical tool for extracting information from text just by talking to it! And yet we're focusing on probing the shadows of their training sets...
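A minimal sketch of the kind of extraction-by-prompting the comment has in mind, assuming the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the environment; the model name, prompt wording, and sample text are placeholders, not anything from the comment itself:

    # Rough illustration only: ask a chat model to pull structured fields
    # out of free text and return them as JSON.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    text = (
        "Order #1234 arrived on March 3rd, two days late, "
        "but support refunded the shipping fee."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {
                "role": "user",
                "content": (
                    "Extract the order number, delivery date, and whether a "
                    "refund was issued from the text below. Reply with a JSON "
                    "object only.\n\n" + text
                ),
            }
        ],
    )

    # The model's reply, e.g. {"order_number": "1234", "delivery_date": "March 3rd", ...}
    print(response.choices[0].message.content)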


