Hacker News

Toddlers don't understand truth either, until it's taught.

This crayon is red. This crayon is blue.

The adult asks: "Is this crayon red?" The child responds: "No, that crayon is blue." The adult then affirms or corrects the response.

This repeats over and over until the child understands the difference between red and blue, orange and green, yellow and black, and so on.

We then move on to more complex items and comparisons. How could we expect an AI to understand these truths without training it to do so?
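The affirm-or-correct loop described above is essentially supervised learning from labeled examples: the learner guesses, a teacher corrects, and the correction is stored. A minimal toy sketch of that idea (the dataset, function names, and nearest-example "child" are all hypothetical illustrations, not anyone's actual training method):

```python
# Toy "crayon" dataset: RGB triples labeled by a teacher (the adult).
# All names and values here are illustrative.
CRAYONS = [
    ((255, 0, 0), "red"),
    ((0, 0, 255), "blue"),
    ((255, 165, 0), "orange"),
    ((0, 128, 0), "green"),
]

def nearest_label(memory, rgb):
    """The child's guess: recall the stored example closest to this crayon."""
    if not memory:
        return None  # the child has seen nothing yet
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(memory, key=lambda example: dist(example[0], rgb))[1]

def teach(rounds=3):
    memory = []  # what the child has learned so far
    for _ in range(rounds):
        for rgb, truth in CRAYONS:            # adult shows a crayon
            guess = nearest_label(memory, rgb)  # child answers
            if guess != truth:                  # adult corrects the answer...
                memory.append((rgb, truth))     # ...and the child remembers
    return memory

memory = teach()
```

After a few rounds of correction, the learner generalizes to crayons it has never seen, e.g. `nearest_label(memory, (250, 5, 5))` returns `"red"`.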



You probably need to be clearer: the LLM is trained on large amounts of data making statements about facts. It is told repeatedly, "according to this source, that crayon is blue."



