This crayon is red. This crayon is blue.
The adult asks: "Is this crayon red?" The child responds: "No, that crayon is blue." The adult then affirms or corrects the response.
This repeats over and over until the child understands the difference between red and blue, orange and green, yellow and black, and so on.
We then move on to more complex items and comparisons. How could we expect an AI to grasp these truths without training it to understand?
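That affirm-or-correct loop is, in essence, supervised learning. As a minimal sketch (the crayon data, weights, and update rule here are all invented for illustration, not drawn from any particular system), here is a perceptron-style trainer that guesses a color and is corrected only when it's wrong:

```python
import random

# Toy "crayons": RGB triples labeled by color, standing in for the
# labeled examples an adult supplies to a child. (Hypothetical data.)
CRAYONS = [
    ((0.9, 0.1, 0.1), "red"),
    ((0.8, 0.2, 0.1), "red"),
    ((0.1, 0.1, 0.9), "blue"),
    ((0.2, 0.1, 0.8), "blue"),
]

# One weight per channel plus a bias; a positive score means "red".
weights = [0.0, 0.0, 0.0]
bias = 0.0
LEARNING_RATE = 0.1

def predict(rgb):
    score = sum(w * x for w, x in zip(weights, rgb)) + bias
    return "red" if score > 0 else "blue"

# The affirm-or-correct loop: show a crayon, take the model's answer,
# and nudge the weights only when the answer was wrong.
for _ in range(1000):
    rgb, label = random.choice(CRAYONS)
    guess = predict(rgb)
    if guess != label:
        direction = 1.0 if label == "red" else -1.0
        for i in range(3):
            weights[i] += LEARNING_RATE * direction * rgb[i]
        bias += LEARNING_RATE * direction

print(predict((0.95, 0.05, 0.05)))  # expected: red
print(predict((0.05, 0.05, 0.95)))  # expected: blue
```

Just as with the child, nothing happens on a correct answer; only the corrections reshape the model, and repetition over many examples is what makes the distinction stick.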