Hacker News

I tried out Tom Waits's "The Piano Has Been Drinking", which it gets spectacularly wrong: https://www.songtell.com/tom-waits/the-piano-has-been-drinki...

It saddens me that we're entering into an age of infinite machine-generated bullshit (and here I mean "speech produced without regard for truth"). A lot of it has negative information value, kind of like how misshapen molecules can block cell receptors. I hope somebody is working on tools to detect AI-generated content so I can give it zero attention without having to dig through it.



>speech produced without regard for truth

People are no different from transformer models in this regard, even if they are well-intentioned. Every time there is a physics or finance thread on HN, a lot of well-meaning software engineers transformer out absolute nonsense, with all of the same "confident language that signals an intelligent author, completely wrong" qualities that we see in these AIs.


I get your point -- people also produce bullshit -- but people are significantly different in that they are not only capable of finding the truth but mostly do so with some regularity. I expect that even the most "confident but wrong" software engineer has areas of their life where they are perfectly able to recognize the truth and correct their statements accordingly. Indeed, observing children, I think the feedback loop of "confidently say the wrong thing and self-correct over time" is how a lot of actual learning happens. Transformer models concern me because they are entirely missing that loop.


ChatGPT also has areas where it can recognize the truth. I agree that some people can hedge their confidence in some areas, but it's not a universal trait that everyone exhibits all the time. I think this shows we're only sometimes generally intelligent.


The real difference is in scale. Automations can be coordinated to produce self-affirming bullshit at a scale that drives real discussion out of view. You already see this on Twitter with troll farms and primitive bots. Now it will be a tidal wave.


It already is a tidal wave just with people.


> ChatGPT also has areas where it can recognize the truth.

Do you have some examples here?


ChatGPT will provide correct answers to a lot of questions, especially ones where you'd expect to find mostly correct answers in the first few Google results.


Right?

There are only two useful interpretations for the meaning of a song:

1. What the artist intended

2. What a listener interprets through their lens

Both can be valuable and True. Note that neither of these is "what a mechanical parrot spews out based on a crude imitation of human speech".

What a remarkable engine for converting energy and money into bullshit.


Same thing with B.Y.O.B by System of a Down:

> "B.Y.O.B." by System of a Down is a protest song criticizing the government for sending the poor to fight wars instead of enlisting wealthy people. The chorus and refrains are calling people to "party" and "dance in the desert" to protest the lack of governmental action to keep the poor out of wars. Furthermore, the song discusses how the government lies to people and hands them over to obsoletion. The song is ultimately a call to arms for those in power to take responsibility for their actions and to end the exploitation of the poor.

A whole lot of nonsense except for the last sentence, which pretty much summarizes the song.


The signal-to-noise ratio on today's internet is very bad.



