I implore everyone to watch the movie ‘Money Monster’, not only because it’s a great movie but also because I think it has a minor plot point that basically predicts how AI will be used.
(small spoiler)
In Money Monster it turns out the hedge fund manager, who blames their state-of-the-art AI trading bot for malfunctioning and rapidly selling off a certain stock (tanking it in the process), actually caused it through a machine-level error. He can't explain what the error was, or how or why it happened, because 'he didn't program their trading bot, some guy in Asia did.'
But as it turns out, he did cause it in some way.
I feel like using AI as a way to further abstract away blame when something goes wrong will be a big thing, even when the fault secretly lies neither with the AI (ML) nor with whoever trained it.
My best guess is that at some point a "neutral" AI will be put in charge of everything, and everybody must obey for the good of society. As in, one day you only have 22 energy points to spend on electricity, or another day you can only eat crickets, and so on. But it will only appear neutral - in reality somebody will control the AI, and hence control the people. And people will go along with its decisions because the AI "knows best."
It's supposed to be applied equally to all citizens, but we see cases every day where it's not. The wealthy, the enforcers, and their allies are frequently spared from "the law."
AI is genuinely just a rebranding of "algorithms". I get it, it's faster and it works by feeding in data instead of feeding in code... but code is data, data is code, and so on.
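The "code is data, data is code" point can be made concrete with a toy sketch (this is purely illustrative, not any real ML library): the same behavior expressed once as hand-written code, and once as a parameter blob fed to a fixed, generic routine - which is essentially what a trained model is.

```python
# Behavior as code: the rule is written by a programmer.
def double_plus_one(x):
    return 2 * x + 1

# Behavior as data: the rule lives in a parameter tuple; the code below
# is a generic interpreter that never changes, no matter what was "learned".
def linear_model(params, x):
    w, b = params
    return w * x + b

learned_params = (2, 1)  # imagine these weights came out of a training loop

# Both compute the same thing; only where the "program" lives differs.
assert double_plus_one(10) == linear_model(learned_params, 10) == 21
```

Swapping `learned_params` for a different tuple changes the behavior without touching a single line of code, which is the sense in which "feeding data" and "feeding code" blur together.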
Jim Keller said it best: roughly every 10 orders of magnitude in available computation, the paradigm of computing shifts to a higher level of mathematical building block. In the beginning we had pure logic. Then came addition and subtraction, then vectors, then matrices, and now we're at tensors; he believes the next building block is graphs.
I'm citing this from memory while sleep-deprived, so I may have the exact factor wrong, but I think the general idea holds: every big jump in available computation brings a paradigm shift in what is possible.
But it's just algorithms. Always has been, and still is.
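The "it's just algorithms" point is easy to see in the building-block progression itself. A minimal sketch in plain Python (my own illustration, not Keller's): each level is composed entirely out of the level below it, all the way down to elementary operations.

```python
def logic_and(a: bool, b: bool) -> bool:
    # Pure logic: the bottom of the stack.
    return a and b

def add(a: int, b: int) -> int:
    # Arithmetic: addition and subtraction.
    return a + b

def dot(u, v):
    # Vectors: a dot product is just repeated multiplication and addition.
    return sum(x * y for x, y in zip(u, v))

def matmul(A, B):
    # Matrices: a matrix product is just dot products of rows with columns.
    cols = list(zip(*B))
    return [[dot(row, col) for col in cols] for row in A]

# A 2x2 example: every layer bottoms out in the same elementary operations.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

Tensors continue the same pattern one level up (matrices stacked along more axes), which is why "higher building block" never stops meaning "algorithm built from smaller algorithms."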