The best part of this is that a couple of years ago I watched Sam Altman, in response to a question about energy consumption, say he really thinks fusion is only a short time away. That was the moment I knew he's a quack.
Not to be anti-YC on their forum, but the VC business model is all about splashing cash on a wide variety of junk that will mostly be worthless, hyping it to the max, and hoping one or two turn out like Amazon or Facebook. He's not an engineer; he's like Steve Jobs without the good parts.
Altman recently said, in response to a question about the prospect of half of entry-level white-collar jobs being replaced by "AI" and college graduates being put out of work by it:
> "I mean in 2035, that, like, graduating college student, if they still go to college at all, could very well be, like, leaving on a mission to explore the solar system on a spaceship in some completely new, exciting, super well-paid, super interesting job, and feeling so bad for you and I that, like, we had to do this kind of, like, really boring old kind of work and everything is just better."
Which should be reassuring to anyone having trouble finding an entry-level job as an illustrator or copywriter or programmer or whatever.
Fusion is 8 light-minutes away. The connection gets blocked often, so methods to buffer power for those periods are critical, but they're getting better, so it has become a lot more practical to use remote fusion power at large scales. It seems likely that the power-buffering problem is easier to solve than the local fusion problem, so more development goes into improving remote fusion power than local.
Sam is an investor in a fusion startup. In any case, how long it takes us to get to working fusion is inversely proportional to the amount of funding it receives. I'm hopeful that increased energy needs will spur more investment in it.
People saying that usually mean it as "AI is here and going to change everything overnight now", yet if you take it literally, it's "we're actually over 50 years into AI, and things will likely continue to advance slowly over decades".
The common thread between "AI is anything that doesn't work yet" and "what we have is still not yet AI" is that this technology could probably have used a less distracting marketing name, one that describes what it delivers rather than what it's supposed to be delivering.
Machine learning as a descriptive phrase has stopped being relevant. It implies the discovery of information in a training set. The pre-training of an LLM is most definitely machine learning. But what people are excited and interested in is the use of this learned data in generative AI. “Machine learning” doesn’t capture that aspect.
But the things we try to make LLMs do after pre-training are primarily achieved via reinforcement learning. Isn't reinforcement learning machine learning? Correct me if I'm misconstruing what you're trying to say here.
You are still talking about training. Generative applications have always been fundamentally different from classification problems, and have now (in the form of transformers and diffusion models) taken on entirely new architectures.
If “machine learning” is taken to be so broad as to include any artificial neural network, all of which are trained with back propagation these days, then it is useless as a term.
The term “machine learning” was coined in the era of specialized classification agents that would learn how to segment inputs in some way. Think email spam detection, or identifying cat pictures. These algorithms are still an essential part of both the pre-training and RLHF fine-tuning of LLMs. But the generative architectures are new and central to the current interest in and hype surrounding AI at this point in time.
I see a fair amount of bullshit in the LLM space though, where even cursory consideration would connect the methods back to well-known principles in ML (and even statistics!) to measure model quality and progress. There's a lot of 'woo, it's new! we don't know how to measure it exactly but we think it's groundbreaking!' which is simply wrong.
From where I sit, the generative models provide more flexibility but tend to underperform on any particular task relative to a targeted machine learning effort, once you actually do the work on comparative evaluation.
I think we have a vocabulary problem here, because I am having a hard time understanding what you are trying to say.
You appear to be comparing apples to oranges. A generation task is not a categorization task. Machine learning solves categorization problems. Generative AI uses models trained by machine learning methods, but in a very different architecture, to solve generative problems. A completely different and incomparable application domain.
I think you're overstating the distinction between ML and generation - plenty of ML methods involve generative models. Even basic linear regression with a squared loss can be framed as a generative model derived by assuming Gaussian noise. Probabilistic PCA, HMMs, GMMs, etc. - generation has been a core part of ML for over 20 years.
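To make the linear-regression point concrete, here's a minimal sketch (my own illustration, not something from the thread) showing that minimizing squared error and maximizing the likelihood of a Gaussian-noise generative model recover the same weights:

```python
# Minimal sketch (assumed illustration): ordinary least squares and the
# maximum-likelihood fit of a Gaussian-noise generative model give the
# same regression weights.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.3, size=200)

# 1) Discriminative view: minimize squared error directly.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# 2) Generative view: assume y | x ~ Normal(x . w, sigma^2) and maximize
#    the log-likelihood jointly over w and sigma.
def neg_log_likelihood(params):
    w, log_sigma = params[:3], params[3]
    sigma = np.exp(log_sigma)
    resid = y - X @ w
    return 0.5 * np.sum(resid ** 2) / sigma ** 2 + len(y) * log_sigma

w_mle = minimize(neg_log_likelihood, x0=np.zeros(4)).x[:3]

print("least squares:", w_ols)
print("Gaussian MLE: ", w_mle)  # matches least squares up to optimizer tolerance
```

The point is just that the squared-error objective is the Gaussian negative log-likelihood in disguise, and the same generative reading extends to probabilistic PCA, GMMs, and the other classic models mentioned above.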
Because if they're curious, they can read up on (or ask an "AI" about) machine learning, rather than just AI, and learn more about the capabilities, difficulties, and mechanics of how it works, learn some of the history, and have grounded expectations for what the next 10 years of development might look like.
That was an impressive takeaway from the first machine learning course I took: that many things previously under the umbrella of Artificial Intelligence have since been demystified and demoted to implementations we now just take for granted. Some examples were real-world map route planning for transport, locating faces in images, and Bayesian spam filters.
When I was a young child in Indonesia, we had an exceptionally fancy washing machine with all sorts of broken-English superlatives on it, including "fuzzy logic artificial intelligence", and I used to watch it doing the turbo spin or whatever, wondering what it was thinking. My poor mom thought I was retarded.
Andrew Ng has a nice quote: “Instead of doing AI, we ended up spending our lives doing curve fitting.”
Ten years ago you'd be ashamed to call anything "AI" and would say "machine learning" if you wanted to be taken seriously, but neural networks have really brought the term back, and for good reason, given the results.
Well, that's rather the point - arguing about terminology that's already in exceptionally heavy use isn't useful, because there's already a largely shared understanding. Stepping away from that is a huge effort, unlikely to work, and at best all you've done is change what people mean when they use a word.
Except AI already had a clear definition well before it started being used as a way to inflate valuations and push marketing narratives.
If nothing else, it's been a sci-fi topic for more than a century. There are connotations, cultural baggage, and expectations from the general population about what AI is and what it's capable of, most of which isn't possible or applicable to the current crop of "AI" tools.
You can't just change the meaning of a word overnight and toss all that history away, which is why it comes across as an intentionally dishonest choice in the name of profits.
And you should do some reading into the edit history of that page. Wikipedia isn't immune from concerted efforts to astroturf and push marketing narratives.
More to the point, the history of AI up through about 2010 is about attempts to get it working using different approaches to the problem space, followed by a shift in the definitions of what AI is in the 2005-2015 range (narrow AI vs. AGI). Plenty of talk about the various methods and lines of research that were being attempted, but very little about publicly pushing to call commercially available deliverables AI.
Once we got to the point where large amounts of VC money were being pumped into these companies, there was an incentive to redefine AI in favor of what was within the capabilities and scope of machine learning and LLMs, regardless of whether that fit the historical definition of AI.
And real AI is probably like fusion. It's always 10 years away.