You would find things in there that were already close to QM and relativity. The Michelson-Morley experiment was 1887, and the length contraction later folded into the Lorentz transformations was proposed by FitzGerald in 1889. The photoelectric effect (which Einstein explained in terms of photons in 1905) was also discovered in 1887. William Clifford (who _died_ in 1879) had notions that foreshadowed general relativity: "Riemann, and more specifically Clifford, conjectured that forces and matter might be local irregularities in the curvature of space, and in this they were strikingly prophetic, though for their pains they were dismissed at the time as visionaries." - Banesh Hoffmann (1973)
Things don't happen all of a sudden, and with the ability to see all the scientific papers of the era, it's possible those theories could have fallen out of the synthesis.
I presume that's what the parent post is trying to get at? Seeing if, given the cutting-edge scientific knowledge of the day, the LLM is able to synthesise it all into a workable theory of QM by making the necessary connections and (quantum...) leaps.
But that's not the OP's challenge, he said "if the model comes up with anything even remotely correct." The point is there were things already "remotely correct" out there in 1900. If the LLM finds them, it wouldn't "be quite a strong evidence that LLMs are a path to something bigger."
It's not the comment which is illogical, it's your (mis)interpretation of it. What I (and seemingly others) took it to mean is basically could an LLM do Einstein's job? Could it weave together all those loose threads into a coherent new way of understanding the physical world? If so, AGI can't be far behind.
This alone still wouldn't be a clear demonstration that AGI is around the corner. It's quite possible a LLM could've done Einstein's job, if Einstein's job was truly just synthesising already available information into a coherent new whole. (I couldn't say, I don't know enough of the physics landscape of the day to claim either way.)
It's still unclear whether this process could be merely continued, seeded only with new physical data, in order to keep progressing beyond that point, "forever", or at least for as long as we imagine humans will continue to go on making scientific progress.
Einstein is chosen in such contexts because he's the paradigmatic paradigm-shifter. Basically, what you're saying is: "I don't know enough history of science to confirm this incredibly high opinion on Einstein's achievements. It could just be that everyone's been wrong about him, and if I'd really get down and dirty, and learn the facts at hand, I might even prove it." Einstein is chosen to avoid exactly this kind of nit-picking.
These two are so far above everyone else in the mathematical world that most people would struggle for weeks or even months to understand something they did in a couple of minutes.
There's no "get down and dirty" shortcut with them =)
No, by saying this, I am not downplaying Einstein's sizeable achievements nor trying to imply everyone was wrong about him. His was an impressive breadth of knowledge and mathematical prowess and there's no denying this.
However, what I'm saying is not mere nitpicking either. It is precisely because of my belief in Einstein's extraordinary abilities that I find it unconvincing that an LLM being able to recombine the extant written physics-related building blocks of 1900, with its practically infinite reading speed, necessarily demonstrates comparable capabilities to Einstein.
The essence of the question is this: would Einstein, having been granted eternal youth and a neverending source of data on physical phenomena, be able to innovate forever? Would an LLM?
My position is that even if an LLM is able to synthesise special relativity given 1900 knowledge, this doesn't necessarily mean that a positive answer to the first question implies a positive answer to the second.
I'm sorry, but 'not being surprised if LLMs can rederive relativity and QM from the facts available in 1900' is a pretty scalding take.
This would absolutely be very good evidence that models can actually come up with novel, paradigm-shifting ideas. It was absolutely not obvious at the time from the existing facts, and some crazy leaps of faith needed to be taken.
This is especially true for General Relativity, for which you had just a few mismatches in the measurements, like Mercury's precession, and where the theory follows almost entirely from thought experiments.
Isn't it an interesting question? Wouldn't you like to know the answer? I don't think anyone is claiming anything more than an interesting thought experiment.
This does make me think about Kuhn's concept of scientific revolutions and paradigms, and that paradigms are incommensurate with one another. Since new paradigms can't be proven or disproven by the rules of the old paradigm, if an LLM could independently discover paradigm shifts similar to moving from Newtonian gravity to general relativity, then we have empirical evidence of an LLM performing a feature of general intelligence.
However, you could also argue that it's actually empirical evidence that the move from 19th-century physics to general relativity wasn't truly a paradigm shift -- you could have 'derived' it from previous data -- and that the LLM has actually proven something about structural similarities between those paradigms, not that it's demonstrating general intelligence...
His concept sounds odd. There will always be many hints of something yet to be discovered, simply by the nature of anything worth discovering having an influence on other things.
For instance, spectroscopy enables one to look at the spectrum emitted by another 'thing', perhaps the sun, and it turns out there are little streaks within the spectra that correspond directly to various elements. This is how we're able to determine the elemental composition of things like the sun.
That connection between elements and the patterns in their spectra was discovered in the early 1800s. And those patterns are caused by quantum mechanical interactions and so it was perhaps one of the first big hints of quantum mechanics, yet it'd still be a century before we got to relativity, let alone quantum mechanics.
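For concreteness, those streaks turned out to be regular enough to be captured by a purely empirical formula well before 1900: Balmer (1885) and then Rydberg (1888) could predict hydrogen's line wavelengths from pairs of integers, decades before quantum mechanics explained why. A minimal statement of it (quoting the modern value of the Rydberg constant, which was not known to that precision at the time):

```latex
% Rydberg formula for hydrogen spectral lines (empirical, 1888):
% each line's wavelength \lambda follows from a pair of integers n_1 < n_2,
% with R_H \approx 1.097 \times 10^7 \,\mathrm{m}^{-1}.
\frac{1}{\lambda} = R_H \left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right), \qquad n_1 < n_2
```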
I mean, "the pieces were already there" is true of everything? Einstein was synthesizing existing math and existing data is your point right?
But the whole question is whether or not something can do that synthesis!
And the "anyone who read all the right papers" thing - nobody actually reads all the papers. That's the bottleneck. LLMs don't have it. They will continue to not have it. Humans will continue to not be able to read faster than LLMs.
> I mean, "the pieces were already there" is true of everything? Einstein was synthesizing existing math and existing data is your point right?
If it's true of everything, then surely having an LLM work iteratively on the pieces, along with being provided additional physical data, will lead to the discovery of everything?
If the answer is "no", then surely something is still missing.
> And the "anyone who read all the right papers" thing - nobody actually reads all the papers. That's the bottleneck. LLMs don't have it. They will continue to not have it. Humans will continue to not be able to read faster than LLMs.
I agree with this. This is a definitive advantage of LLMs.
Actually it's worse than that, the comment implied that Einstein wouldn't even qualify for AGI. But I thought the conversation was pedantic enough without my contribution ;)
I think the problem is the formulation "If so, AGI can't be far behind". I think that if a model were advanced enough such that it could do Einstein's job, that's it; that's AGI. Would it be ASI? Not necessarily, but that's another matter.
The phone in your pocket can perform arithmetic many orders of magnitude faster than any human, even the fringe autistic savant type. Yet it's still obviously not intelligent.
Excellence at any given task is not indicative of intelligence. I think we set these sort of false goalposts because we want something that sounds achievable but is just out of reach at one moment in time. For instance at one time it was believed that a computer playing chess at the level of a human would be proof of intelligence. Of course it sounds naive now, but it was genuinely believed. It ultimately not being so is not us moving the goalposts, so much as us setting artificially low goalposts to begin with.
So for instance what we're speaking of here is logical processing across natural language, yet human intelligence predates natural language. It poses a bit of a logical problem to then define intelligence as the logical processing of natural language.
The problem is that so far, SOTA generalist models are not excellent at just one particular task. They have a very wide range of tasks they are good at, and a good score on one particular benchmark correlates very strongly with good scores on almost all other benchmarks, even esoteric benchmarks that AI labs certainly didn't train against.
I'm sure, without any uncertainty, that any generalist model able to do what Einstein did would be AGI, as in, that model would be able to perform any cognitive task that an intelligent human being could complete in a reasonable amount of time (here "reasonable" depends on the task at hand; it could be minutes, hours, days, years, etc).
I see things rather differently. Here's a few points in no particular order:
(1) - A major part of the challenge is in not being directed towards something. There was no external guidance for Einstein - he wasn't even a formal researcher at the time of his breakthroughs. An LLM might be hand-held towards relativity, though I doubt it, but given the prompt 'hey, find something revolutionary' it's obviously never going to respond with anything relevant, even with a substantially more precise specification of field/subtopic/etc.
(2) - Logical processing of natural language remains one small aspect of intelligence. For example - humanity invented natural language from nothing. The concept of an LLM doing this is a nonstarter since they're dependent upon token prediction, yet we're speaking of starting with 0 tokens.
(3) - LLMs are, in many ways, very much like calculators. They can indeed achieve some quite impressive feats in specific domains, yet they will then completely hallucinate nonsense on relatively trivial queries, particularly on topics where there isn't extensive data to drive their token prediction. I don't entirely understand your extreme optimism towards LLMs given this proclivity for hallucination. Their ability to produce compelling nonsense makes them particularly tedious to use for anything you don't already effectively know the answer to.
> I don't entirely understand your extreme optimism towards LLMs given this proclivity for hallucination
Simply because I don't see hallucinations as a permanent problem. I see that models keep improving more and more in this regard, and I don't see why the hallucination rate can't be arbitrarily reduced with further improvements to the architecture. When I ask Claude about obscure topics, it correctly replies "I don't know", where past models would have hallucinated an answer. When I use GPT 5.2-thinking for my ML research job, I pretty much never encounter hallucinations.
Hahah, well you working in the field probably explains your optimism more than your words! If you pretty much never encounter hallucinations with GPT then you're probably dealing with it on topics where there's less of a right or wrong answer. I encounter them literally every single time I start trying to work out a technical problem with it.
What's the bar here? Does anyone say "we don't know if Einstein could do this because we were really close or because he was really smart?"
I by no means believe LLMs are general intelligence, and I've seen them produce a lot of garbage, but if they could produce these revolutionary theories from only <= year 1900 information and a prompt that is not ridiculously leading, that would be a really compelling demonstration of their power.
> Does anyone say "we don't know if Einstein could do this because we were really close or because he was really smart?"
It turns out my reading is somewhat topical. I've been reading Rhodes' "The Making of the Atomic Bomb", and one of the things he takes great pains to argue (I was not quite anticipating how much I'd be trying to recall my high school science classes to make sense of his account of various experiments) is that the development toward the atomic bomb was more or less inexorable, and if at any point someone had said "this is too far; let's stop here", there would have been others to take his place. So, maybe, to answer your question.
It’s been a while since I read it, but I recall Rhodes’ point being that once the fundamentals of fission in heavy elements were validated, making a working bomb was no longer primarily a question of science, but one of engineering.
Engineering began before they were done with the experimentation and theorizing part. But the US, the UK, France, Germany, the Soviets, and Japan all had nuclear weapons programs with different degrees of success.
> Does anyone say "we don't know if Einstein could do this because we were really close or because he was really smart?"
Yes. It is certainly a question whether Einstein was one of the smartest people who ever lived, or whether all of his discoveries were already in the Zeitgeist and would have been discovered by someone else within ~5 years.
Einstein was smart and put several disjointed things together. It's amazing that one person could do so much, from explaining Brownian motion to explaining the photoelectric effect.
But I think that all these would have happened within _years_ anyway.
> Does anyone say "we don't know if Einstein could do this because we were really close or because he was really smart?"
Kind of, how long would it have realistically taken for someone else (also really smart) to come up with the same thing if Einstein wouldn't have been there?
But you're not actually questioning whether he was "really smart". Which was what GP was questioning. Sure, you can try to quantify the level of smarts, but you can't keep calling it a 'stochastic parrot', just like you wouldn't respond to Einstein's achievements with, "Ah well, in the end I'm still not sure he's actually smart, like I am for example. Could just be that he's just dumbly but systematically going through all options, working it out step by step, nothing I couldn't achieve (or even better, program a computer to do) if I'd put my mind to it."
I personally doubt that this would work. I don't think these systems can achieve truly ground-breaking, paradigm-shifting work. The homeworld of these systems is the corpus of text on which it was trained, in the same way as ours is physical reality. Their access to this reality is always secondary, already distorted by the imperfections of human knowledge.
Well, we know many watershed moments in history were more a matter of situation than the specific person - an individual genius might move things by a decade or two, but in general the difference is marginal. True bolt-out-of-the-blue developments are uncommon, though all the more impressive for that fact, I think.
Well, if one had enough time and resources, this would make for an interesting metric. Could it figure it out with cut-off of 1900? If so, what about 1899? 1898? What context from the marginal year was key to the change in outcome?
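A minimal sketch of what such a sweep could look like, assuming a purely hypothetical query_model() helper that can restrict a model to literature published up to a given cutoff year (no such off-the-shelf API exists, so this is an illustration of the proposed metric rather than a real experiment):

```python
# Hypothetical sketch of the cutoff-year sweep described above.
# query_model() is a placeholder: it assumes a model whose training corpus
# can be restricted to texts published up to `cutoff_year`, which is an
# assumption here rather than an existing API.

PROMPT = (
    "Using only the physics known to you, propose a theory that reconciles "
    "Maxwell's electrodynamics with the Michelson-Morley null result."
)


def query_model(cutoff_year: int, prompt: str) -> str:
    """Placeholder for querying a model trained only on pre-`cutoff_year` texts."""
    raise NotImplementedError("requires a corpus-restricted model")


def sweep(first_year: int = 1895, last_year: int = 1905) -> dict[int, str]:
    """Collect one proposal per cutoff year, so the marginal year whose extra
    context changes the outcome can be identified by inspection."""
    return {year: query_model(year, PROMPT) for year in range(first_year, last_year + 1)}
```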
It's only easy to see precursors in hindsight. The Michelson-Morley tale is a great example of this. In hindsight, their experiment was screaming relativity, because it demonstrated that the speed of light was identical from two perspectives - a result that is very difficult to explain without relativity. Lorentz contraction was just a completely ad-hoc proposal to maintain the assumptions of the time (the luminiferous aether in particular) while also explaining the result. But in general it was not seen as that big of a deal.
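(For concreteness, the ad-hoc fix amounted to postulating that lengths contract along the direction of motion through the supposed aether by exactly the factor needed to cancel the expected fringe shift; a minimal statement of the FitzGerald-Lorentz contraction:)

```latex
% FitzGerald-Lorentz contraction: a body moving at speed v relative to the
% supposed aether contracts along its direction of motion, which cancels
% the fringe shift Michelson and Morley expected to observe.
L = L_0 \sqrt{1 - \frac{v^2}{c^2}}
```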
There's a very similar parallel with dark matter in modern times. We certainly have endless hints to the truth that will be evident in hindsight, but for now? We are mostly convinced that we know the truth, perform experiments to prove that, find nothing, shrug, adjust the model to be even more esoteric, and repeat onto the next one. And maybe one will eventually show something, or maybe we're on the wrong path altogether. This quote, from Michelson in 1894 (more than a decade before Einstein would come along), is extremely telling of the opinion at the time:
"While it is never safe to affirm that the future of Physical Science has no marvels in store even more astonishing than those of the past, it seems probable that most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice. It is here that the science of measurement shows its importance — where quantitative work is more to be desired than qualitative work. An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals." - Michelson 1894
With the passage of time more and more things have been discovered through precision. Through identifying small errors in some measurement and pursuing that to find the cause.
It's not precision that's the problem, but understanding when something has been falsified. For instance the Lorentz transformations work as a perfectly fine ad-hoc solution to Michelson's discovery. All it did was make the aether a bit more esoteric in nature. Why do you then not simply shrug, accept it, and move on? Perhaps even toss some accolades towards Lorentz for 'solving' the puzzle? Michelson himself certainly felt there was no particularly relevant mystery outstanding.
For another parallel our understanding of the big bang was, and probably is, wrong. There are a lot of problems with the traditional view of the big bang with the horizon problem [1] being just one among many - areas in space that should not have had time to interact behave like they have. So this was 'solved' by an ad hoc solution - just make the expansion of the universe go into super-light speed for a fraction of a second at a specific moment, slow down, then start speeding up again (cosmic inflation [2]) - and it all works just fine. So you know what we did? Shrugged, accepted it, and even gave Guth et al a bunch of accolades for 'solving' the puzzle.
This is the problem - arguably the most important principle of science is falsifiability. But when is something falsified? Because in many situations, probably the overwhelming majority, you can instead just use one falsification to create a new hypothesis with that nuance integrated into it. And as science moves beyond singular formulas derived from clear principles or laws and onto broad encompassing models based on correlations from limited observations, this becomes more and more true.
This would still be valuable even if the LLM only finds out about things that are already in the air.
It’s probably even more of a problem that different areas of scientific development don’t know about each other. LLMs combining those results still wouldn’t be the same as them inventing something new.
But if they could give us a head start of 20 years on certain developments this would be an awesome result.
Then that experiment is even more interesting, and should be done.
My own prediction is that the LLMs would totally fail at connecting the dots, but a small group of very smart humans can.
Things don't happen all of a sudden, but they also don't happen everywhere. Most people in most parts of the world would never connect the dots. Scientific curiosity is something valuable and fragile, that we just take for granted.
One of the reasons they don’t happen everywhere is because there are just a few places at any given point in time where there are enough well connected and educated individuals who are in a position to even see all the dots let alone connect them.
This doesn’t discount the achievement if an LLM also manages to, but I think it’s important to recognise that having enough giants in sight is an important prerequisite to standing on their shoulders.
If (as you seem to be suggesting) relativity was effectively lying there on the table waiting for Einstein to just pick it up, how come it blindsided most, if not quite all, of the greatest minds of his generation?
That's the case with all scientific discoveries - pieces of prior work get accumulated, until it eventually becomes obvious[0] how they connect, at which point someone[1] connects the dots, making a discovery... and putting it on the table, for the cycle to repeat anew. This is, in a nutshell, the history of all scientific and technological progress. Accumulation of tiny increments.
--
[0] - To people who happen to have the right background and skill set, and are in the right place.
[1] - Almost always multiple someones, independently, within short time of each other. People usually remember only one or two because, for better or worse, history is much like patent law: first to file wins.
Science often advances by accumulation, and it’s true that multiple people frequently converge on similar ideas once the surrounding toolkit exists. But “it becomes obvious” is doing a lot of work here, and the history around relativity (special and general) is a pretty good demonstration that it often doesn’t become obvious at all, even to very smart people with front-row seats.
Take Michelson in 1894: after doing (and inspiring) the kind of precision work that should have set off alarm bells, he’s still talking like the fundamentals are basically done and progress is just “sixth decimal place” refinement.
"While it is never safe to affirm that the future of Physical Science has no marvels in store even more astonishing than those of the past, it seems probable that most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice. It is here that the science of measurement shows its importance — where quantitative work is more to be desired than qualitative work. An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals." - Michelson 1894
The Michelson-Morley experiments weren't obscure, they were famous, discussed widely, and their null result was well-known. Yet for nearly two decades, the greatest physicists of the era proposed increasingly baroque modifications to existing theory rather than question the foundational assumption of absolute time. These weren't failures of data availability or technical skill, they were failures of imagination constrained by what seemed obviously true about the nature of time itself.
Einstein's insight wasn't just "connecting dots" here, it was recognizing that a dot everyone thought was fixed (the absoluteness of simultaneity) could be moved, and that doing so made everything else fall into place.
People scorn the 'Great Man Hypothesis' so much they sometimes swing too much in the other direction. The 'multiple discovery' pattern you cite is real but often overstated. For Special Relativity, Poincaré came close, but didn't make the full conceptual break. Lorentz had the mathematics but retained the aether. The gap between 'almost there' and 'there' can be enormous when it requires abandoning what seems like common sense itself.
It is. If you're at the mountain, on the right trail, and have the right clothing and equipment for the task.
That's why those tiny steps of scientific and technological progress aren't made by just any randos - they're made by people who happen to be at the right place and time, and equipped correctly to be able to take the step.
The important corollary to this is that you can't generally predict this ahead of time. Someone like Einstein was needed to nail down relativity, but standing there a few years earlier, you couldn't have predicted that it was Einstein who would make the breakthrough, nor what it would be about. Conversely, if Einstein had lived 50 years earlier, he wouldn't have come up with relativity, because the necessary prerequisites - knowledge, people, environment - weren't there yet.
You are describing hiking in the mountains, which doesn’t generalize to mountaineering and rock-climbing when it gets difficult, and the difficulties this view is abstracting away are real.
Your second and third paragraphs are entirely consistent with the original point I was trying to make, which was not that it took Einstein specifically to come up with relativity, but that it took someone with uncommon skills, as evidenced by the fact that it blindsided even a good many of the people who were qualified to be contenders for being the one to figure it out first. It does not amount to proof, but one does not expect people who are closing in on the solution to be blindsided by it.
I am well aware of the problems with “great man” hagiography, but dismissing individual contributions, which is what the person I was replying to seemed to be doing, is a distortion in its own way.
With LLMs the synthesis cycles could happen at a much higher frequency. Decades condensed to weeks or days?
I imagine the likely brakes on that conjectured synthesis speed being experimentation and acceptance by the scientific community. AIs can come up with new ideas every day, but Nature won't publish those ideas for years.