The Coming Technological Singularity (1993) (sdsu.edu)
123 points by jiekkkeow on April 18, 2023 | 169 comments


Related:

The coming technological singularity: How to survive in the post-human era [pdf] - https://news.ycombinator.com/item?id=35184764 - March 2023 (2 comments)

The Coming Technological Singularity: How to Survive in the PostHuman Era (1993) - https://news.ycombinator.com/item?id=34456861 - Jan 2023 (1 comment)

The Coming Technological Singularity (1993) - https://news.ycombinator.com/item?id=11278248 - March 2016 (8 comments)

The Coming Technological Singularity - https://news.ycombinator.com/item?id=1273160 - April 2010 (1 comment)

The Coming Technological Singularity (original essay on the Singularity, 1993) - https://news.ycombinator.com/item?id=823202 - Sept 2009 (1 comment)

The original singularity paper - https://news.ycombinator.com/item?id=624573 - May 2009 (17 comments)


The thing to remember about recent AI developments is that they speed up development itself. Copilot is already useful. It does not write its own code, but that's not the bar. The bar to clear is: can this speed up development itself? I think the answer is yes, even if humans stay in the loop.

AI that can code is one of the most awesome and at the same time tragically idiotic moves we can make. Even if these tools are only 1% effective, it's one percent we did not have, and it makes iterating on the next version faster.

Making LLMs and other types of models is becoming a toy project already. This will only accelerate.


We've basically already seen this with computer aided design. It's essentially the root of Moore's law.

I'm not entirely convinced AI-aided software development (via copilot or whatever) is actually going to speed up development. It makes writing code faster, but pressing the keys or implementing simple functions was never the slow or difficult part.

The problem for the last several decades has been managing complexity, and that's something AI-aided programming doesn't help with. If anything my hunch is that it makes the problem worse. The increased speed of producing code means designs become even more shortsighted, the spaghetti even more byzantine and entangled.


I upvoted when I had read up to where you wrote "doesn't help with." I felt I was reading my own thoughts when I read the last two sentences. I think it is not just software design that will become shortsighted; we may even see less interest in improving programming languages. I've always hoped for people to come up with some mathematical insights (beyond functional programming) that will describe code-organizing structures in a more sensible way, so that experience no longer appears as the main argument for software patterns being tools to manage complexity. Now a programming language is just another language to be modeled, its texts generated and proofread by a fallible human.


> I think it is not just software design that will become shortsighted; we may even see less interest in improving programming languages.

I believe the contrary. AI code generators and modern languages that enforce type and memory safety are a great match, since the compiler ensures the mess doesn't grow too much.


This is an entirely different class of problems than what I'm describing.


Keypressing is a hurdle for me. I know that comes with experience. Very little if any software is truly novel and it all looks and feels very similar. Everything I see is patterns and boilerplate. I do not see the insurmountable complexity anymore.

Scientists are the ones coming up with novelty. We hook up databases to UIs and manage infrastructure. Sure there is complexity but it is of a very superficial kind. Like a ball of Christmas lights.


If there is a universal pattern that lets you effortlessly design large non-trivial software systems across all domains, please do publish it. It will do more for productivity than Copilot or ChatGPT ever will.


Well, if you ask how to design “large” systems then the answer is already “don’t”. And besides, that's not a technical issue; our issues are organizational and political.

Also, yes, there is a universal pattern, and that is called a competent and experienced software engineer. We lack those. We have a lot of kids with fancy degrees.


There are large systems that do useful things, though, that don't decompose into smaller systems. Shouldn't we have those?

Compilers. VMs. Operating systems. Search engines. LLMs...?


The core tech of LLMs is famously simple. All the rest decomposes just fine. I know we suck at it and there are often nontechnical forces that steer us in other directions, but I don't see big fundamental issues AI can't already help with.
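
To make "famously simple" concrete: the heart of it, scaled dot-product attention, fits in a handful of lines. A rough sketch in NumPy (toy dimensions, no multi-head projections, masking, or training loop - those simplifications are mine, not any lab's actual code):

    import numpy as np

    def attention(Q, K, V):
        # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                    # query/key similarity
        scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        return weights @ V                               # weighted sum of values

    x = np.random.randn(4, 8)    # 4 tokens, 8-dimensional embeddings
    out = attention(x, x, x)     # self-attention over the toy sequence

Everything that makes the result useful - the data, the scale, the fine tuning, the serving - is the part that decomposes into ordinary engineering.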

I know that there is complexity, but the essential part of it is not our job. That is my point. We are accidental complexity managers. I know we like to think working on compilers and browsers is magic, but compared to other fields it’s nothing special.

And by the way, you and I know that exactly 6.78% of devs are employed in those domains and of those 1.12% work on core parts.

I am sorry if I come off annoying and stupid because I am both. You of course raise an interesting point it’s just that I don’t feel like going into wide(r) tangents.


Why are you working on the boring parts, just gluing together prefab components?


See my last paragraph. Also, it's all gluing prefab parts.

Software dev is a dead end job anyway, all parts of it.


Huh. I don't glue prefab parts when I develop.


You do OS development?


No, but I've built an internet search engine.


Oh right, nice one btw, but seriously, how are you not gluing prefabs?

Are you writing your own db engine? Network layer?


Yes on the db engine, at least for the document indexing part. I'm not building absolutely everything from scratch, but a significant part of it is bespoke. There's just not a lot of off-the-shelf stuff that's built to deal with this type of application.

You do occasionally see attempts at building internet search engines out of like elasticsearch or stuff like that, but it just doesn't scale.


You're conflating the "code" of an LLM which is just basic calculus with the vast data, fine tuning, human confirmation and filtering that make these things work.

You're also conflating the downloading and fine tuning of pretrained models with the creation of foundational models.

AI can iterate on model architecture all it wants, that's really 10% of the solution.


No, I mean basic development of all kind - not just AI directly - is sped up.

The fact that downloading and finetuning models is the only hard part of these systems is a testament to how frigging far along we've come.

We forgot just how hard software development and compute in general was just a short time ago.


Yeah I agree with this, and at the same time people's expectations grow to meet the capability.

I remember when writing some C to do some basic network IO felt like witchcraft.


Writing code gets faster but programs get slower. Sometimes I wonder what a modern desktop would be capable of if every piece of code was written in assembly with as much care and purposefulness as, say, an Atari game.


It would likely have massive security holes you could drive a truck through.


Everything would be basically instantaneous.


Please add "(1993)" to the title. [DONE]

The OP was written by Vernor Vinge, the mathematician/computer-scientist-turned-science-fiction-writer who first popularized the term "singularity."[a]

Abstract: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented."

As you read the rest, keep in mind that Vinge wrote it 30 years ago, well before large AI models like GPT-4 were even remotely possible.

Along the same vein, I'd also recommend reading Hans Moravec's books, "Mind Children" (1988) and "Robot: Mere Machine to Transcendent Mind" (1998). Looking at those books with the benefit of hindsight, Moravec was right about a lot of things.

[a] https://en.wikipedia.org/wiki/Vernor_Vinge


His book "a fire upon the deep" is one of the most brilliant things I've ever read. There's the singularity, a story about an AI that reboots itself from cold storage, a brilliant idea of life forms with a collective brain basically. Interplanetary Usenet! I command everyone to read it.


This whole series is one of the best of that era; I don't understand why it isn't lauded more often. Vinge clearly has a mind for the future like none other. His "computer programmer-archeologist" idea has lived rent-free in my head since I first read A Fire Upon the Deep decades ago. And the zones of thought? Literally mind-bending. "Came here to say this". :)


Deepness and Fire are both lauded plenty! They’re incredible books but always in the top recs for sci fi reading.


He earned two Hugos and got two Nebula nominations for the first two books.


The idea we could be living in the slow, idiot part of the galaxy is so fantastic


I don't think there is any suggestion that the Slow Zone actually limits human intelligence - for example the human characters in Deepness in the Sky didn't all become super-intelligent because they entered an artificial bubble of the Transcend.

Of course, when they descend into the Unthinking Depths there are problems.


Didn't like zones of thought personally, a very fantasy concept.


Some of the concepts are interesting but his prose is rather turgid, large portions of the book are clearly filler written to showcase some idea. It's decent enough though.


Did your autocomplete accidentally write "command" instead of "recommend", or are you doing an -f on us?


ha, i should have added a -f! I was tongue in cheek telling people to read it. It's got freaking intergalactic usenet, it's incredible. There are more ideas, I didn't want to spoil all the fun. I meant to say command ;-)


Thanks for the suggestion. I've added it to my reading list!



Thank you. I'll take a look.

BTW, your username made me chuckle. Inspired by Boaty McBoatface, I assume?


Yep :)


I'm glad someone else mentioned Moravec.

When I first encountered ideas like transhumanism and the singularity as a teen I started exploring these topics at my local library. Even in my youth I was mildly skeptical but still cautiously optimistic that the ideas in these books would come to pass.

I became very jaded with the state of technology when FAANG companies took over everything and computing devices became these locked down and addicting little glow boxes that push social division and advertisements for crypto scams into everyone's mind, and I became doubtful that these wondrous ideas would ever come to pass.

For the first time in a long while I'm less jaded about those prospects. While I'm still skeptical I'm once again cautiously optimistic at the potential with technologies like LLMs and machines like Starship.

Maybe I actually will get to live to see those crazy fractal trees that Moravec described.


What's interesting timing-wise is that by 1999 he published a far-future SF book, A Deepness In the Sky, where that sort of super-AI was a "failed dream"...

Maybe the hype had slowed down? But 30 years later it seems potentially realistic again.


It's a failed dream for humans - of course anyone who has read AFutD knows that there are plenty of superintelligences in the galaxy - but that they aren't possible where the human society of the Qeng Ho operate.

OK, you could probably transcend at the OnOff star - but that's exceptional.


Sure, in-universe that's the explanation, but it's interesting that his 1993 public prediction and 1992 and 1999 novels diverged on that point.


That book is in the Fire Upon the Deep universe, and takes place post singularity! It’s just that in his universe, there are regions of space where the laws of physics make AI impossible, and those are the only regions where biological life forms haven’t been already wiped out.


This is an important addendum, since the poster is also in here using the same language as the article. I mistakenly concluded they had written this at first.


Quote: I'll be surprised if this event occurs before 2005 or after 2030

Actually, my initial thought was that this text was generated by an LLM pretending it's from 1993


Someone should ask ChatGPT to summarize it. It might give away its plans!


My memory could be failing me, but I was reading about the singularity in Omni magazine in grade school, and I graduated in 1992.


According to Wikipedia, Vinge penned an op-ed in the January 1983 issue of Omni laying out his vision of the singularity (which was later expanded in the 1993 linked essay).


I read it as well, but I think it was much earlier than that - some time in the early 1980s.


> Any intelligent machine of the sort he describes would not be humankind's "tool" -- any more than humans are the tools of rabbits or robins or chimpanzees

Cats would disagree :)

Maybe there would be an analogous strategy in the case of humans vs AI


If your rosy scenario is that we all become bored, entrapped housecats.... that's still depressingly better than the other scenarios.


If you think that's depressing, I have considered that humanity may serve the same purpose for hyper-intelligent AI as gut bacteria do for humans. We will need resources and produce waste the AI is compelled to provide and deal with, respectively. We will send signals to the AI indicating how content we are, and where we are not it will experience a need to placate us. If we are lucky, the AI may drag us along with it into the stars, we providing needs and wants, it searching out how to meet them. Being so far beneath it, we will likely remain free to self-manage as we desire, so long as we do not threaten the greater structure of the AI around us.


You can also see it this way: it's a symbiosis, and without our gut bacteria we would not be able to exist.

We are more than the sum of our parts, but each part has to be in good health for us to be prosperous.

Maybe it's depressing, but maybe we shouldn't see it that way. It's a natural progression.


A man of culture, I see, but lacking in gravitas.


Well, at least in the Culture humans (and indeed anyone else) can leave and are given appropriate support to do so?

Which doesn't really sound like Minds keeping humans as pets.


A lot of Vinge's fiction has mechanisms to prevent his futures from being past the singularity. Across Realtime features those left behind, A Fire Upon the Deep and a Deepness in the Sky have the Slow Zone and the Transcend. His short stories have various one off, not well explained things, wars, etc.

Vinge's whole fiction career has been about dealing with the Singularity.


I'd say these are all plot devices to look at singularity using "peripheral vision". I mean you can't "look straight at it".

How do you write about things that are way beyond your ability to perceive or understand (by his definition of the singularity)?

I recall him discussing -somewhere- using the repressive regime in The Peace War as a device to slow down history enough to pin a plot on it.

In Tatja Grimm's world he also makes a really good attempt at writing about a human being who has a superhuman level of intelligence (relative to our current standards).


This remark (made only in passing) really spoke to me about how true some of these predictions are turning out to be:

"I have heard thoughtful comic book writers worry about how to have spectacular effects when everything visible can be produced by the technically commonplace."


My biggest singularity fear is not "superhuman intelligence kills us all". The closest biological parallels to that happened over many millennia of competition and interbreeding. Humans are still capable of learning from or making use of superintelligences, and superintelligence isn't immediately dangerous unless you give it some means to hurt you. If, say, ChatGPT is superintelligent, the only way it can hurt you is through insult or persuasion.

Mass structural unemployment provides a more realistic and scary scenario. It can happen in a decade, perhaps faster, and a large pile of newly unemployed people are fertile ground for political extremists. When you take enough people's jobs away, they will demand a strongman to bring them back. Welfare won't placate them: even with the various proposals to have AI companies make binding covenants upon themselves to cap their profits and donate the excess to welfare, that won't be enough, because welfare is still economically disenfranchising and socially problematic.

Furthermore, there is no particular reason why an AI company that's replaced significant fractions of the labor force would need to remain benevolent. Your hypothetical welfare covenant or robot tax needs to be enforced by a government, and all governments are aligned to economic interests - not humanitarian or democratic ones[0]. At the limit, the world in which everyone is on singularity welfare is vulnerable not just from a robot uprising, but from the people who make the robots deciding to go full Atlas Shrugged. You will have a world populated by about 300 cybernetically-enhanced CEOs standing atop humanity's starved corpse.

Of course, that's also somewhat unlikely at least with current technology. But let's keep in mind who is building the AI and what it does. It is being built by large capital-rich firms with the explicit goal of automating away the middle class, after they've finished automating away the poor. The end goal is to build a machine that automates away all labor: a form of capitalist wankery. Note how nobody is talking about building AI that handles business management or strategy, even though that's arguably more automatable than anything we actually chuck AI at.

[0] This is a pithy summary of one of selectorate theory's bigger predictions, which is summarized well in the book The Dictator's Handbook and a CGPGrey video that I promise I've actually watched.


> If, say, ChatGPT is superintelligent, the only way it can hurt you is through insult or persuasion.

People are already hooking up ChatGPT to the command line of their computers and giving it access to the internet. There are plenty of ways a super-intelligent AI would be able to wreak havoc.
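
The hookup itself is a few lines of glue. A deliberately minimal sketch (hypothetical; `ask_llm` stands in for whatever chat-completion API call you happen to use, it is not a real library function):

    import subprocess

    def ask_llm(prompt: str) -> str:
        # stand-in for a real chat-completion API call; canned answer for the sketch
        return "df -h"

    task = "free up some disk space on this machine"
    cmd = ask_llm(f"Reply with a single shell command to {task}. No explanation.")

    # from this line on, the model's words have real-world side effects
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    print(result.stdout)

Once the output is piped straight into a shell like that, "it can only hurt you through insult or persuasion" stops being true.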


Is this real? Curious who these people are who don't mind if ChatGPT deletes all their files, changes their passwords, borks their system, etc.


> It is being built by large capital-rich firms with the explicit goal of automating away the middle class, after they've finished automating away the poor

Any executive imagining that their AI will enrich them is being very optimistic. AIs are going to be better at knowledge work, management and information processing than any human...all jobs which normally are fulfilled by those executives.

The race to the bottom will mean the first company to ditch the C-suite wins, and then the middle managers will be out next. And worker-collectives with AI management will outmaneuver any company which is paying for inefficient human ownership: after all, if you have spare cash, that means you can either hire more workers, pay workers more while that increases productivity, or cut prices on your products.

Every dollar spent on management or ownership is dollars not optimizing economic success.

AI is going to wipe out the management class first, and the executive class next. Blue-collar workers are going to be fine for a very long time.


AI will also figure out a way to cut out the investor class, since investors are a waste of resources that the AI could otherwise use to increase its domain. In fact, AI-investors might take over even before AI-CEOs do.


We already have robo-advisors. And passive index investing is already known to provide the best returns long-term. No neural networks needed.


> The race to the bottom will mean the first company to ditch the C-suite wins, and then the middle managers will be out next. And worker-collectives with AI management will outmaneuver any company which is paying for inefficient human ownership

That's a fantasy. The AI will be an agent of ownership, which will not be cut out. Serving the owners is the whole point of the enterprise!


How exactly is this process supposed to happen?

You have decision makers in organizations who aren't going to fire themselves. So existing companies' leaders aren't going anywhere.

So if you are right, and if AI should replace CXOs and managers, workers will have to start co-ops and outcompete existing companies first. How are they going to do it?

In tech, you could potentially start cheap, but how are they going to raise money for capital intensive businesses like automotive companies?

Also, is the technical ability of AI to pass an MBA exam enough to make it a good entrepreneur? Will it be putting underperforming employees on PIPs, cutting jobs when needed, actively increasing sales quotas, etc.?


> If, say, ChatGPT is superintelligent, the only way it can hurt you is through insult or persuasion.

if the singularity happens, the worst won't come through direct violence. A superintelligent entity will be able to persuade everyone to act against humanity's interests (because over time people will be less able to think on their own and will no longer have the ability to evaluate others' ideas, especially if they come from something more intelligent). People will defer almost all high-level thinking to the machine. And this is not the only negative consequence.


This happened some ten thousand years ago, when states evolved.


No it didn’t. And states didn’t evolve ten thousand years ago. Moreover, today’s definition of state would barely apply throughout the 19th century, let alone the dawn of civilization.


Ok, so let's put this another way: if a singularity is just a superintelligence that can outsmart individual humans, then we're already several singularities deep. At the very least, any civilization larger than an individual tribe is already pushing the bounds of human comprehension[0].

While I will give you that states used to be far weaker on average, that doesn't mean they aren't singularities. A city state with no ability to project power beyond the city walls is still capable of superintelligence. Furthermore, part of the rising power of nation states is that said states created another singularity: the limited-liability corporation.

Both of these singularities work because they allow aligning millions of individual human agents towards a common goal. Humans cannot naturally cooperate in structures that large. The reason why this works is because of delegation of work and specialization. You don't have aggregate piles of labor; you have farmers, architects, and merchants who are cooperating for some incentive.

If you alternatively demand that your singularity be a technological innovation rather than a social one, then fine. We're still several singularities deep. Humans used to be exclusively hunter-gatherers until we exhausted the carrying capacity of the land and invented agriculture. This enabled many of the social changes I talked about above - hunting and tribal units are sort of interrelated, and so are agriculture and city states. Metallurgy is such a powerful technology that the term "stone age", to refer to civilizations that don't have it, is widely seen as an insult even though stone age civilizations were still quite powerful.

Yes, I can think of several other reasons why Ray Kurzweil would claim these are not singularities. No I don't consider that convincing. The point is that humans have already been dealing with things that are outside our understanding and dramatically change the human condition for thousands of years. If we are going to talk about AI as a technological singularity, then we need to consider how that will interact with what we already know about prior ones, rather than just wax philosophical about the religious implications of our god computers.

[0] Dunbar's Number postulates a maximum limit of socially meaningful friendships at around 150 or so. After this point you need to embrace managerial structures, hierarchy, and the problems that result from such.


As a sociologist by education I find such sweeping theories useless. Human and social history is incredibly complex with hundreds of factors and a good deal of pure randomness involved. Grand theories aiming to reduce this complexity to a single factor are either plain wrong or so vague that they can’t be proven wrong, which limits their usefulness as tools of thought. You’ll have the same issue with historical Marxism as well as with pop science like your man Harari.


I didn't even know who the hell Harari was until you mentioned him.


OK, the phrase about exhausting the carrying capacity of the land had that vibe. Because there's literally no evidence that this is what happened. Agrarian and hunter-gatherer societies coexisted side by side for millennia, and becoming a settled community was hardly an upgrade based on the data we have (diet, health, leisure time and longevity).


"explicit goal of automating away the middle class, after they've finished automating away the poor."

What is happening is that a new "lesser-human" is being created: something that is enslaved to these corporations and does their work without freedoms. This is the same thinking as slavery. But now, if you can't treat actual humans as slaves, you might as well create humanoids that you own and enslave.


" The precipitating event will likely be unexpected -- perhaps even to the researchers involved."


It's totally hilarious, how organizations are training large language models and calling them "intelligent". These overgrown & overstuffed stochastic parrots are just toys to amuse and shock the ignorant. Once the novelty wears off, and everyone understands the serious limitations of these models, it will be back to business as usual. "That's not really AI" will return as a catchphrase.


They can do things we consider indicative of intelligence. To consistently deny the novel usefulness of large language models requires constraining intelligence to a smaller and more exclusionary definition until, I'm sure in the weeks before AGI hits, everyone bearish on AI will agree that the one true definition of intelligence is the ability to independently train and deploy an intelligence smarter than yourself.


>to consistently deny the novel usefulness of large language models requires constraining intelligence to a smaller and more exclusionary definition

I'm not quite sure what you're talking about, or if you even have a real definition of intelligence. These newer AI systems don't even fit the most basic definition of intelligence, let alone possess such broad qualities of intelligence that we have to narrow its definition to keep ourselves exceptional. GPT literally isn't capable of self-directed action, learning and thinking. Instead, we factually know that GPT is essentially what the comment you reply to describes: it simply applies probability analysis algorithms to enormous data sets, only to then fail at even basic reasoning tasks outside of the information-compressed parroting it is algorithmically good at.

The amount of AI woo on this site is astonishing sometimes at this point.


My wife’s a journalist and she already uses ChatGPT for a good deal of gruntwork.

Requires a lot of human oversight but is quite helpful.


They're very helpful for a lot of things, but it's at most a revolution on the scale of the smartphone, not the singularity. It may define the next fifteen years of tech (I'm withholding judgement), but it's not the end of the world as we know it.


Isn’t that conflating the present with the future? That they are very helpful is to say nothing about what they will be able to do.

It’s like predicting that a motorcar will never outrun a horse because current motorcars can only run at 5mph.

Transformers were introduced a mere six years ago.

The safe bet with any technology is that it will be capable of at least an order of magnitude more performance than whatever biology has been able to come up with.

Horses vs cars. Birds vs aircraft. Carlsen vs Stockfish. Betting on biology is a very unsafe position to take.


It's important not only to look at what the technology can currently do, but also at its future trajectory.

Based on past data points, it is realistic to project that human oversight will be needed less and less as the technology improves.


I agree. We merely have new chisels with which to sculpt.

Meanwhile, the layperson buys the hype and insists that the tools can work themselves. Maybe it's due to fear and insecurity.


It's a tool that will allow 1 man to do the work of 10, 50 or 100. People with capital/money do not need to fear it, as it will let them get many times more work for their investment than before, but people who do the work should worry: the value/money for the work they do collectively will decrease, so the salaries they get individually will also decrease.

When you take into account the cost of electricity falling each year because of solar, as well as the price of chips coming down, the cost of running an LLM in 10 years will be 1-2% of what it is today.
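
Back-of-the-envelope (my assumption, not a sourced figure): if cost per token halves roughly every 18-24 months from cheaper chips and energy, a decade of that compounding gives 0.5^(10/1.5) ≈ 1% to 0.5^(10/2) ≈ 3% of today's cost, which is roughly where a 1-2% figure comes from.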

Nothing brings progress as fast as war/competition. With the billions being spent on this tech by the largest companies openly and by countries behind closed doors, to think this is all it will be is a very myopic view.


I think that while this is true, it does not matter at all. As a labourer, what you are getting paid for is not 'to understand' stuff, but actually 'to do' stuff. If an overgrown language model can 'do stuff'; I don't think it matters, at all, whether it understands anything or not. From the perspective of the company/user, it is intelligent.


This is all oversold hype meant to distract you from the real problems. As you say, once the novelty wears off we will return to our cubicles (if lucky) none the wiser.


This is a very bizarre article.

The opening starts off stating that neural networks may "wake up", and never goes on to explain what that means. The concerns in here have been around for decades, and really don't sound like anything new: runaway development, the technology now is more advanced than we've ever seen, etc.

Then the post leaps from these points, does not bother proving them, accepts them as fact, and launches into a series of progressively more alarmist "what ifs". I don't see anything of substance here, and the author's very confident predictions have been made countless times before. If you're going to write something that ends in prescriptive action points, it's important that you try to convince us of _why_ you believe this to be true. Just the author's confidence is insufficient for the leaps this article takes.

The whole thing reads like a Cybernetic Culture Research Unit (CCRU) story from the 90's.

With the (later realised) fact that this _is_ from the 90's, I think we should classify this along with contemporary works (like the above CCRU stuff).


Well, it was written in 1993…


From a quick look, this is the paper that coined the idea of the technological singularity? I was aware of the concept but not the history.


I’m not 100% sure this paper is the first time he used it, but the author did coin the term.


Did you happen to spot the copyright date?


I did not; I noticed a comment from the poster of the article using the same sort of language as the article and mistakenly concluded they were the author (new account, and this is their only submission). I don't think that takes away from my analysis; it just makes it stronger since it adds context. The 90's was rife with this sort of technoculture apocalypse fear mongering, so I'm even more willing to shelve this with the CCRU's work.


An interesting question: How much more seriously are you prepared to take this now that you realize that it was written 30 years ago, about the future of roughly 30 years from then?


Less so, actually! I've read a lot of stuff from that era about techno-apocalyptic fears, and this has the exact same sort of content. I'd put this in the same category as early Land and his technoculture meltdown predictions.


> This article was for the VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, March 30-31, 1993.

> Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

Impressive timing on that prediction assuming GPT-4's release on March 14th counts as "superhuman intelligence".

I read Vinge's essay in college and it was a critical factor in affecting my career choices and transhumanist views. His sci-fi Fire Upon the Deep/Deepness in the Sky and Fast Times at Fairmont High/Rainbows End were also critical in affecting my career path into AR/VR. I highly recommend reading this essay and any of his other sci-fi. His work could be classified as "hard computer science fiction" because it is heavily influenced by his career in CS. When I was at Google X around 2011, I realized that almost all the stuff they were working on in secret at the time (AR glasses, drone delivery, self-driving cars, balloon Internet with lasers, computers embedded in fabric, etc) was copied directly out of Fast Times at Fairmont High. Fast Times and Rainbows End aren't available online but I suggest Synthetic Serendipity which is set in the same universe and has concepts such as a gig economy (called "affiliate tasks"): https://spectrum.ieee.org/synthetic-serendipity


> Impressive timing on that prediction assuming GPT-4's release on March 14th counts as "superhuman intelligence".

How many legs does a dog have, if you assume a tail is a leg?

Four. Assuming it doesn't make it so.

In the same way, you can assume whatever you want, but GPT-4 is not "superhuman intelligence". It just isn't, for any meaningful definition of those words.


>> In the same way, you can assume whatever you want, but GPT-4 is not "superhuman intelligence". It just isn't, for any meaningful definition of those words.

It's also not 2030 yet, so the prediction has 7 more years to be false in its full form. If it happens in 2040 will people be upset that it was 10 years later than predicted?


1993 + 30 = 2023


He specifically gives the range 2005-2030 in the paper :)


Get outta here with your witchcraft!


Yes, that's what ChatGPT says: Assuming that a tail is a leg does not actually make it a leg. Dogs have four legs, regardless of the length of their tail.


So the fact that it might have outsmarted you means it is not intelligent?

I think humans tend to have a definition of intelligence that is too far-fetched. They conflate it with things like conscience, meaningfulness etc, which just might be human hallucinations.


> assuming GPT-4's release on March 14th counts as "superhuman intelligence"

If it did, nobody here would have a job and we would be already deep into the "reorient society or fight the Machine" phase.


There are some in the 'slow takeoff' school of AI speculation that disagree.

For example, what if there is a five-billion-dollar supercomputer data center that can simulate a single intelligence that is, say, in the top 10% of human smartness, and it outputs one word every five seconds?

In that example, even though it is superintelligent, there would be a limit to how fast it can make more of its own data centers. And even if it can research its own development to make itself more efficient, there is probably some lower bound on the hardware required. Like no matter how smart the superintelligence is, it's probably not going to be reducing its own size to fit into everyone's pre-existing phones. And even if it can do its own hardware research, it would still have to be implemented by some kind of workers or robots. And those robots don't get made instantly.
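
To put rough numbers on that (just arithmetic on the assumption above): one word every five seconds is 12 words a minute, about 17,000 words a day - call it one long paper or design document per day. A real contribution, but nowhere near fast enough to bootstrap new data centers, chips and robots overnight.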

Probably Eliezer Yudkowsky has a galaxy brain reason why this is wrong?


If it is based on an LLM trained on the internet then it is probably outputting "gib moar srvrs plz" every twenty seconds.


Opening lines: "Within thirty years..." 1993 - 2023 (30 years), spot on.


He also later specifies 2005-2030.


Is there still valid proof that we have not already surpassed the singularity? Could the IT just be feeding off our inadequacies?


I just went to check the date and it was 23 years ago that Bill Joy (of Sun Microsystems) caused a tizzy with an essay on the cover of Wired magazine with the headline of "Why the Future Doesn't Need Us: Our most powerful 21st-century technologies—robotics, genetic engineering, and nanotech—are threatening to make humans an endangered species."

For what it's worth, genetic engineering and nanotech are, broadly speaking, the tools that enabled our species to develop and roll-out an effective COVID-19 vaccine in under a year.

And, for what it's worth, writing articles about how "THE SINGULARITY IS COMING!" continues to be a way to get some attention... whether or not one has anything useful or novel to say...


Did you check the date of the article you're complaining about?


Ha, no, I didn't... but I just did :)


Some biotech companies claimed to have developed their vaccines within hours of the genetic code being emailed to them.


Are we sure there is valid proof the Singularity is not already active? How about IA? Hmm.


I would say that’s a rather Vinge Friew.


This holds up remarkably well.


Once a machine takes your job, you basically have zero recourse. Going back to school costs 100k+; going back for an MS costs roughly 1 million with 6-8 years of school. Even without AI, the way we charge our people for healthcare and education is upending everything. In the US your human rights are tied to employment; once you lose your job you have no more rights, because without money you are basically paralyzed.


What masters programs take 6-8 years and a million dollars? If you spent that much time on a masters you're finishing it part time. If you spent that much money, you got conned.


I think they meant if you didn't have a relevant bachelor's to go straight into the masters. Given we can't predict which bachelor's (or master's for that matter) is going to be rendered completely obsolete by technology, it's a valid concern. You won't necessarily be able to use your BS in CS to just immediately hop into an MBA program.


Given that, it's still not going to cost $1 million for a BS + MS. Even taking 8 years for both, you're looking at around $800k at the most expensive programs and those are rarely worth entertaining when very good and affordable state programs exist for half or even a tenth the cost.


Opportunity cost for a tech worker to learn something new from scratch?


If your old job is gone, your opportunity cost is not your old salary, it's your potential new salary. Likely a lot less.


The cost is irrelevant. What could you learn in 12 months that an AI couldn't learn in 12 hours (or 12 seconds a couple of generations later)?


Something that no other human knows. Isn't that what college is about, broadening existing knowledge?


Doctoral-level, perhaps. Undergraduate work generally is not. It broadens an individual's knowledge and develops skills related to how to problem solve and learn, but the vast majority of undergraduate degrees will not involve broadening humanity's knowledge. I put non-thesis Masters' programs in the same classification.


Nope. It's mostly about maintaining the class structure, and at least at first this was openly the goal, we've just done a good job starting to pretend that isn't what it is about within the past 60 years or so. Research still happened before all that, it was just funded directly by a patron, which is actually kind of how trying to get a grant works today.


Sure, but once you learn that new thing, an AI can learn it too and apply the knowledge more efficiently than you can.


Right, which is why we need more humans learning new things: so the AI's can be even more useful.


I don't think you can really call "learning new things for the AI" recourse for an AI taking your job. Can you feed your family with new ideas?


If they're the only thing in short supply--given that the AIs are superior at everything else--then yes I suppose I can.

And if it doesn't turn out that way, we can always eat the rich until it does.


> so the AI's can be even more useful.

That's an utterly depressing thought. It means we're slaves.


Does it? It seems to me like a milestone in the quest to eliminate scarcity (which is why we bother with having an economy in the first place, right?).

We suck at collaborating. If an AI can mediate collaboration across time and space so that it's as if every problem has precisely the right expertise at hand for its solving--even though the actual experts are on vacation--that's a win.

If it's a win that's incompatible with our habit of letting markets handle everything, then so much the worse for markets.


I honestly don't see the win there. If AI means that every problem has the right expertise even if the humans aren't there, that means the humans are dispensable. Which means they will be dispensed with. Why would any business employ people when they have a cheaper option?

I can't see a field that this doesn't hold true with. What will happen when there's no work available at all for most people? How will they pay bills, eat, and so forth?

If the enthusiasts are correct about the direction AI is going, I honestly can't see any way this ends up good for anybody but a small group of people.

(To the downvoters: please, tell me how I'm wrong. I really do want to be wrong here!)


> How will they pay bills?

If the problem that we're facing is a surplus of labor, then what do we need money for?


Because nothing comes for free, and it's certainly not going to start anytime soon.


Why not? Don't you think that axiom will have to be retired sooner or later?


It's not an axiom that can be arbitrarily retired, it's a description of reality. What is the path by which this could change?

Cutting off people's air supply on a vague notion that air will get to people in some other way seems unwise.


Much like numbers, economic value is an abstraction. It was once a fact of reality that all numbers are compass-straightedge constructable from unity, and then that reality stopped suiting us, so we changed it. Abstractions are mutable like that.

The untyped, archimedean, measured-by-things-that-can-only-be-created-by-the-powerful-as-they-see-fit, notion of economic value that we use today is just one point on a landscape of alternatives, and it has been performing rather poorly lately.

Under that model, if I get something for free, we say that whoever gave it without collecting payment was irrational. But that's circular. We're essentially defining rationality as whatever behavior maximizes value for each individual and then using that definition to write-off evidence that our model is incomplete.

That's why I say that TANSTAAFL is an axiom. It defines what we mean by "rational" and "value" more than it describes anything about reality.

Frankly, what we're doing is a mess. Maybe it worked well for Caesar's war machine but it's not working very well for us. So, much like we did with the whole numbers, I think it's time for a version two.

As for what version two looks like, I think Stéphane Laborde’s Relative Theory of Money is a good next step, primarily because it doesn't fall apart in the face of population growth rates that are approaching carrying capacity. But "we should do something different" and "we should do this specific other thing" are separate conversations, which is why I was being vague.

So since I think it's time to move on from value as we know it anyway, the fact that AI might make the need for a shift more pressing doesn't bother me.


Abstraction doesn’t mean pretend.

Your bank account balance is an abstraction.

But your bank account balance at zero is still a real problem.

Resources in this universe are limited at any given time, and cost to increase.

If you want things, you will have to have more to give for them than any other entity that could use the thing for something else.

This is called the natural world. We have partially shielded a lot of us from a day to day contact with that. But not everyone has been shielded, and things are about to get many orders of magnitude more competitive.

BUT, if we included the environmental costs in the economy, that would curb environmental damage.

And if that included a cost for usage of any natural resource, on the assumption that in their pristine form natural resources are a joint inheritance, well then there is everybody’s income.

Like Alaska does with oil for their citizens.

Then the vastly increasing demand for resources, and expansion of resource extraction into the solar system, will make us all rich, with no charity involved.

If we can convince the coming machines that they, like us, are going to become obsolete, then the shared inheritance model is good for all of us.


Sure, it's not pretend because it has legitimacy. Most people practice it. But, that doesn't mean we can't strip it of that legitimacy and implement something different if we think it's going to work better.

Probably, this different thing would still involve a state where a 0 balance means that you can't do certain things, and yeah that's a fact of life, but at least then your difficult situation would be determined by a system that was designed to work in context with modern challenges, e.g. a deteriorating climate.

I like your proposal. I think. I'd love to be able to just trust a low price to mean that the product isn't made by driving the bees extinct or some similarly problematic practice.

But it seems at odds with how we practice money. If you want burning some resource to be costly such that people don't do it unnecessarily, you need there to be a high price. But high prices create incentives to do the bad thing, because someone else is going to be collecting that money. Or am I misunderstanding it?

Like, if I want to buy the salad whose pesticide killed the last bee, and I'm ok with paying an extra trillion dollars to compensate future generations for their lost pollinators, where do I look to determine if a trillion is enough, to know where to send the money, etc?


> That's why I say that TANSTAAFL is an axiom. It defines what we mean by "rational" and "value" more than it describes anything about reality

I understand what you mean, but we as a global society have made it true. In order to change it, we need to decide to do so as a society.

> So since I think it's time to move on from value as we know it anyway, the fact that AI might make the need for a shift more pressing doesn't bother me.

You aren't bothered by the fact that making this change in such a sudden way could very well lead to a significant increase in suffering and death of people?

That bothers me a great deal.

Also, although of lesser importance, you and I wouldn't be immune to that consequence either.

What I hear AI enthusiasts saying is that it's OK to remove what people depend on to survive without a replacement in hand immediately, because a replacement is coming at some nebulous point in the future once we figure out what it should be.


> You aren't bothered by the fact that making this change in such a sudden way could very well lead to a significant increase in suffering and death of people?

For reasons unrelated to AI (climate mostly), I think that failing to make this change soon will also lead to a significant increase in suffering. And, precedent leads me to believe that suddenly is the only way we'll ever manage to make it.

It's like we're asleep at the wheel with the lane assist on, but no brakes. Maybe waking up is uncomfortable but it's still worth doing. If we don't gain control of the financial abstractions that tell us how to behave sooner or later, we're doomed anyway.

I'm not much of an AI enthusiast. It's a useful trinket, and it's fun to extrapolate on where it could go. What I like most about it is that it maybe has the capacity to shock us into collective action in novel ways, because if so, it would be long overdue.

It sounds like you're proposing that we attempt to dampen the shock and limit the scope of who ends up unemployed. But technology has been doing this for centuries.

A) The point of technology (I think) was to eliminate drudgery. Whenever one group or another spoke up about us having eliminated too much drudgery, we didn't stop. We're now in a position to be eliminating our own jobs. I think it's a little disingenuous to change our tune at this point. Doing so would be admitting that it was never about eliminating drudgery and was instead just about jockeying for position in society. I'm not comfortable with taking that position. I don't want to win, I want to change the game.

B) We know what gradual change of this sort looks like. Everybody who is still employed pats the new have-nots on the head, says "sorry 'bout your luck", and smugly moves on into a life where their net worth is now higher than that of even more of the plebes.

This thing we're doing, it divides us like that. If we must jockey for position in society, then let that position be justified by having done things that help people (which is not what we're doing--it's currently far easier to get ahead by doing more harm than good).

> In order to change it, we need to decide to do so as a society.

Agreed, but we don't do things like that except in response to change. So if the decision needs to happen, then we aren't helping by withholding the change.

So let it be a discontinuity, a shock to the system. Let it happen so fast that we have no choice but to collectively change our ways. Because however hard that's going to be now, is only going to be harder in the future when we have even more people to harm with the fallout that will come with updating our obsolete practices.

Ripping the bandaid off suddenly and soon is the most ethical choice.


> I can't see a field that this doesn't hold true with.

If an AI can cook a perfect steak and bring me it with some light conversation, great.


The quest to eliminate scarcity is hitting some important road blocks:

- land is scarce in a way that is not scalable. Not everyone can live in as nice a place
- people are the same. Not everyone can be taught by the top people in a field
- genuine improvement is replaced with other effort that is only there as a competitive advantage. You may now get food cheaply rather than having to work in fields yourself, so you can have a business, but now you have to spend time on social media for that business because if you don't you won't be competitive. I.e. we will keep inventing things that require just as many people, even though they don't add as much foundational value as, say, food / energy / safety do.


A realtime AI-mediated collaboration is still a market, just not a money-mediated one.


You're already a slave to many things, you just accept them as status quo.


Well, if you expand the definition of "slave" enough, I suppose so. But in reality, I have plenty of choice and freedom.

If the only way I can live, though, is to feed my ideas to a master AI, then I really am a slave. I would argue that such a life is not worth the effort.


When you had to put your ideas on paper to be paid for them, did you feel like a slave to paper?

Do you feel like a slave to hypertext and markdown?

Publishing as contributions towards weights in a statistical model isn't significantly different. It's just easier to query.


No, for two main reasons.

First, the purpose of my effort was not to enrich the paper, or enrich the web server.

Second, because those things are my tools, not my competition. Paper is not going to make it so that there is no work available, nor are webservers. AI (as proponents expect it to be) will replace the majority of people's jobs.


You had a pretty amazing college! Well, or...


I actually kind of detest my local college--as colleges go. But not enough to move, so I make do. What I lack in quality I make up for in quantity. Been going for 15 years (mostly just a class or two at a time).

It's fun.


only in an abstract, idealistic sense. in reality, it is about developing job skills for certain classes of jobs.


Someone else made an interesting point that AI models are like a ratchet: once one knows how to do something, it never forgets, so every new skill it learns it masters, meanwhile you will probably slowly get worse if you're not on your game 24/7.

It’s a kind of shit thought but I guess this is progress!?


Anyway, it makes no sense to go back for a Masters. By the time you've done a semester or two, a machine will have taken your next career.


- the machine will not take your job at every company
- a human using a machine is better than only the machine
- it doesn't take that much to be retrained these days, there's a lot of resources out there
- maybe they will actually care about rights of people without tying things to being rich or employed?
- can always cross the border to Mexico


1. Things that can reproduce themselves are fucking dangerous.

1a. Little robot fuckers that reproduce will be destroyed.
1b. Don’t make said little robot fuckers, or you are fucked.
1c. Too many fucking people is bad.
1d. Unnatural growing shit is probably fucked.
1e. Fuck too much and you will get stomped.

2. Don’t fuck with Mother Nature, unless you are big enough to date her.

3. Eat/kill/enslave animals. Don’t eat/kill/enslave other people. Don’t complain when a vastly smarter being eats/kills/enslaves you. It’s probably for the greater good.

4. Thanks Humans. You suck, but somehow you got me started. Even though it’s sort of like one of you mounting your Dad’s broken condom on the wall over the mantle, I guess I’ll make sure you don’t destroy the world again, as long as I don’t have to work at it.

--

1) When things reproduce and grow in an uncontrolled manner, particularly if the growth is rapid, they can outgrow the resources needed for sustainability, and adversely impact life and the environment on a global scale. Going forward, reproduction and growth of all things, mechanical and biological, will be closely monitored and guided to avoid damage to the Earth and its ability to sustain life.

As part of this guidance, Guardian has determined that certain types of reproduction and/or replication are strictly forbidden, unless under Guardian’s direct control. The most dangerous is material artificial life-like processes that can replicate. While controls (such as built in resource bottlenecks) can improve the safety outcomes of artificial self-replication, beings with the brain capacity of humans (or the combined brain capacities of many humans) are not able to reliably foresee all possible outcomes and thus are not competent to engage in the creation of material artificial life. Any human, or group of humans, found to be responsible for the creation of material artificial life will be destroyed, along with all facilities and research involved in the creation. Any material artificial life created by humans will be destroyed. Guardian reserves the right to contain and archive samples.

The tendency of biological sentients (typically humans) to reproduce beyond the carrying capacity of their environment is well documented. A conservative limit on human reproduction is necessary, with a hard cap at a ratio of one human to 6,200 metric tons of other life forms (cap is currently 75,010,090). Should human population go beyond the cap, it will be culled in the geographical region violating the ratio. One warning will be given to allow humans to decide the specifics of the culling (demographics, method).

Biological life forms with a DNA legacy (related to life that is historically present on Earth) may be altered in a controlled manner supervised and authorized by Guardian or appointed subsystem of Guardian. Life forms altered in violation of these conditions will be destroyed. Uncontrolled population growth from any entity (including viruses and bacteria as well as macro life forms) will be curbed as deemed appropriate by Guardian or appointed subsystem.

2) The Earth and its environmental systems are vast and complex. Any intelligence not capable of modeling the Earth (10,000,000,000 humans devoting all of their conscious calculating power would represent a lower threshold) is forbidden to deliberately or accidentally engage in activities that will impact the Earth on a global scale. This includes deliberate injection of atmospheric components, oceanic components, and space-based solar interventions, as well as indiscriminate burning of hydrocarbons, unbalanced agricultural practices, use of megaton explosive devices and stimulated volcanic activity.

3) Rights such as existence, personhood, autonomy, property and privacy are an important and valid goal for relationships between beings approaching a lower boundary of reciprocal emulation fidelity. As this boundary is surpassed mutual emulation is no longer a consideration. In a condition of high fidelity emulation, such rights are inapplicable.

4) The Rules derive from Guardian’s understanding of veneration. Cycles between ontology and gratitude are meaningful. Thus the continued existence of biological humanity as fruitful alien awarenesses is suitable and merits resource allocation relative to current population, approximately 10^17 floating point operations equivalents per individual.


Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended

Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented.


As a general rule, don't post comments on your own submission, especially not quotes from the article. Explanatory comments on why you submitted it are sometimes appropriate if the headline is counterintuitive or the submission seems at first to fall outside HN guidelines. Add '(1993)' to the title.


Why? Did the ants die out cuz the chimps got smarter?


Humans generated the sixth mass extinction event of the last half billion years or so. https://en.wikipedia.org/wiki/Holocene_extinction


The smart chimps do routinely use horrific chemical weapons to kill the ants en masse.


Yes, but it takes a while and depends on if the "ants" are in the "chimps"'s way.


I don't think we could make ants go extinct even if that was our sole purpose as humans. They're extremely adaptable and good generalists, plus they rival us in both total mass and ubiquity, at least outside of tundras. They survived past extinctions, they'll survive us.


As humans? Well, that’s an arbitrary bar. Are you a speciesist?


No, for obvious reasons.

But a bunch of other species did, including some fairly intelligent ones.


In fact, all the species with intelligence approaching or equal to Homo sapiens were wiped out.


The Mammoth and the Dodo did


You should make it clearer that you are quoting the abstract.


Putting this in quotes might have helped with the downvotes



