
He just founded a new company called Keen Technologies to work on AGI. Not surprising that he wants to focus on that now. He's been part-time at Meta for years. I'm interested to find out what kind of business model he's planning for an AGI company.

Edit: he posted his leaving message publicly here: https://www.facebook.com/100006735798590/posts/pfbid0iPixEvP...

Additional public comments on Twitter here: https://twitter.com/ID_AA_Carmack/status/1603931901491908610

> As anyone who listens to my unscripted Connect talks knows, I have always been pretty frustrated with how things get done at FB/Meta. Everything necessary for spectacular success is right there, but it doesn't get put together effectively.

> I thought that the "derivative of delivered value" was positive in 2021, but that it turned negative in 2022. There are good reasons to believe that it just edged back into positive territory again, but there is a notable gap between Mark Zuckerberg and I on various strategic issues, so I knew it would be extra frustrating to keep pushing my viewpoint internally. I am all in on building AGI at Keen Technologies now.

@dang can you change the link to one of these?



It's very interesting to me that he's giving up on VR entirely. If he thought it was just an issue with Facebook, presumably he'd jump in somewhere else and get the impact he was looking to have.

That's another sign to me that VR is once again not going anywhere. Or rather, half of it isn't. Oculus-style VR is two bets in one: 3D persistent virtual worlds and stereoscopic facehugger interfaces. The former has had great success, mostly in games, but the latter has spent decades as the thing that people are supposed to want but don't actually use much when they get the chance.


It wouldn't surprise me if he has a noncompete in play that prevents him from continuing his VR work for a few years.


Oh, Meta 100% has some sort of non-compete in writing, especially after the ZeniMax lawsuit when he left id Software to work at Oculus[1].

[1] https://www.pcgamer.com/zenimax-accuses-john-carmack-of-thef...


There is absolutely no such thing as a non-compete in California.


They are actually permissible in circumstances similar to John Carmack's:

> California employers can sidestep non-competes in the following instances:

> EXCEPTION 1: If the employee sells business goodwill

> EXCEPTION 2: If the owner sells his or her business interests

> EXCEPTION 3: If the owner sells all operating and goodwill assets

> Upon the business’ dissolution, a member of the company may agree to a non-compete if operating a similar business in the geographic area. Goodwill is the company’s name and brand reputation. Employees with stock options are not considered company owners for purposes of non-competition agreements.

https://www.contractscounsel.com/b/non-compete-california


He lives in Texas still


I really doubt Carmack would decide his life based on NDAs or non-competes.


There is when equity changes hands, which will have been the case with Carmack.


No. Everyone is given equity in Silicon Valley.


See x3n0ph3n3's comment upthread.


There is for C-suite employees.


They don't exactly need a non-compete (those are hard to enforce in CA anyway), they can just buy any up-and-coming VR companies. And they do.


You don’t need a noncompete when any investor is going to be scared that a major company will sue over “pilfered secrets”.

Combine that with being burned out and I can see trying something else.


I think it's less of a judgement about VR and more of a judgement about AI. If you believe AGI is within reach (as he does and I agree), working on literally anything else seems like a waste of time. It's impossible to overstate the impact it will have.


Wouldn't be surprised if Stable Diffusion, GPT-3, and other recent releases were what pushed him over the edge and prompted him to leave. He must have felt like he was watching a lot of cool things happen without him.

If I had another 30 IQ points I'd be climbing over walls and sneaking into buildings at night to work on this stuff.


I believe he talked about starting work on AGI at the time he went part-time at Meta, long before GPT-3.

I encourage anyone to try out some AI related stuff, genius IQ not required. It's still a young field so there aren't yet huge towers of knowledge to climb before you can do anything. The core ideas are actually super simple, requiring nothing more than high school math.
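
To make that concrete, here is a minimal sketch (plain Python, toy numbers picked purely for illustration): a single "neuron" learning a line by gradient descent, using nothing beyond multiplication and subtraction.

    # Fit y = w*x + b to data generated from y = 2x + 1, by gradient descent.
    # Nothing here goes beyond high-school algebra.
    data = [(x, 2 * x + 1) for x in range(10)]   # tiny training set
    w, b = 0.0, 0.0                              # initial guess
    lr = 0.01                                    # learning rate (step size)

    for epoch in range(1000):
        for x, y_true in data:
            y_pred = w * x + b                   # forward pass: the model's guess
            error = y_pred - y_true              # how far off are we?
            w -= lr * error * x                  # gradient of squared error w.r.t. w
            b -= lr * error                      # gradient of squared error w.r.t. b

    print(w, b)  # ends up close to w = 2, b = 1

Bigger networks and fancier optimizers are, at their core, this same loop scaled up.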


It's the mainframe era for AI: ideas are simple, but access to mainframes isn't.


The hardware is fairly accessible. You can start for free with Colab, try a subscription for $9.99/mo, or use the gaming PC you might already have. The hardest thing is data, but again there are lots of free datasets available as well as pretrained models you can fine-tune on a custom smaller dataset that you make yourself.
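
As a rough sketch of what "fine-tune a pretrained model on a small dataset" can look like in practice (this assumes the Hugging Face transformers/datasets libraries and uses the public IMDB reviews as a stand-in for your own data; swap in whatever model and dataset you actually care about):

    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    # Example model; substitute your own.
    checkpoint = "distilbert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

    # A small slice of a public dataset, standing in for a custom labelled dataset.
    train = load_dataset("imdb", split="train").shuffle(seed=42).select(range(2000))
    train = train.map(
        lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
        batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=8),
        train_dataset=train)
    trainer.train()

Something along these lines runs in minutes on a free Colab GPU or a mid-range gaming card.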


Any specifics for trying out AI stuff? Take some online courses? Play around with some simple models in DL frameworks?


Andrew Ng's online courses are great to get your feet wet. You use Octave/Matlab to implement the basics of many machine learning models from scratch, and build yourself up to using Python to design several popular deep learning models, including convolutional networks and transformers. It's not required, but it's a good idea to understand at least the basics of linear algebra and calculus.


I think the course just got updated during the past year and now they use Python instead of Octave/Matlab


Interesting. To be honest I really appreciated how they started with Matlab; it gave a very math-centric focus to the fundamentals, although of course you can do all of that with Python too. And I say this as a professional developer.


FastAI gets recommended a lot, I think, if you can already code - it focuses on hacking with frameworks instead of starting with the boring linear algebra stuff.

https://course.fast.ai/
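
To give a flavour of where it starts, the opening lesson has you fine-tune a pretrained image classifier in a handful of lines. Roughly (a sketch from memory, assuming fastai v2 and its bundled Oxford-IIIT Pets sample; treat the exact names as approximate):

    from fastai.vision.all import *

    # Download the pet images that ship with the fastai datasets helper.
    path = untar_data(URLs.PETS) / "images"

    # In this dataset, cat images have filenames starting with an uppercase letter.
    def is_cat(fname):
        return fname[0].isupper()

    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=is_cat, item_tfms=Resize(224))

    # Fine-tune a pretrained ResNet rather than building anything from scratch.
    learn = vision_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)

The point being: you get a working model first, and the linear algebra comes later if you want it.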


It's good to have a $50k mini-supercomputer at home though, so you can actually try out your ideas in a reasonable time.


> That's another sign to me that VR is once again not going anywhere

That's not Carmack's opinion. To quote from his post:

"Despite all the complaints I have about our software, millions of people are still getting value out of it. We have a good product. It is successful, and successful products make the world a better place."

"the fight is still winnable! VR can bring value to most of the people in the world"

The main problem with VR, in my opinion, is a lack of software. Most of the games produced have been either toys that are little more than a tech demo, or ports of other games not designed around VR. There haven't been many serious attempts to harness VR for non-gaming purposes.

I think the future is still bright for VR. We're currently in a bit of a local hype-cycle trough, but the tech is only going to improve.


When people say one thing with their words and another with their actions, I tend to believe the actions. And "we made cool hardware that doesn't have much practical use" is not the most ringing of endorsements.

Also suspicious to me is the way that Meta still isn't releasing actual use statistics. They're happy to release DAU numbers for Facebook. Where are the equivalent numbers for Oculus? What Carmack says is consistent with my suspicion that a lot of people bought the Quest to try it but don't use it regularly. Which would explain why they keep those numbers very quiet.


The main problem with VR is that it is a gimmick that people don't want to use as their main medium of interaction whether with games or job communication.


I see you're getting downvoted, but that's my suspicion as well. Since the 1850s, stereoscopic 3D has had many waves of short-term popularity but has produced no lasting impact. From the Brewster Stereoscope to the Viewmaster to multiple tries at 3D movies and TV, to 30 years of "VR will break out once we improve the tech", each time people get very excited about the novelty and think it will change the world. And each time it doesn't.

The simple answer here is that people's brain hardware is already quite good at turning flat 2D representations into 3D mental experiences, so stereoscopy doesn't add much. Making it, as you say, a gimmick. The historically cyclical interest in the gimmick suggests that it's mainly appealing as a novelty.


>That's another sign to me that VR is once again not going anywhere

Maybe it's just a sign of how hot AI is right now.


His goal is artificial general intelligence, something that seemed to grab his attention after he was already actively working on VR. My guess is that when he started teaching himself machine learning, he realized that he needed to focus all his attention on it if he was going to take it seriously. Facebook's failings just accelerated this for him.


I think the truth is that VR is there. They've sold lots of headsets, and the people who love it really love it. Expecting it to sell at iPhone levels was never going to happen.


So by "there" you mean that the tech is good enough to provide a good experience, and so they've basically plateaued? That's my guess too. I rented a Quest for a couple of weeks and it was pretty neat. It just didn't fill any needs we actually had. I was thinking of it as a try-before-I-buy situation, but when I sent it back at the end of the two weeks, nobody cared. The kids were already back on their Switches and the Playstation for gaming.


Yeah, that's pretty much what I mean. They aren't some new thing that either has to launch into orbit or crash and burn. The likely path is in between.

VR headsets have improved immensely since I first tried one in 1991 (Dactyl Nightmare), and I suspect they will keep improving. Where we are today is not necessarily a plateau, but it might feel like one if you were expecting a growth curve that looks like a hockey stick.


Try it yourself. Strap something to your head for 8 hours or longer and don't move, because every small tremor distorts the image from the device.

VR in its current form is eye cocaine, nothing more. Why watch poppy avatars hop around a virtual reality while you are stuck doing nothing?

VR should be restricted to very few use cases, not used to - hello Zuck - rebuild reality completely and thereby track everything you do or see in order to deliver even more addictive material to your eyes.

It sounds cool, and some use cases look cool, but the very fact that hardly anyone at Meta uses their own device/creation speaks volumes. I am glad JC is drawing the consequences and abandoning this experiment.


Don't move? Have you ever played Superhot VR or Beat Saber or Pistol Whip or... etc.? You're jumping around all over the place in some VR games, and it's extremely fun and the experience can't be reproduced on a monitor.


I only use mine for fitness apps, about 1 hour a day, and frankly the weight isn't a problem for that period of time, even with intense movements.

In fact I think the other use cases are bogus; everything else is unimpressive to me except the ability to have a decent fitness experience with FitXR or Beat Saber.


Or maybe you know he is simply a lot more excited about AI than VR now and that's it. No need to stick with something that doesn't excite you any more just because you were excited about it at one point and spent several years working on it :-)


I’d wait and see how Apple’s device is received before making any long-range pronouncements.


I suspect he's going to get smacked hard by the AGI problem and ultimately concede defeat on his biggest goal while achieving some success in the current less-than-general ML stuff.


If you solve a few more problems with ChatGPT, it's going to become useful enough that people will stop caring so much about the AGI label. It's going to quack like a duck enough to change a lot of industries.


> "If you solve a few more problems"

The amount of work done by these 7 words is incredible. ChatGPT is far from changing a "lot" of industries. It's still pathetic at programming (which isn't its purpose, but is AlphaCode's purpose, and AlphaCode also sucks), and its use for copywriting is nullified since it seems Google will crack down on AI-generated copy. DALL-E is also a nice party trick but far from being particularly useful.

I'd certainly love to have a useful AI, but I think we're in an 80/20-rule situation right now, and it'll be a few years before we see anything that makes significant improvements on current solutions.


It boggles my mind that people are now dismissive towards technology that would have been literal science fiction _a year ago_ while at the same time being pessimistic about future progress.


I think you’re exaggerating quite a bit. Language models have been evolving for years.

The problem I have with GPT is that it is wonderful at confidently writing things that are completely incorrect. It works wonderfully at generating fluff.

I'm not a pessimist; I love this kind of thing. I just understand the delta between an impressive demo and a real, useful product. It's why self-driving still isn't pervasive in our lives after being right around the corner for the better part of a decade.


Maybe you think it's a revolution but older folk see that as an evolution.

It reminds me of the ALICE bot hype of my youth, which in retrospect was just an evolution of the ELIZA hype of 1966. https://en.m.wikipedia.org/wiki/ELIZA

We are actually far from sci-fi: where is my flying DeLorean and clean fusion energy for humankind? And as far as AI is concerned, where is HAL 9000?


> Maybe you think it's a revolution but older folk see that as an evolution.

I don't know what you consider "older" folk but I highly doubt you speak for every member of that group.

Also, I don't think the distinction here between "revolution" and "evolution" is so clear cut (or important.)


> ...and clean fusion energy for humankind

Well, we did just produce the world's first net-energy-gain fusion reaction this week.

https://www.independent.co.uk/tech/nuclear-fusion-power-plan...


This once again represents a sentence which omits a lot of important details.

This happened in an experimental setting, not an actual production setting. It was a net positive energy output when ONLY accounting for the energy input of the lasers themselves, not when accounting for the mechanisms which fired the lasers (which had an energy efficiency of about 1%, although this efficiency could be higher with more advanced laser generators). And it was generated in a way that in no way resembles what current attempts at a production-ready, maintainable fusion reactor (tokamaks) look like; instead, as stated before, it was essentially a design meant for experiments where fusion occurs (basically by shooting a pellet of fusion material into the central focus of a bunch of powerful lasers).

The LLNL is, and always has been, an experimental laboratory meant primarily for nuclear weapons testing and maintenance, and as such it has the ability to test nuclear fusion (via this inertial confinement setup), since fusion occurs in thermonuclear bombs, of which the US certainly has many in its stockpile.

This test, while a big "milestone", is the equivalent of building a specialized fuel-efficient vehicle that gets 500 miles to the gallon by sacrificing almost everything that makes a car a car, and then claiming that every car on the road will be getting 500 miles to the gallon any day now. When in fact, the only thing achieved was the ability to say that we've made a car get 500 miles to the gallon.


3 whole energy for the low cost of 300 energies!


Just going back and looking at GPT-2's output, it's amazing how much better this system is. It still doesn't "understand" anything, but the coherence of what it spits out has gone up drastically.

https://thegradient.pub/gpt2-and-the-nature-of-intelligence/


The world is full of compelling tech demos that fail to make much of a splash.


You’re still calling ChatGPT a tech demo?

Buddy, it’s not a demo, it’s a warning of what’s to come.

Stay behind; it makes no difference to anyone but yourself. As for me, I have integrated ChatGPT into my daily work. I have used it to write emails that negotiated a $30k deal, write stories, prototype an app, send a legal threat, brainstorm name and branding ideas, and scope a potential market, and this is just some of the stuff I've used it for in actual productive work.

I can’t begin to tell you how much I have played with it for fun and intellectual curiosity.


We are mistaking this for a splash when instead it is the ripple before the shock wave


No. "AI" isn't creating new information complexity. (In fact it's making the world simpler, by regurgitating smooth-sounding statistically average statements.)

Information complexity is the true test of intelligence, and the current crop of "AI" is actually making computing dumber, not smarter.

But yes, "dumber" is often more useful. But the industries "AI" will revolutionize are the kinds of industries where "dumber" is more profitable (e.g., copywriting spam, internet pornography, casual games, etc.) so the world will be poorer for it.


Users of ChatGPT went from 0 to 1 million in five days.

Show me the compelling tech demo that did that without being a big deal.


Dwarf Fortress shipped 160k units in 24 hours, and moreover people paid money for it. That won't make a dent in the course of history.

https://cogconnected.com/2022/12/dwarf-fortress-sells-160000...


The first iPhone forever changed how people use and perceive smartphones as well as how they are built. It only sold 6 million units over the course of 13 months, an average of 15k units / day.

I too can pull up completely irrelevant statistics.


I'm not sure I understand. TapWaterBandit asked for a fairly specific example of something and I gave one. Could you elaborate on what your disagreement is?


I misunderstood the intent of your post. But also, DF is not a tech demo... it's been around for two decades. And it's not new technology.


I want to see the actual use cases for these less-than-perfect AIs. Only recently have they become useful enough to actually assist with coding, which is indeed impressive, but what else can they really do?

They can answer questions, yes, but it's tough to tell whether they're telling the truth or making stuff up, which is kind of a problem. Code at least compiles or doesn't, so it's easy to verify.


My sweet dude, it's a prototype literally released as a CHAT bot to the public to tinker with seventeen days ago.


The forever relevant xkcd

https://xkcd.com/1831/


It's worth it to try with fresh eyes though right? Maybe you get lucky?


I also think he's going to fail massively (technical background aside, he's not an applied mathematician, which is a huge problem), BUT the AI industry is currently run with a very childish approach to software and programming; if his company can attack the infrastructure wisely, they could really improve the industry. No idea how to make money off that, though.


Have a listen to some of his interviews. He's doing it out of love for the problem more than anything else. https://www.youtube.com/watch?v=I845O57ZSy4


It's as if nobody's learned anything from the 'self-driving car' boondoggle of the last 20 years or so.

Any problem can seem easy when you don't know the things you don't know.


Honestly, this seems like a better fit for him than VR. VR may be games, but everyone says his strengths are technical, not organizational. It seems like he wants to work on hard technical problems, and that's what he's good at.


I believe we're still 15 years out from AGI, but I also think we could start creating autonomous agents right now that make people feel like it's almost AGI. I think Carmack could get something out the door that feels like that.


It's not exactly a bold prediction that someone will fail to solve a grand, millennia-old problem of philosophy and science.


I don’t think you’d need to be alone in solving the AGI problem.

Carmack is an engineer at heart. I am sure there are some yet undiscovered technical blockers in that space that he will solve.


> he's planning for an AGI company.

This is something I think will break the mystique around Carmack. Back in the 1990s and early 2000s he was considered the whiz kid when it came to rendering engines, but something as nuanced and still not fully understood as intelligence, especially AGI, is likely too far outside his expertise to develop. Why? Because the last 60 or so years of development in the field haven't yielded much in the way of results in pure research or practical application. Machine learning and other hat tricks aren't even in the realm of AI properly, but rather probability models that work well enough on a narrow set of problems to possibly commercialize them (e.g. chat bots for help desk replacements).

The thought that anyone, and I mean anyone, is going to develop artificial general intelligence before the 2050s or even later is laughable to me. We barely understand how to talk to parts of our own brains, and we're still learning from other so-called lower lifeforms with respect to how they solve problems (fun fact: it seems bumblebees like to play, or at least do things that are roughly analogous to play, all without any kind of human or hominid-like brain; fancy that). So it just seems like hubris to imagine any corporation, research department, or singular scientist cracking this nut.

I think we're more in the technological Bronze Age with respect to computing than anything else. It would be nice if we finally got started on our computing Iron Age.


He believes that AGI is an engineering problem because of the vast compute resources required for it. Since it's an engineering problem, he believes he can make an impact.


That would be utterly wrong.


GPT-3 happened and showed that by throwing more computation at the problem you get better results.


As one of my AI profs said: it's like trying to teach pigs to fly, and saying you're making progress because you're building higher towers.


Chomsky and Gary Marcus predict that the current approach will not be sufficient, and that it is, and will increasingly be, detrimental to society for that "almost right at best" reason. Chomsky only has hope in a combined approach that integrates "old AI" with the engineering/data/GPU-driven one. See also the discussion here for anyone interested: https://news.ycombinator.com/item?id=33857543


+1 to changing the post to Carmack's published post on Facebook or Twitter without the Business Insider fluff.


If you solve AGI, all business is your plan.


Keen Technologies. That’s a great name.


I think it is based on id Software's Commander Keen[0] game.

[0]: https://en.wikipedia.org/wiki/Commander_Keen


Why would he change the link? We're discussing this article.


[flagged]


Is AI really grifting? It's not like mom and pop can sink their savings into AI tech and lose it all to scams, etc. like they can with crypto. At worst some big investors sink a big seed round in and never get it back from a 'grifting' AI company. IMHO no real harm done; if you're an angel investor you're mature enough to deal with getting burned, it's just part of the risk (and no one is going to cry for someone rich enough to gamble millions and lose it all).

I suspect his AI company will be like his previous rocket company, Armadillo Aerospace, which tried to go after the X Prize for space: a pure passion project that bootstraps itself from the start and either sinks or swims. I can't see Carmack 'grifting' by courting huge seed rounds from tons of investors, expanding quickly into an enormous company to steamroll into series rounds with no solid business plan, etc.


Your theory is that it isn't a scam if the people can afford to lose the money? That is certainly not how I think of them.


A scam implies malfeasance or fraud. There can be scam companies anywhere. What I'm saying is that AI is not inherently full of scammy companies, unlike say crypto. Sure AI tech is over-hyped but it isn't designed to defraud people.


Over-hyping is defrauding people.


AI in its current form is grifting. Trying to actually productionize anything with GPT-3, for example, is a nightmare: it can actively lie to you, the embeddings are pretty sub-par, and inference is pretty expensive. But you hear nothing but praise for it here on HN, and people act like the 30-minute web app they built and charge $15/month for is going to change the industry.

But it's getting better. GPT3.5/InstructGPT and now ChatGPT are showing incredible leaps in performance. Less hallucination, more coherence, it's getting better over time.

So guess who wins and profits once the tech catches up? Is it the people like me sitting on the sidelines and poo-pooing the tech? Or is it the people who have been in the space for a while?

Just the act of being "in proximity" to a technology can be so valuable. I know first-hand, I was an Objective-C developer for pure passion, because I loved clean MacOS apps and wanted to build myself tools. Well guess what? That proximity to Objective-C, familiarity with Xcode, and knowledge of Apple API patterns paid handsomely when the iPhone came out and I became an iOS developer. The same happened for WatchOS.

You see this pattern in technology over and over again. And I have no reason to think that AI/large language models will be an exception.

TLDR: It's kind of a grift. Carmack likely won't advance the field of AI or make a major breakthrough. But I have no doubt the infrastructure and talent he surrounds himself with will be able to manifest something profitable when the time comes.


> AI in its current form is grifting.

Could you give some actual examples of projects or companies that you see as "grifting"?

Most companies are clearly communicating that they're in the R&D phase of the tech. R&D definitely isn't grifting.

The problem spaces where people find value and pay for AI isn't grifting either, like the bulk of content moderation happening now, recommendation systems, text to speech, speech to text, etc. The camera on my phone uses neural networks, with great success. I use ChatGPT daily, at this point.

What do you see as clearly being "grifting" (ok, lets try to keep Elon Musk related projects out of this)?



