Hacker News
The Inside Story of Microsoft's Partnership with OpenAI (newyorker.com)
240 points by jyunwai on Dec 1, 2023 | 103 comments



> Altman began approaching other board members, individually, about replacing [Toner]. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought,” the person familiar with the board’s discussions told me. “Things like that had been happening for years.”

> ... [Altman's] tactical skills were so feared that, when four members of the board—Toner, D’Angelo, Sutskever, and Tasha McCauley—began discussing his removal, they were determined to guarantee that he would be caught by surprise. “It was clear that, as soon as Sam knew, he’d do anything he could to undermine the board,” the person familiar with those discussions said.

> The person familiar with [Altman's] perspective said that he and the board had engaged in “very normal and healthy boardroom debate,” but that some board members were unversed in business norms and daunted by their responsibilities.

> [Re: the board's silence after firing Altman] Two people familiar with the board’s thinking say that the members felt bound to silence by confidentiality constraints.

This seems to be the most concrete information about what he did since it happened. Sounds like he lied for years and justified it as "business norms."


I haven't heard anything yet that presents Helen Toner as someone who was useful on the board. She'd write dramatic papers about the dangers of AI that put the very company whose board she sat on in a bad light, and was unreceptive to feedback from Altman about the optics of the entire situation.

Maybe she thought she was doing the right thing; I'm sure all those people think they're doing their earnest version of "the right thing". Yet the company she praised, Anthropic, is clearly dysfunctional, with users complaining all over the Internet that Claude refuses to perform basic tasks out of misguided "safety concerns" ("I can't help you kill Linux processes, as this is unethical"), making their subscriptions useless. That's the Helen Toner model.

See, people who deal with security often like to put the spotlight on themselves by proclaiming grave danger is imminent and then position themselves as the experts who can save us from this danger. This is such a common pattern and sometimes it's not even consciously done, but extremely toxic regardless. Everyone wants to be important and regarded, so they lean on behaviors that achieve that outcome.

Sam Altman is your typical startup-entrepreneur sleazeball. And yes, here he was trying to manufacture consent within a largely complacent board through "divide and conquer". The details of his behavior can be nauseating, but still... I can't deny that, on the whole, he made countless correct leadership decisions at OpenAI, and getting Helen Toner off the board... for all we know so far... feels like one of those correct leadership decisions. So I can't feel too bad about it.

The only thing I feel bad about is Ilya Sutskever. I hope he and Altman can find a productive and dignified way to continue their partnership.


Your comment prompted me to actually read Toner’s paper, and you’re seriously mischaracterizing it. And, not that it matters so much, but she’s third author on it.


Link to the paper, please? I can't find it based on the info from the article.



Try looking at it from the perspective of reality TV, where attention is everything and there is no actual product.

An inflammatory post about the dangers of AI increases awareness and emotional engagement. Especially paired with a calm and steady press release.

It's the same pattern as an abusive relationship - cause the pain, then alleviate it.

I'm not saying it's explicit manipulation, just an unconscious tactic that opens up when people really care about the mission.


If there's no singularity after the commercial break, I'd be super annoyed ;-)


Sam Altman sounds like a prick, scheming to replace a board member who was dutifully performing her oversight role.


The article implies that Mira Murati and the rest of the C-suite were the ones responsible for getting people to sign the letter threatening to quit unless the board resigned. And there seems to have been some direct involvement from Microsoft (even before they announced they would hire employees who quit).

This sounds a lot different from the narrative that it was a grassroots show of support for Sam Altman.

If the CEO, the rest of the execs, and the investors in the for-profit were pushing employees to sign, doing so wasn’t a sign of loyalty to Altman at all.


It was strange seeing how many people with genuine intentions of advancing AI for good were blindly threatening to join MSFT, a company with a far less genuine reputation. I think OpenAI's board is even more controversial now than ever.


To be fair, Microsoft does seem to have repaired a lot of its bad reputation in the last decade or so. Has it not? At least that was my impression.

The word Microsoft used to be borderline toxic.


I found this article, 'VSCode is Designed to Fracture'[0], insightful for its look at the current Microsoft.

Its thesis is that VSCode is meant to leverage open source and popularity to increase closed source vendor lock-in to Microsoft. It cites recent examples of VSCode extensions being closed source, so that while VSCode seems open, the critical portions are not. It's worth reading the full article.

0: https://ghuntley.com/fracture/


Thanks for sharing!


My impression is it was more a case of "if it ain't broke, don't fix it" than love for Sam. OpenAI was maybe the world's hottest tech company, with many employees about to get rich through share sales, and there were proposals to largely destroy that. I can understand how employees would be against it.


How many of those employees who signed the petition had a tenure longer than a year? The for-profit stacked the ranks with would-be Microsofters and then overran the non-profit board with threats of mutiny and made-up controversy. The non-profit was taken over by pirates.


Now that this and other articles have established that the board's actions were based not on a single event but rather on general, long-term, underspecified concerns, no one captures the essence of what happened more perfectly than Matt Levine:

"Well, sure, but [the board not trusting Altman] is a fight about AI safety. It’s just a metaphorical fight about AI safety. I am sorry, I have made this joke before, but events keep sharpening it. The OpenAI board looked at Sam Altman and thought 'this guy is smarter than us, he can outmaneuver us in a pinch, and it makes us nervous. He’s done nothing wrong so far, but we can’t be sure what he’ll do next as his capabilities expand. We do not fully trust him, we cannot fully control him, and we do not have a model of how his mind works that we fully understand. Therefore we have to shut him down before he grows too powerful.'

I’m sorry! That is exactly the AI misalignment worry! If you spend your time managing AIs that are growing exponentially smarter, you might worry about losing control of them, and if you spend your time managing Sam Altman you might worry about losing control of him, and if you spend your time managing both of them you might get confused about which is which. Maybe Sam Altman will turn the old board members into paper clips."

(Link: https://www.bloomberg.com/opinion/articles/2023-11-29/the-ro...)


That's pure gold, and I'm even more impressed at Levine finding time and will to understand enough about AI x-risk arguments to not just speak the language, but make a joke in it.


Makes me wonder, has anyone actually physically seen Sam A. lately? :-)


Is Altman using OpenAI to guide his decisions? Does he ask his "oracle" how to handle these individuals on the board, based on what it knows about them and how they have reacted in the past? From many posts on here he sounds like a master manipulator, and therefore he should not be allowed control of anything that affects anyone else's life, property, health, etc., since he is obviously only in it for himself.

I don't know the guy, but frankly, from reading a lot of stuff posted on here by people who have experience with him, I am not inclined to trust him at all with anything and would watch him like a hawk. Trust is critical, and he does not inspire trust.


"...he does not inspire trust."

This is the part that his sycophants shout over so nobody can hear it. The Board™ didn't trust Altman. He isn't a martyr, just another startup success rotting on the vine.


I agree with that assessment. He seems to be another one of the things in SV that is rotten and that threatens to spoil the whole ecosystem. Too bad he has so much suction that he can always find someone willing to rally to his defense.


There's also this gem around states/corporations at the end of another recent New Yorker article: https://www.newyorker.com/books/under-review/maybe-we-alread...

"For Runciman, we’ll be lucky if we can coax into alignment the machines we’ve made of ourselves, never mind the ones made of silicon."


That's hilarious, though it also demonstrates the AI Safety problem in another way - If true, Sam made a stupid mistake in assuming he could lie/fudge/exaggerate each board member's position to the others and that they wouldn't eventually talk amongst themselves and compare notes. That suggests he's either really not that smart, or he's overconfident and sloppy.

An AGI, or especially an ASI, would presumably avoid such easily detectable mistakes. Things would just start happening, and seemingly disconnected events would steer reality in a particular direction, with no way to concretely link them or discern their true provenance.

This is already exactly the problem with the most sophisticated disinformation campaigns run by the FSB/IRA/etc. They appear random, grassroots, and uncoordinated, and their true provenance is difficult or impossible to discern. But they all operate across a shared set of objectives: increase social divisions, decrease trust in democracy and democratic institutions, and/or increase general irrationality and emotional response among the public.

If humans can already accomplish that, imagine what an ASI could do.


> That suggests he's either really not that smart, or he's overconfident and sloppy.

I think he's very obviously overconfident and sloppy.


It’s only overconfidence if he gets caught.

This is the essence of playing poker: the bluff has to be called for it to be a bad bluff.


Well, if the board shot Altman or forcibly put him into suspended animation, I think it's fairly likely that he wouldn't be CEO now.


Ultimately, now we can get back to using a useful tool instead of having people assign numerical values to unlikely events and use metaphorical references to the fact that they can't Align the Saltman.


Reading this article, I kept thinking about James Mickens's 2018 USENIX Security talk: https://www.youtube.com/watch?v=ajGX7odA87k. He warns quite effectively against widely deploying technology that we don't understand. Kevin Scott, on the other hand, seems to have exactly the opposite take. I suppose there is a reason, though, why one of them is a successful businessman and the other is an academic.


I don't think ML is substantially different from other technologies we have deployed without completely understanding them: early atomic weapons, early molecular biology, cars, and more. In each case, by deploying a bit with caution, we learned important lessons that allowed us to continue, rather than sticking our heads in the sand and remaining ignorant. In each case there were people calling for the deployments to stop. Of all of those, I think only atomic weapons ever represented an existential risk to humanity, and we figured that out within a decade or so and set up political structures to prevent it (although, ideally, we'd have worldwide disarmament).

Unless somebody can make an extremely convincing argument that by deploying this tech we are taking an existential risk, rather than purely banal ones, it doesn't seem reasonable to prevent deployment.


What? All of those inventions you listed were mechanistically understood quite well. Atomic weapons wouldn’t have been possible without a completely sound understanding of their mechanics.

A better example is perhaps pharmaceuticals, for many of which we do not actually understand the mechanism of action. But note that those go through extremely rigorous testing for exactly that reason.


The inventions were only partly understood mechanistically, just like ML. What we didn't understand was the consequences of deployment (such as massive radiation leaks due to testing, unpredictable yields, the risks of global thermonuclear war, and the ongoing cost of maintaining a fleet of nuclear weapons).

Similar issues with molecular biology: we knew how to clone a gene from one organism into another, but that doesn't mean we really knew everything going on (side effects) or what the large-scale implications were (hence the Asilomar agreement).

Even cars: while engines were understood mechanistically, it took quite some time for people to appreciate why automobile safety glass was necessary.

See the lessons learned while testing nuclear weapons: we tested them because we didn't understand them mechanistically, at least not fully enough to predict many effects.


> The inventions were partly understood mechanistically- just like ML

We have effectively no mechanistic understanding of frontier ML systems, in the sense that we have no idea how they do what they do and could not e.g. write human-readable code to perform comparable tasks, nor can we predict ahead of time what capabilities such systems will have when they're trained (being able to predict e.g. log-loss is _not_ the same as being able to predict specific capabilities).


Maybe GMO crops are a better analogy. We understand them well enough, but if we mess up in making them unable to reproduce, they could easily proliferate beyond our control.


I don’t have any reason to trust Sam Altman much. To me, he seems polished enough to actually be slimy. He’s well liked by others but wealthy tech is a particularly insular group.

But my god, in what world should the future of “AI safety” be in the hands of a small board of directors that works in a completely opaque manner and with uncertain motivations or capabilities?

Surely humanity’s fate is not meant to be guarded by the Quora dude.


Four is better than one, though.


I dunno... seems like it's an AvP situation: whoever wins, we lose.


    Some members of the OpenAI board had found Altman an unnervingly slippery operator. For example, earlier this fall he’d confronted one member, Helen Toner, a director at the Center for Security and Emerging Technology, at Georgetown University, for co-writing a paper that seemingly criticized OpenAI for “stoking the flames of AI hype.” Toner had defended herself (though she later apologized to the board for not anticipating how the paper might be perceived). Altman began approaching other board members, individually, about replacing her. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought,” the person familiar with the board’s discussions told me. “Things like that had been happening for years.”


Sounds like a manipulative individual.


Sounds like a CEO of a hyped tech company.


Yes. And that type of person is not usually the trustworthy or selfless person we as a species critically need in a role like OpenAI's leader, which has final say over the ethical value system of the artificial superintelligence OpenAI is building toward.

This concern (that a profit-maximizing CEO like Sam is not the type of person who should decide the ethical fate of humanity) was and is the whole point of having a non-profit as the owner of the for-profit tech company, so that ethically-driven people who value human well-being over growth can be in charge of the non-profit board and exercise final say over the AI's ethical constraints.

The fact that Microsoft now has a board observer seat seems highly corrupting, and contrary to the board's fiduciary purpose. This is discussed here: https://news.ycombinator.com/item?id=38471990 Basically, Microsoft now has an enormous, and indecent, insider-information advantage, allowing it to front-run the software-development market; that would be illegal, but it will be impossible to prove that the source of its ideas is the "forbidden fruit" of what it hears in board meetings. Details may be found in that link.


Superintelligence is a hypothetical. They can say whatever they want and make claims as to future performance. It doesn’t make it so.


This was a very long piece that really buried the lede. It seems like most of the people commenting on this here never got to this passage.

While I'm at it: most of the references to the paper co-authored by Toner miss the two pages of praise for OpenAI that precede the single paragraph with the somewhat negative comparison to Anthropic.


So, the guy was gaslighting the board.


Eh, gaslighting specifically refers to trying to make someone question their own perception and judgement of reality. Sounds more like he was presenting different versions of reality to different people, which is manipulative and dishonest but not gaslighting.


Further confirmation of what I said 9 days ago (https://news.ycombinator.com/item?id=38373572) based on the NYT (https://www.nytimes.com/2023/11/21/technology/openai-altman-...) followed by the WSJ (https://www.wsj.com/tech/ai/altman-firing-openai-520a3a8c): the firing was over Sam Altman's attempt to take over the board by firing one of the holdouts (Helen Toner), which would have secured him a majority indefinitely (especially after packing the board further by filling vacancies), thereby neutering all remaining governance controls over Altman. In particular, in this piece, Altman acknowledges on the record that he was trying to fire Toner and went about it in a 'ham-fisted way'.

Excerpts:

    "...Some members of the OpenAI board had found Altman an unnervingly slippery operator. For example, earlier this fall he’d confronted one member, Helen Toner, a director at the Center for Security and Emerging Technology, at Georgetown University, for co-writing a paper that seemingly criticized OpenAI for “stoking the flames of AI hype.” Toner had defended herself (though she later apologized to the board for not anticipating how the paper might be perceived). Altman began approaching other board members, individually, about replacing her. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought”, the person familiar with the board’s discussions told me. “Things like that had been happening for years.” (A person familiar with Altman’s perspective said that he acknowledges having been “ham-fisted in the way he tried to get a board member removed”, but that he hadn’t attempted to manipulate the board.)

    ...His tactical skills were so feared that, when 4 members of the board---Toner, D’Angelo, Sutskever, and Tasha McCauley---began discussing his removal, they were determined to guarantee that he would be caught by surprise. “It was clear that, as soon as Sam knew, he’d do anything he could to undermine the board”, the person familiar with those discussions said...Two people familiar with the board’s thinking say that the members felt bound to silence by confidentiality constraints...But whenever anyone asked for examples of Altman not being “consistently candid in his communications”, as the board had initially complained, its members kept mum, refusing even to cite Altman’s campaign against Toner.

    ...The dismissed board members, meanwhile, insist that their actions were wise. “There will be a full and independent investigation, and rather than putting a bunch of Sam’s cronies on the board we ended up with new people who can stand up to him”, the person familiar with the board’s discussions told me. “Sam is very powerful, he’s persuasive, he’s good at getting his way, and now he’s on notice that people are watching.” Toner told me, “The board’s focus throughout was to fulfill our obligation to OpenAI’s mission.” (Altman has told others that he welcomes the investigation---in part to help him understand why this drama occurred, and what he could have done differently to prevent it.)

    Some A.I. watchdogs aren’t particularly comfortable with the outcome. Margaret Mitchell, the chief ethics scientist at Hugging Face, an open-source A.I. platform, told me, “The board was literally doing its job when it fired Sam. His return will have a chilling effect. We’re going to see a lot less of people speaking out within their companies, because they’ll think they’ll get fired---and the people at the top will be even more unaccountable.”

    Altman, for his part, is ready to discuss other things. “I think we just move on to good governance and good board members and we’ll do this independent review, which I’m super excited about”, he told me. “I just want everybody to move on here and be happy. And we’ll get back to work on the mission”."


> Helen Toner, a director at the Center for Security and Emerging Technology, at Georgetown University, for co-writing a paper that seemingly criticized OpenAI for “stoking the flames of AI hype.” Toner had defended herself (though she later apologized to the board for not anticipating how the paper might be perceived)

Wait, what? Apologize for doing your job? Did the board fail to read Helen's current job description? Or is Helen's position on the board of OpenAI meant to prevent criticism of OpenAI?


I mean, wouldn't you fire her? Imagine someone on Ford's board wrote an article criticizing Ford's approach to cars and saying that Mazda was doing a much better job.


First, board members criticizing a company is very common; indeed, it's an entire career for some people (like Carl Icahn) and a useful role, especially in this case. If you think it's so outrageous, as so many OAers apparently do, better fetch the smelling salts the next time you crack open a WSJ...

Second, the 'criticism' here is laughably weak and is barely even a criticism at all (I actually read it as a criticism of everyone else but OA); I strongly encourage anyone who thinks that this might justify firing someone (especially without, apparently, a replacement who would be acceptable to the safety faction) to actually read the supposedly unforgivable page of the paper: https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding...


Wholly agreed — the paper is a pretty mild analysis of the ways different companies have signaled their trustworthiness, and was hardly a scathing critique of OpenAI (or unreserved praise of Anthropic). I found it pretty decent.


Third, she was on the board of a non-profit, with different goals and duties from a company's.


Wall Street is full of corporate board members being very critical of CEOs' approaches and thereby driving sales of divisions, asset disposals, layoffs, etc.

That's one of the fundamental jobs of the board: to be critical and to guide direction, especially if that direction differs from what is happening at the moment.

And yes, multiple Ford board members were, and are, extremely critical of Ford's approach to cars, and are very public about it. That's why multiple CEOs were sacked, and even members of the Ford family are not spared criticism from the Ford board.


Is Ford a non profit?


This is a poor comparison. Even at a non-profit, if a board member wrote anything that went against the non-profit, it would certainly rile up the other members.


This is a non-profit whose charter places certain values above the reputation of the organization. The board members arguably should not have been riled up by the article.


You're projecting your feelings onto boards you are not a part of.

The board of even a non-profit can have motivations. Your original comparison is still false and unproven. Sorry.


A board member did not write anything against the non-profit's motivations. They wrote an article critical of the for-profit operation.


And who controls the for-profit? The non-profit.


Keanu Reeves whoa! That's the whole point of the original charter.


Sorry, I don't understand the point you're trying to make.


[flagged]


References, please. From what I remember about all the recent problems in the automotive industry, Ford did not need a bailout because they were fully capitalized, with cash on hand to ride out the period that drove GM and whoever controlled Dodge/Jeep to beg for a bailout. I may be wrong, but I don't remember Ford needing to be propped up at any time in the last 25 years or more.


Ford was the only US automaker to not need a bailout.



Yea, this turned out to be the simplest explanation… try to undermine the board, get fired, win anyway (lol)

Lesson here is… the next time a startup CEO gets fired, they should try the social-media spam gambit…? Depends on how much $ is on the line, but it worked at least once!


I think the lesson is: if, after you fire your CEO, 80% of the organization threatens to quit, then maybe don't do that. Though if MSFT hadn't been there for a soft landing ($), I'm not sure the number would have been as high.


The lesson is: if you're going to fire your CEO for alleged unethical behaviour, make sure you're willing to actually argue why you fired him, to convince people it was a reasonable decision.


Agreed. The board's public note about all this basically read as, "Sam is not able to do his job." They released it during a time of generally high praise for OpenAI's success, so nobody believed it. I hate to take sides here, but the board really showed incompetence with public relations. Having said that, I'm sure there's some poor PR person at OpenAI sweating over exactly this, and to them I say: it's not your fault. Every single person on the board should have been involved in that public notice, and everyone apparently agreed it was a sufficient explanation for a bunch of people with no internal context.


I really think the board and the whole organization would've come out of this just fine (minus a handful of resignations) if they'd just outlined their reasons to employees. Of course you can't keep a CEO you can't trust.

Really bizarre that they didn't do that.


The lesson is if your relatively unknown little nonprofit startup stumbles into a giant pile of gold, it doesn't matter how pure and ethical your intentions are.


And the irony is, that's the main plot and lesson of like half of fables, legends, fantasy and sci-fi stories ever made.


Or if you’re fired, convince 80% of the org that you’re taking their big payday with you.


I think Matt Levine covered it well. Having technical control is all fine and dandy, but having the backing of money usually matters more than anything. Microsoft was firmly behind Sam, which made all the difference.


This entire debacle has convinced me that Altman is both smarter and more ruthless (Machiavellian?) than anybody who was on the board, and, as little as I care about him, his mission, or OpenAI, I still think they are better off with him solidly in charge.

Compare the "triumphant return" letter he wrote (https://openai.com/blog/sam-altman-returns-as-ceo-openai-has...), versus the "we're jettisoning Sam because he's not candid" letter: https://openai.com/blog/openai-announces-leadership-transiti...

The board's letter is a big "WTF" that led to their downfall. Sam's is basically "I'm back and bigger than ever, thanks to all the minions who helped, now let's get back to implementing the singularity".


Well, I think it is interesting that Adam D'Angelo remains on the board. If the theories about a power struggle are correct, he managed to be part of the "coup" attempt and remain at the seat of power despite its failure.


Maybe the real coup targeted the board after all!


> Altman began approaching other board members, individually, about replacing her. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought”, the person familiar with the board’s discussions told me.

Psychopath.


"super excited"

This is funny language; it's been adopted into the corporate-bullshit buzz-phrase guide thanks to Altman.


Yes, this was a coup, no doubt orchestrated by Microsoft with eager participation from Alternative Man. No sarcasm there; Microsoft should inspire the deepest cynicism in anyone who views their dealings.


I think OP actually makes clear that this was not an MS-inspired coup. MS appears to have had no role in the CSET-paper criticism or anything else I've found in the run-up; during the crisis it was mostly reactive, defaulting to a simple loyalty of "whatever our guy Altman says" (which included Altman+OA defecting to MS as their BATNA against the board); and when the war went hot, it ultimately decided that the status quo was most in its favor:

"As enticing as Plan C initially seemed, Microsoft executives have since concluded that the current situation is the best possible outcome. Moving OpenAI’s staff into Microsoft could have led to costly and time-wasting litigation, in addition to possible government intervention. Under the new framework, Microsoft has gained a nonvoting board seat at OpenAI, giving it greater influence without attracting regulatory scrutiny."


How did you come to the conclusion that this was orchestrated by Microsoft?


I feel comfortable concluding that the serial arsonist seen throwing molotov cocktails at the building was the actual cause of the fire, and that there wasn't some coincidental electrical fire.


Then how would you explain that Microsoft didn't find out about any of this until a minute or two before the board fired Altman?

If they were involved in any way, wouldn't they have known this was about to happen?

Your version of events requires us to assume all the board members are lying, Altman is lying, Microsoft is lying, and you and you alone know the truth.

EDIT: after reading some of your other comments, it's clear you aren't operating based on facts, just some made-up fantasy that you want to believe :(

I regret trying to engage in meaningful dialogue with someone who isn't arguing in good faith :(


Tsk tsk thou shalt always assume good faith!


> Yes this was a coup no doubt orchestrated by Microsoft

Really?

How did you reach that conclusion?

All the evidence points to MS being blindsided.


Correct, Microsoft and Alternative Man have excellent PR campaigns.


So, no evidence for your opinion.


See: https://news.ycombinator.com/item?id=38493318

When a company of shysters repeatedly shysters its way through history, assume interactions with them will generally be shysty. I'd say this is some sort of reverse Hanlon's razor.

As a sailor of yore manning a merchant ship, when approached by a frigate flying the skull and crossbones, I suppose you'd say, "But captain, we cannot know their true intentions! Perhaps they wish for some ad-hoc trade! It would be premature, and I dare say prejudiced, to assume otherwise!"


Arguing by analogy and prejudice.

I hate Microsoft for their demonstrated shady practices; I do not hate them for fictional sins they might have, but probably have not, committed.


Prejudice based on experience is postjudice, aka learning.


>Alternative Man

That sounds like the kind of name an AI incompetently trying to disguise itself would use.


[flagged]


Name a successful CEO who most people say is "nice". Steve Jobs? No. Larry Ellison? No. Bill Gates? No. Nice guys don't become CEOs. Most CEOs are slimy.


I haven’t heard too many negative comments towards Warren Buffett. He also has a pretty down-to-earth persona. But in general I tend to agree that nice guys don’t succeed in business.


I haven’t really heard of Jobs or Gates being manipulative within the company, just ruthless. Curious if anyone has comparable manipulation stories?


So? He is still a slimy person. Btw, Lisa Su and Rose Marcario, but both are women, so maybe you are right that nice guys can't be CEOs. ;)


Cook? Nadella?


It honestly seems like the board isn't a good fit, and they tend to have crazy mental breakdowns that cause the entire company to lose stability.

I agree Sam is too smart for them; they need to replace the board.


There are more mental-health issues around that company than breakdowns. The constant bombardment of schizophrenic doomerism is a good indicator of what the issues may be. Altman, on the other hand, is just a bro wanting to make money by inflating the capabilities of a chat bot through FOMO, FUD, and an army of incel spammers.


I'm not sure why we should see him as a "bro".


The general consensus (in the places I've read) is that the old board were incompetent blunderers. TFA seems to agree, without saying so outright.

I don't see how you can build an $80 billion operation with a board full of blunderers. I get that it was those blunderers AND Sam until about a year ago, and I get that a year earlier still they had several experienced businessmen on the (then larger) board.

None of the "blunderers" has spoken yet, AFAIK; I doubt they've taken the cloth and become Trappists, so I assume they're keeping their powder dry and waiting to see how the situation shakes out.


The CEO and employees are the ones who build the company.

In many places, the role of the board is just to hire and fire the CEO. In some orgs they have more roles: securing financing or customers, overseeing audits, or approving certain things (e.g., investments or contracts). Sometimes the board has further prerogatives.

But in general it is a largely ceremonial role that once a year chooses the audit firm (one of the Big Four, so not very hard) and maybe passes some obligatory resolutions, like confirming the audit result. The board's biggest power is to remove the CEO and set the CEO's compensation.

Getting a seat on the board of a public company is like free money. Most boards don't meet often. Typical boards at companies represent the shareholders (i.e., they remove the CEO if he wants to do something crazy), while at non-profits they are usually there for financing or networking.

So yes, you can get very far with a bad board, because most boards are quite inactive. The members read some documents, maybe sometimes request some data, and that's all.


This seems like an overly simplistic take and reads like someone who hasn't worked closely with board members or been on the board of a company with a lot of money behind it.


Apart from personal attacks, you didn't provide any constructive criticism.

Also, you are wrong.


Thanks.

Yes, I guess I knew that about boards, subliminally.

> So yes, you can get very far with a bad board

That I didn't know. I'm inclined to believe it, and at first glance it seems to clear up quite a few things in my mind about government in general that keep surprising me.



