The Oligarchy Is Afraid of Itself Too
- by motherjones
- Apr 29, 2026
In May 2016, Elon Musk did something out of character that he has now spent years of his life trying to undo: He made what he believed to be a charitable donation.
The world’s richest man is also among its stingiest. Musk’s private foundation often doles out less than the minimum percentage required by law. He has argued, instead, that his businesses are inherently philanthropic, since they develop technologies that will “extend the light of consciousness.” The $38 million he donated to OpenAI over the next four years was considerably less than the $100 million he later claimed to have given, or the up to $1 billion he offered behind the scenes. But it was vital capital at a critical stage, giving Sam Altman’s fledgling non-profit the nudge and the means to hire talent and make a name for itself in the artificial intelligence arms race. Over time, the two men’s ambitions diverged and the relationship soured. Musk left the board, stopped sending checks, and launched a competitor, xAI. In 2024, he sued Altman and OpenAI, alleging that they had abandoned their mission and misused his money.
The case, which goes to trial this week in an Oakland federal court, is a clash over AI’s past and future. Musk accuses Altman and OpenAI president Greg Brockman of “stealing a charity” by effectively turning OpenAI into “a fully for-profit subsidiary of Microsoft.” Musk wants the now-private company behind ChatGPT to revert to the open-source non-profit he gave money to. The defendants have denied reneging on any agreement with their early benefactor, and painted Musk instead as a bitter and untrustworthy rival who schemed behind the scenes to benefit his own interests. There are designer drugs and disappearing emails; interludes at Davos and Burning Man; and altogether too much Larry Summers.
Fundamentally, though, Musk v. Altman is about power—who has it, who should have it, and how it can be used. At a moment when Americans are pushing back against the physical infrastructure of AI and its approval ratings hover somewhere between the Democratic party and ICE, court filings made public ahead of the trial offer a revealing look at how tech oligarchs really see themselves, and the technology they promise will level-up civilization. They want you to trust them. But they don’t even trust each other.
Musk and Altman were first brought together by, of all things, a fear that too much influence was accruing in the hands of one Silicon Valley figure. In 2015, Google and its DeepMind subsidiary were the undisputed leaders in the race for Artificial General Intelligence. As Musk recalled in a 2025 deposition for his lawsuit, he came to fear Google’s hegemony after a conversation with Larry Page while staying at the Google co-founder’s house, sometime in the late Obama era. Musk had wanted to know what would happen to people when we reached AGI. Page had chastised him as a “speciesist” for raising such concerns, and said AI was “our successors.” Musk said in his deposition that based on that conversation, and others he had around that time, he came to fear “a unipolar world where any one person would control AI.” He had one specific person in mind: DeepMind’s CEO, Demis Hassabis. In a 2015 email thread in which he and Altman tried to hash out a name for their new venture, Musk proposed calling their emerging AI project “Freemind,” as a way of signaling its opposition to “Deepmind’s one-ring-to-rule-them-all approach.” Draft language included in an email shared by Musk said the group’s purpose would be to ensure “the power of digital intelligence is not overly concentrated.”
The tech elites are worried about one person exerting too much control, but they’re not really interested in delegating power either.
That OpenAI—not “Freemind” or, as Altman suggested, “Axon”; “Intelligence.com”; or something “related to Turing somehow”—was initially pitched as a more altruistic, safety-conscious venture is well established, but it is nonetheless striking to read their behind-the-scenes conversations about forestalling what they feared would be Hassabis’ AI dictatorship. Brockman emailed a prospective hire that the aim was to avoid “making anyone into a quadrillion-dollar company or omnipotent surveillance state.” (He continued: “I think most people see the costs of AI (a la Terminator) but don’t know what the benefits would be. Maybe this requires something crazy like getting more movies like Her made.”)
While the men behind OpenAI may have all agreed that the technology they hoped to build would be too powerful to end up in the hands of just one person, figuring out exactly how many other people it should be entrusted to proved more difficult. They ran through a variety of numbers and structures. When they first began plotting in earnest in June 2015, Altman had proposed a five-person “governance structure” composed of himself, Musk, Bill Gates, eBay founder Pierre Omidyar, and Facebook co-founder Dustin Moskovitz. “The technology would be owned by the foundation and used ‘for the good of the world,’” Altman wrote, “and in cases where it’s not obvious how that should be applied the 5 of us would decide.”
But it was hard to find the right mix of people. Musk didn’t want to work with Gates (“not his biggest fan,” he said in his deposition), and Moskovitz ultimately gave money, but not much time, to the project. Mark Zuckerberg’s own AI projects ruled him out. (Still, he makes an appearance: According to court filings, when Zuckerberg texted Musk in early 2025 to say that his team would crack down on people “doxxing” DOGE members, Musk texted back to ask if he was interested in “bidding on the Open AI IP.”) While Amazon Web Services was an early partner, no one seemed to suggest bringing on Jeff Bezos, whom Musk described in a later email as “a bit of a tool.” They started off small, with four board members, then bumped it to seven.
By September 2017, with DeepMind still lapping the field, Musk, Altman, Brockman, and star researcher Ilya Sutskever were at loggerheads over how their project could keep growing. They considered a variety of restructuring options—including merging the company with Tesla, or transitioning to a for-profit venture. It’s all a bit in-the-weeds, but the debates they were having internally about how to distribute power amongst themselves are striking. Musk wanted to “unequivocally have initial control” of a rebooted venture, and said he would not be comfortable unless he personally held at least one quarter of the seats on an expanded board. “[T]he rough target would be to get to a 12 person board…where each board member has a deep understanding of technology, at least a basic understanding of AI and strong & sensible morals,” Musk wrote in a 2017 email, while conceding it would “probably” have to be “more like 16 if this board really ends up deciding the fate of the world.” During a meeting that year, Musk, according to notes taken by Brockman, raised expanding the board in the same megalomaniacal terms, saying “the challenge is gonna be how do we find, who should decide the fate of the world.”
You can see some flaws emerging here, both philosophically and logistically: On one hand, the tech elites are very worried about just one person exerting too much control; on the other hand, they’re not really interested in delegating power either. A dozen or so people, adhering to Elon Musk’s sense of morality, does not a democracy make.
Indeed, Musk’s partners in the venture expressed misgivings about giving him too much control. In a September 2017 email titled “honest thoughts,” Brockman and Sutskever wrote to Musk and Altman to express their fear that Musk would “end up with unilateral absolute control over the AGI… The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So do we. So it is a bad idea to create a structure where you could become a dictator if you chose.” In a private journal that’s been excerpted in court records, Brockman expressed his desire to “get out from Elon,” and questioned whether Musk was the “glorious leader” they urgently needed.
The fellowship scattered not long after. Musk left OpenAI in early 2018, and the project launched its for-profit arm later that year. Then it was Altman’s turn to be the target of suspicion from people who believed he couldn’t be trusted with the one ring. (Disclosure: The Center for Investigative Reporting, the parent organization of Mother Jones, has sued OpenAI for copyright violations. OpenAI has denied the allegations.)
OpenAI, according to Musk’s lawsuit, has become a “market-paralyzing gorgon,” and a “for-profit leviathan” that has betrayed its founding ideals and sacrificed safety for money and market-share. Musk’s complaint laments that “OpenAI dropped a clause from its Usage Policies banning the use of its technology for ‘activity that has a high risk of physical harm’ such as ‘weapons development’ or ‘military and warfare.’” The complaint also warned that OpenAI was abandoning its safety mission at a time when AI “is leading to a proliferation in child sexual abuse material” and “supercharging the spread of disinformation” and “malicious human impersonation.”
What started as an underdog alliance has become a parable about hubris and power.
It’s an interesting argument coming from Musk, a Pentagon contractor who built a Nazi-loving chatbot for pervs. It’s likewise a bit dissonant for someone who destroyed public health programs in the name of cost-cutting to argue that the “obligation to generate financial returns” will corrupt someone else’s mission, but Musk was not the only person raising these concerns about Altman. I won’t rehash the 2023 power struggle at OpenAI that led to Altman being fired by the board and then reinstated days later, but suffice to say, it is central to the narrative of the lawsuit. Musk’s team has cited criticisms of Altman by Sutskever and Dario Amodei, who both left OpenAI to start new companies after questioning Altman’s commitment to safety. Musk suggests that Altman—“Scam Altman,” as he called him eight times during his deposition—has become the Demis he wished to stop.
The court records offer a rare glimpse at the strained relationships and bruised egos behind one of Silicon Valley’s nastiest falling-outs. We find Altman backchanneling with Shivon Zilis—the Neuralink employee who, unbeknownst to Altman, had multiple children with Musk while serving on the board of OpenAI—in 2023 to ask if he should tweet something nice about Elon to make him feel better. We find Musk’s lawyers moving to censor a portion of their client’s deposition where he was asked if he ingested something called “rhino ket” at Burning Man in 2017 (he says he did not), and OpenAI’s lawyers responding in a later brief with the dubious but indelible words, “there’s nothing unfairly prejudicial about attending Burning Man.” Zilis is asked if she and Musk have “ever been in a romantic relationship” and responds by saying: “‘Relationship’ is a relative term. But there have been romantic moments.” Sounds like a dream.
“it really fucking hurts when you publicly attack openai,” Altman wrote Musk in one 2023 text exchange.
“it is certainly not my intention to be hurtful, for which I apologize, but the fate of civilization is at stake,” Musk responded.
“i agree with that, and i would really love to hear the things you think we should be doing differently/better. it’s also not clear to me how the attacks on twitter help the fate of civilization,” Altman wrote.
What started in 2015 as an upstart alliance against Google has become, in every respect, a parable about hubris and power. They each believe the other is the thing people hate about Silicon Valley, and they are each, in a sense, sort of right. Musk, in his deposition, pointedly noted having “read that—allegedly, Chat GPT convinced some kid to commit suicide.” (OpenAI has denied culpability.) Altman, in his deposition, called Musk’s Grok a “goonbot” and said xAI makes “anime sex bots for children.” (X has said it has “zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.”) In January, they waged a public argument on X about whose companies were responsible for more deaths. Neither OpenAI nor xAI responded to emailed requests for comment on the case.
During his months-long assumption of quasi-dictatorial powers within the federal government last year, Musk caused the loss of hundreds of thousands of lives and helped destroy the country’s capacity for cutting-edge research, while still finding time—as he revealed in his deposition—to complain about Altman to the president of the United States. Brockman, for his part, was perhaps not quite as concerned about a dictator as he once let on: In September he became one of the largest individual donors to Donald Trump’s super-PAC.
There’s one email exchange that embodies the mix of civilization-defining grandiosity and get-over-yourselves gamer brain that made and then broke the relationship—a short back-and-forth in the hours before OpenAI’s official 2015 launch. Altman and Musk took turns hyping each other up with motivational quotes that underscored their sense of civilizational struggle. Under the subject line “Re: Great Acton quotes,” Musk shared a remark attributed to the British aristocrat: “Liberty consists in the division of power. Absolutism, in concentration of power.” Altman replied with a link to a YouTube trailer for Halo 3, which began with the words, “This is the way the world ends.” (In fact, T.S. Eliot.)
Acton, of course, is most famous for a line that captures the essence of these court filings, even if it doesn’t show up in them: “Power corrupts. Absolute power corrupts absolutely.”