We’re only one week into 2025, but OpenAI is already having a tough year. Here is everything that’s gone wrong for the influential company in the last seven days, and a quick look at the potential frustrations and headwinds it faces as it heads into the new year.
Sam Altman’s sister sues him
Annie Altman, the sister of OpenAI CEO Sam Altman, has sued him, accusing him of sexual abuse. The lawsuit, filed Monday in U.S. District Court for the Eastern District of Missouri, alleges that Altman abused his sister when she was three years old and he was 12. The suit claims that “as a direct and proximate result of the foregoing acts of sexual assault,” Annie suffered “severe emotional distress, mental anguish, and depression, which is expected to continue into the future.” It asks for damages in excess of $75,000, as well as a jury trial.
The allegations of abuse have circulated online for over a year and first gained mainstream attention in the days after Altman was controversially ousted from OpenAI (he was later reinstated). The litigation has pushed the claims to a much wider audience. Were the case to go to trial, it could prove disastrous for OpenAI from a PR perspective.
Altman’s family released a statement Wednesday responding to Annie’s litigation. “All of these claims are utterly untrue,” the statement reads. “This situation causes immense pain to our entire family. It is especially gut-wrenching when she refused conventional treatment and lashes out at family members who are genuinely trying to help.” The statement, which Altman shared on X, further characterizes Annie as mentally unwell and financially motivated, stating that the family has financially supported her for years and that she “continues to demand more money” from them.
A former employee’s family accuses the company of murder
In recent weeks, the company has also been the subject of conspiracy theories alleging that it murdered a former employee. The death of Suchir Balaji on November 26 drew immediate suspicion, despite the fact that the San Francisco Medical Examiner’s Office ruled it a suicide. That’s because, in the months before his death, Balaji had acted as a corporate whistleblower, claiming that the company was breaking U.S. copyright law. Only a few weeks before he died, Balaji published an online essay arguing that the company’s approach to content generation did not fall under the U.S. definition of “fair use.”
While police have said that there is “no evidence of foul play” in Balaji’s case, his family has claimed that he was murdered by OpenAI and has demanded that the FBI investigate his death. In an interview with The San Francisco Standard, the Balaji family said they “believed their son was murdered at the behest of OpenAI and other artificial intelligence companies.” “It’s a $100 billion industry that’d be turned upside down by his testimony,” said Poornima Ramarao, his mother. “It could be a group of people involved, a group of companies, a complete nexus.” The medical examiner’s autopsy report has not yet been made publicly available.
The Cybertruck bomber allegedly used ChatGPT to plan his attack
To top things off, it was recently revealed that the guy who blew himself up in a Cybertruck outside the Trump International Hotel in Las Vegas used ChatGPT to plan the attack. Las Vegas police detailed the findings at a press conference on Tuesday. “This is the first incident that I’m aware of on U.S. soil where ChatGPT is utilized to help an individual build a particular device,” said Las Vegas Sheriff Kevin McMahill. “It’s a concerning moment.” It’s not exactly something OpenAI is going to want to include in its ad copy (“Useful for planning terrorist attacks!” just doesn’t have a great ring to it).
Political headwinds
OpenAI doesn’t just face a slew of bizarre, splashy scandals; it also faces the political realities of Trump’s second presidency. Elon Musk, a co-founder of and early investor in the company turned its fiercest adversary, notably helped Trump win and now enjoys unparalleled access to the federal government’s levers of power. Even as he’s been dubbed America’s “co-president,” Musk is waging a legal war on OpenAI that the company has dismissed as “frivolous” but that shows no signs of going away.
The lawsuit Musk filed last year alleges that the company has betrayed its original mission in favor of pursuing a for-profit business model (OpenAI did recently announce it would be ditching its original, weird structure to pursue a more traditional business strategy). When we last checked in on the litigation in November, Musk had expanded the lawsuit to include other entities close to OpenAI, including its backer, Microsoft.
Beyond the legal battle, and beyond his ability to tilt federal policy in ways that could prove disruptive to OpenAI, Musk can also leverage the soft power of his media platform, X, to damage the company’s reputation. Indeed, Musk and his affiliates have seized on some of OpenAI’s recent controversies, openly spreading damaging conspiracy theories. The Standard notes that, after Suchir Balaji died, Musk and others close to him helped spread the conspiracy theories surrounding the coder’s death. When Ramarao, Balaji’s mother, tweeted about hiring a private investigator, Musk replied: “This doesn’t seem like a suicide.”
The fraught economics of OpenAI
OpenAI’s biggest dilemma may be less political than economic. The massive amounts of money being used to prop up the company have left many onlookers wondering: Is OpenAI’s business model even sustainable? Last year, the company self-reported losing some $5 billion while bringing in substantially less than that in revenue. OpenAI has claimed that its revenue will grow to roughly $11.6 billion by the end of this year and will continue to explode in the years to come.
Indeed, OpenAI has claimed that its revenue will reach $100 billion by the year 2029—a mere four years from now. Granted, as a company, OpenAI has grown at breakneck speed (its revenue jumped 1,700 percent in the space of a year, the New York Times has reported), though skeptics still see its projections as PR fantasies designed to draw in perpetual cash infusions from true believers in the venture capital realm. Blogger Ed Zitron, who has referred to OpenAI as an “unsustainable, unprofitable and directionless blob of a company,” notes that the company’s own estimations of its future revenue capacity are “fucking ridiculous.” Firmly repping the doubter camp, Zitron writes:
…the company says that it expects to make $11.6 billion in 2025 and $100 billion by 2029, a statement so egregious that I am surprised it’s not some kind of financial crime to say it out loud. For some context, Microsoft makes about $250 billion a year, Google about $300 billion a year, and Apple about $400 billion a year. To be abundantly clear, as it stands, OpenAI currently spends $2.35 to make $1.
Zitron notes that OpenAI appears to make the majority of its revenue from ChatGPT subscriptions, which aren’t bringing in enough money to offset its ongoing losses. The company also makes money by licensing its models for use in other software products. As it stands, it doesn’t much matter whether its revenue increases so long as the cost of providing the service remains this high. Sure, it could jack up prices, but OpenAI has deep-pocketed competitors whose models perform comparably on benchmarks.
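To make that arithmetic concrete, here’s a rough back-of-the-envelope sketch in Python using the figures quoted above. The $2.35-per-$1 cost ratio is Zitron’s estimate of OpenAI’s current spending; holding it constant against the company’s own revenue targets is purely an illustrative assumption, not a forecast of what its costs will actually be.

```python
# Back-of-the-envelope illustration of why revenue growth alone doesn't fix
# the loss problem if costs scale with revenue. The $2.35-per-$1 figure is
# Zitron's estimate of current spending; assuming it holds in future years
# is purely illustrative.

COST_PER_REVENUE_DOLLAR = 2.35  # Zitron's estimate: $2.35 spent to make $1

revenue_projections_bn = {
    2025: 11.6,   # OpenAI's reported 2025 revenue target, in billions
    2029: 100.0,  # OpenAI's reported 2029 revenue target, in billions
}

for year, revenue_bn in revenue_projections_bn.items():
    cost_bn = revenue_bn * COST_PER_REVENUE_DOLLAR
    loss_bn = cost_bn - revenue_bn
    print(f"{year}: revenue ${revenue_bn:.1f}B, costs ${cost_bn:.1f}B, "
          f"implied loss ${loss_bn:.1f}B")

# Output:
# 2025: revenue $11.6B, costs $27.3B, implied loss $15.7B
# 2029: revenue $100.0B, costs $235.0B, implied loss $135.0B
```

Under that admittedly crude assumption, bigger revenue just means bigger losses, which is the crux of the skeptics’ argument: the projections only pencil out if the cost of serving each dollar of revenue falls dramatically.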
In short: OpenAI has its work cut out for it. Beset by powerful adversaries, ongoing lawsuits, and looming scandals that could prove disastrous for the company’s brand, the company needs to prove that the media hype that carried it through the last few years can actually translate into cold hard dollars and cents. It’s unclear, at this point at least, how it’s going to do that.