The OpenAI Shakeup

A Guide for the Perplexed

"Four times now in the history of OpenAI – the most recent time being just in the last two weeks – I've had the opportunity to be in the room when we pushed the veil of ignorance back and the frontier of discovery forward. Being able to do that is the professional honor of a lifetime.”

– Sam Altman, former CEO of OpenAI, two days before he was fired

Who else is suffering from AI vertigo this week?

If you’ve tried to stay on top of the news about OpenAI, it has felt like a full-time job. Just when you think you’ve got a handle on the latest developments, the plot twists again.

Case in point – as I was about to publish this, Elon Musk tweeted out a bombshell, which I'll get to in a moment.

In the realm of AI news, this is par for the course – the only constant is change. But lately, it feels like things are accelerating to the point where it’s dizzying and impossible to keep up by yourself.

To make sense of recent events, I organized an impromptu huddle with Chris Dolinski (@1Dolinski) and the “Chief AI Officer” to discuss what the heck is happening. Dolinski is the founder of Vibehut.io, the video conference platform that hosted our call, while the Chief runs a popular newsletter focused on AI fundraising news. They both filled in important gaps in my knowledge, such that I now feel as qualified as anyone (read: not very) to write the following explainer. Forgive me if it's already outdated by this evening.

Setting the Scene

Let’s start with some context and chronology.

Since its founding in 2015, OpenAI has been the gold standard for AI innovation and progress. Its “Dev Day” showcase a couple of weeks ago marked another leap forward: then-CEO Sam Altman introduced “custom GPTs” and heralded the arrival of a new app store, which looked like a game-changer.

But all was not well behind closed doors at OpenAI. While Altman touted the company's coming attractions, internal strife and philosophical divisions were simmering.

On Friday, Altman was fired from the company he had steered to prominence since taking over as CEO in 2019. The board members behind the firing claimed he had not been “consistently candid” in his communications with them.

The drama intensified over the weekend when Microsoft – one of OpenAI's biggest investors and strategic partners – announced that Altman would be joining to lead a new AI research team. By Monday afternoon, over 700 of OpenAI's 760 employees had signed a petition demanding the board reinstate Altman, or else they would quit.

Recall Altman's words in the epigraph above, spoken just two days before his ousting. His talk of pushing the “frontier of discovery forward” hinted at yet another significant breakthrough in AI under his leadership. But with Altman no longer at the helm, the future direction of OpenAI hangs in the balance, leaving those of us who depend on its tools in a state of uncertainty.

Questions abound:

  • What groundbreaking development did Altman witness in that room where they “pushed the veil of ignorance back”?

  • How did this shape his vision for AI's future, and more critically, did it contribute to the discord within OpenAI?

  • Will the launch of the custom GPT store still go ahead as planned later this month?

  • What about the much-anticipated release of GPT-5 – will it still happen, and if so, in what form?

No one knows the answers to these questions, but we can make more educated guesses with a better understanding of the AI landscape as it stands today.

Let’s start with the apparent reasons for Altman’s firing.

Dueling Incentives at OpenAI

Rumors abound, yet clear answers on why Sam Altman was fired from OpenAI remain elusive. The most compelling theory suggests a clash of distinct factions within the company.

On one side were the "commercializers," driven to bring products like ChatGPT and the new app store to market. Spearheaded by CEO Sam Altman and President Greg Brockman, this group viewed monetizing AI inventions as essential for funding further research and advancing beneficial AGI. They prioritized serving current users while pushing technological boundaries.

In contrast stood the "research purists," with Chief Scientist Ilya Sutskever at the forefront. This faction, guided by OpenAI's nonprofit charter, emphasized developing AGI for humanity's benefit, with capped profits intended to limit base motives and greed. They cautioned against rapid commercialization, prioritizing safety and alignment over financial gains and market milestones.

An anonymous letter, purportedly written by former OpenAI employees – the bombshell Musk tweeted – puts the grievances in far starker terms:

“We believe that a significant number of OpenAI employees were pushed out of the company to facilitate its transition to a for-profit model. This is evidenced by the fact that OpenAI's employee attrition rate between January 2018 and July 2020 was in the order of 50%.

Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence (AGI). Their methods, however, have raised serious doubts about their true intentions and the extent to which they genuinely prioritize the benefit of all humanity.”

– Concerned Former OpenAI Employees

We don’t know who wrote this letter, and it may not be legitimate. But a few things are clear.

The recent commercial success of ChatGPT seems to have been the spark that ignited this tinderbox. The launch of custom GPTs for paying users, in particular, put new strain on the company's computing resources, to the point that OpenAI had to pause new ChatGPT Plus subscriptions and throttle usage (you may have noticed a slowdown of late – I know I did).

Sutskever, as Chief Scientist, was committed to dedicating more computing resources to internal research. For him, studying risks and aligning AI wasn't just a professional stance; it was almost a spiritual calling. This approach, often seen as over-cautious by the commercializers, was rooted in his belief in the existential risks of advanced AI and his duty to uphold OpenAI's mission of developing AGI for the greater good of humanity. At least two board members, Helen Toner and Tasha McCauley, appear to have been motivated by similar concerns, embodied in the “effective altruism” movement that is prominent among brainy technologists seeking to guide AI toward benevolent ends.

Chris Dolinski, analyzing Sutskever's perspective, draws parallels with Paul Graham's essay “Maker's Schedule, Manager's Schedule.” Sutskever, a prototypical “maker,” focused more on the science of AI and its ethical implications than on organizational logistics. His background as a pioneering researcher in neural networks – a field long neglected in academia – informed his view of these systems as more than mere tools; to him, they were akin to miniaturized brains requiring meticulous nurturing before being unleashed into the world at large.

Dolinski speculates that Sutskever may have underestimated the organizational and political implications of his decisions. With his maker's orientation, he may not have anticipated the backlash that firing Altman would bring about.

"He maybe didn't realize the huge shift his vote or opinion could create," Dolinski observed, "and got hit by this tidal wave of internal politics.”

Once he saw the mass exodus of talent from the company, Sutskever began to backpedal – tweeting that he “deeply regretted” his involvement in the board’s actions.

Sutskever and others who shared his concerns now confront a dilemma: the commercial success that funded their research is under threat.

Matt Wolfe, an AI industry commentator, points out the crux of the issue: Compute – the stuff that makes AI go brrrrr – is a finite resource:

“A research lab would want as much compute as possible dedicated towards the actual training and advancement of the tech,” Wolfe notes, “However, the more ChatGPT and Dalle grow, the more compute needs to be dedicated to the actual commercial side of the business, taking resources from the research side.”

Wolfe speculates that with each new announcement Sam made – larger context windows, custom GPT building, and so on – Ilya and the board were probably getting more and more frustrated, as both compute and employees were being dedicated toward commercialization instead of research.

In a bid to balance these competing needs, Sutskever pushed for allocating 20% of the company's compute resources to a new “superalignment” team, charged with solving the problem of aligning future superintelligent systems before they arrive. But the total amount of compute available over the long term is determined in large part by the commercial viability of the company. So the paradox remains.

This dilemma underscores the inherent tension in every big AI company between advancing research and driving profits. Dario Amodei and six other former OpenAI employees cited similar concerns when they formed Anthropic. Yet even within this new venture, the same debates between the “makers” focused on research and the “managers” pushing for revenue generation continue to play out. Dolinski noted a similar pattern at Cohere, an AI company founded by Google Brain alumni, emphasizing, “There's this fountain of AI engineers branching out, each pursuing their own unique approach.”

Each time there is a split, the idealists think they’ve escaped the dilemma, but the profit motive always creeps back in.

Alternative Theories for the Firing

Beyond the widely-accepted narrative of internal discord at OpenAI, there are other theories for Altman's ousting.

One theory centers on the potential mishandling of a vast amount of data under Altman's leadership. Could a significant data leak have triggered the board's drastic action? OpenAI's COO, Brad Lightcap, has denied as much, stating that Altman's firing was not due to “malfeasance or anything related to our financial, business, safety, or security/privacy practices,” but rather a breakdown in communication between Altman and the board. Still, the possibility adds a layer of complexity to an already tangled situation.

There have also been allegations of personal misconduct.

Another compelling angle involves Adam D'Angelo, an OpenAI board member and the CEO of Quora. The relationship between Quora and OpenAI took an intriguing turn when ChatGPT, partially trained on data from Quora, emerged as a more convenient information source than Quora's own forums – a blow to the company. As if that weren't bad enough, Quora had launched its own AI app platform, Poe, announcing creator monetization just a week before OpenAI's Dev Day, where Altman unveiled a competing app store. D'Angelo had invested significantly in Poe, only to see his efforts potentially overshadowed by OpenAI's new offering.

This would explain why D'Angelo might want Altman gone, but it doesn't explain the motives of the other board members, Tasha McCauley and Helen Toner. It's possible that D'Angelo convinced them to do what was in his best interest as CEO of Quora by invoking the language of effective altruism and concerns that Altman's future plans were jeopardizing the mission.

Winners and Losers

Through the turmoil, Microsoft has emerged as a clear winner. They are poised to benefit from the acquisition of OpenAI's top talent (most of all, Altman) while continuing their pursuit of AI dominance. Jim Fan, a senior AI scientist at Nvidia, sums it up as follows: “Somehow in this epic meltdown, Satya [Nadella, CEO of Microsoft] swoops in, wins it all, and wins with grace. I'm floored.”

The partnership between Microsoft and OpenAI has been integral since 2016. The balance of power in this relationship, however, appears to have just shifted. The Chief AI Officer, whose newsletter on AI fundraising is read by 1,500+ industry insiders, opines, “I think the tables have turned.”

He emphasizes that OpenAI needs to navigate this situation with caution as it tries to negotiate with Sam and its employees. It would be easy to give Microsoft too much leverage. Recent remarks by Satya Nadella have hinted at potential governance changes and a board seat for Microsoft if Sam Altman returns to OpenAI. But this scenario could undermine OpenAI's mission as a non-profit organization and challenge Altman's original vision of a safety-focused AI development path.

Moreover, companies building on Anthropic's “Claude” assistant – which is backed by Amazon and Google – are also poised for significant gains. Anthropic's recent rollout of Claude 2.1, with its expanded context window, demonstrates the robust AI development occurring outside OpenAI's domain.

On the flip side, Ilya Sutskever and the current OpenAI board find themselves in a less favorable position. Their decisions, aimed at steering OpenAI back to its original research-focused mission, will likely lead to a degradation in services like ChatGPT. The organization will face challenges in maintaining its staff, funding, and customer base. This could cause a death spiral, where falling revenues further limit its capacity for innovation. Accordingly, the board's aspirations for research advancement and being the first to create AGI might be hampered.

The ongoing chaos at OpenAI has prompted a cautious approach in the industry as a whole. Many companies, according to Chief AI Officer, are pausing crucial decisions and fundraising as they await the outcome of these changes.

Dolinski, who has relied on OpenAI’s API in building AI features for his Vibehut.io platform, is considering diversifying his AI resources to navigate the uncertain landscape.
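
What might that diversification look like in practice? Here's a minimal sketch – my own illustration, not Dolinski's actual code – of a thin wrapper that routes requests to OpenAI and falls back to Anthropic's Claude if the primary call fails. It assumes the official openai and anthropic Python SDKs with API keys set in the environment; method names and model ids may differ across SDK versions.

    # A minimal sketch of provider diversification: one call site, two backends.
    # Hypothetical illustration. Assumes the official `openai` and `anthropic`
    # Python SDKs, with OPENAI_API_KEY and ANTHROPIC_API_KEY set in the
    # environment. Details may vary by SDK version.
    from openai import OpenAI
    from anthropic import Anthropic

    openai_client = OpenAI()        # reads OPENAI_API_KEY
    anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY

    def ask_openai(prompt: str) -> str:
        resp = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content or ""

    def ask_claude(prompt: str) -> str:
        resp = anthropic_client.messages.create(
            model="claude-2.1",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

    def ask(prompt: str) -> str:
        """Try OpenAI first; fall back to Claude if the call fails."""
        try:
            return ask_openai(prompt)
        except Exception:
            return ask_claude(prompt)

A setup like this also makes it easy to compare providers on cost and quality over time, which is presumably the point of diversifying in the first place.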

For my part, I'm turning my focus away from building custom GPTs and moving back towards working with Claude, which has become my go-to AI tool for writing and content creation. In line with this new direction, I'm excited to announce my forthcoming book: Command the Page: Mastering AI to Sharpen Your Thinking, Shatter Writer’s Block & Sculpt Rough Ideas into Polished Prose.

Coming soon to the Amazon store
