When the news broke that Sam Altman had been ousted from OpenAI, the company he helped to create in 2015, it seemed to come out of the blue for most people and threw the technology world into disarray. The firing, however, might not have come as much of a surprise to Y Combinator founder Paul Graham, who was Altman’s mentor during his previous role at the startup accelerator. In 2019, Graham flew across the Atlantic to fire Altman from Y Combinator because Altman “put his own interests ahead of the organization.” A similar dynamic appears to be reflected in the recent events at OpenAI and the board’s conflicts with Altman.
OpenAI was founded in December 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. The initial board members were Sam Altman and Elon Musk. Of these, the key players who remained leading up to Altman’s dismissal were Greg Brockman and Ilya Sutskever, and the latter was reported to be the leader of the board coup.
We can trace the origins of all this trouble to the early days of the company. OpenAI, as the name implies, was founded on open-source principles. Open source has its origins in software development and promotes the sharing of knowledge between developers, underpinned by special licenses: legal frameworks that dictate how software can be freely used, modified, and shared. These licenses are essential in the open-source community, as they protect the creator’s rights while allowing for collaboration and distribution. OpenAI was named to reflect this mindset. Furthermore, the company’s mission statement “…to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity” shows that OpenAI was created during the early years of the beneficial AI movement, which focused on ensuring that AI research would be safe for future generations and was promoted by the likes of Max Tegmark and Stuart Russell as well as Elon Musk and Stephen Hawking. Needless to say, these two founding principles are at odds with Microsoft’s January 2023 multibillion-dollar investment.
Microsoft has a strange relationship with open-source technology. In 2001, the company called it a “cancer” after a decade-long battle in which usage of its server software dwindled in the face of free alternatives. In more recent years, Microsoft’s Azure cloud computing service was built with open-source software and has raked in millions of dollars in revenue selling services based on open technologies. Yet its core products have remained staunchly closed-source, and it has become the world’s most important business software developer.
After Microsoft’s investment, OpenAI stopped publishing the secrets behind its AI systems. The models underlying earlier versions of ChatGPT were detailed in publicly available academic papers, enabling researchers to reproduce the work. Today, only OpenAI and Microsoft have access to the source code.
The fundamental driving force behind OpenAI has also changed since the investment. Instead of focusing on creating beneficial intelligence, the company launched paid subscriptions such as the $30-a-month Microsoft Copilot and the $20-a-month ChatGPT Plus. Altman then toured the world in an attempt to justify the commercial pivot. When he visited Madrid and spoke at IE University, for example, he commented on the new closed-source approach, calling for regulation to protect users from rogue organizations that might use the technology with bad intentions.
Yet, according to Andrew Ng, all the talk of regulation and closed-source software was an attempt to monopolize the industry and stifle competition and innovation. This strategy sounds like something Microsoft would devise: let’s not forget how Microsoft tried to monopolize the Internet in the 1990s by forcing Internet Explorer on users. Microsoft has, therefore, been the primary catalyst of change at OpenAI. Its involvement has sent one of the world’s most prominent AI companies in a direction at odds with the founders’ original intentions.
There has been a lot of speculation as to the real motivations behind the OpenAI board’s move on November 17. The official line was, and has remained, that Altman “was not consistently candid in his communications.” This can be interpreted as a sign that Altman was not being truthful with the board, but there has been no further clarification of the board’s reasoning. Most commentators have settled on the ideological differences between the company’s original mission and its new drive for profit, with Altman championing fast product rollouts and greater income for OpenAI.
According to the press, the driving force behind Altman’s dismissal was Ilya Sutskever, over concerns that Altman was going full steam ahead with creating, releasing, and promoting products without sufficient consideration of the dangers involved. This is why Sutskever called a meeting with Altman on that fateful morning to tell him that the board had voted to fire him and that he was being replaced immediately. The chaos that ensued highlights a considerable divergence between OpenAI’s origins and its present.
Essentially, OpenAI descended into anarchy. As with many modern dramas, the social media network X became the platform for announcing the roller coaster of changes. Greg Brockman posted that “Sam and I are shocked and saddened by what the board did today” and announced that he would be leaving the company with immediate effect. Altman praised the staff at OpenAI, stating that he “loved working with such talented people.” The board initially named OpenAI Chief Technology Officer Mira Murati as interim CEO but, in a dramatic turn of events, announced just two days later that former Twitch CEO Emmett Shear would take over instead. So, three CEOs in three days.
Altman returned to OpenAI wearing a visitor’s badge a day after being fired, but talks broke down. Microsoft then announced that it would hire Altman to head a new AI research division, together with the other founder who had jumped ship, Greg Brockman. The OpenAI board was suddenly looking at a rather dubious future for the company, compounded by a letter signed by roughly 700 of OpenAI’s 770 staff on November 20 demanding Altman’s return or else they would take their skills to Microsoft. It was basically mutiny, and it worked. On November 22, it was announced that Altman would return to OpenAI as CEO.
What does all this tell us about OpenAI as a company? In Altman’s words, “clearly, our governance structure had a problem.” A combination of the board’s inexperience with a company of OpenAI’s scale and political ambition was the main force behind Altman’s surprise dismissal. Microsoft played its part too. The winners and losers are clear today: the for-profit won, and the old values that were once the essence of OpenAI have disappeared.
The company emerging from the ashes of this recent tumultuous period is transformed: it is now focused on commercializing the technology rather than on building and disseminating knowledge. The implications for the future of the industry are profound. If we continue to push for mass commercialization without considering the impact, we could end up with less beneficial and less valuable AI systems. Closing access to algorithms has already begun to monopolize the industry and stifle innovation.
It’s not all bleak, though. OpenAI is not the only player in this rapidly expanding market. Some companies, such as Meta, have been allowing partial access to the code and free downloads of their large language models, and Google, which continues to support open-source software, remains focused on related research. Of course, the real winner of this whole affair is Microsoft, which has firmly established its influence over OpenAI. Microsoft will be the driving force behind the development of ChatGPT in the future, and it seems that the open, ethical, and moral focus of the old OpenAI is now little more than a distant memory.
© IE Insights.