OpenAI needs open-heart surgery. The ChatGPT developer’s new board of directors and its briefly fired but now-restored CEO, Sam Altman, said last week that they’re trying to fix the unusual corporate structure that allowed four board members to trigger a near-death experience for the company.
The startup was founded in 2015 as a nonprofit, but it develops AI inside a capped-profit subsidiary answerable to the nonprofit’s board, which is charged with ensuring that the technology is “broadly beneficial” to humanity. To stabilize this unusual structure, OpenAI could take pointers from longer-lived companies with a similar arrangement—including introducing a second board to help balance its founding mission with its for-profit pursuit of returns for investors.
OpenAI deferred comment for this story to new board chair Bret Taylor. The veteran tech executive told WIRED in a statement that the board is focused on overseeing an independent review of the recent crisis and enhancing governance. “We are committed to a governance structure that takes all stakeholders into account,” Taylor says. “And we are actively working to develop an expanded board that has the diverse experiences needed to implement important changes and effectively oversee the organization.”
Altman told The Verge last week that the board will need a while to debate, research, and pressure-test potential changes.
If you take OpenAI’s mission at its word, the stakes couldn’t be much higher. The company aims to build machines with capabilities on par with, or exceeding, those of humans, potentially affecting nearly every job in the world. Even if it falls short of that goal, the way OpenAI governs itself could determine who prospers and who suffers from the rollout of world-shaping AI technologies like ChatGPT. At the same time, competitors like Google and Amazon aren’t bound by the same structural limitations as OpenAI.
“Does it want to be a startup that just has some ethical grounding? Or does it want to be a lasting public institution that is building AI in service of humanity?” asks Mark Surman, president of the Mozilla Foundation. “They need to get clear with themselves—and get clear with the world on what they really want to be.”
Steady Foundation
Thousands of companies globally—including OpenAI, retailer Ikea, and drugmaker Novo Nordisk, developer of Ozempic—are structured as what some business professors call enterprise foundations, whereby a nonprofit organization controls a company that has gone all in on capitalism. Some billionaires employ the setup to reduce their personal taxes; other projects use it to prioritize non-business goals, which OpenAI says is its case.
The exact implementation can vary widely, but Mozilla is a stable example of combining a humanitarian mission with for-profit ventures. Started in 2003, its foundation has a handful of for-profit subsidiaries that include Mozilla Corporation, which develops the Firefox web browser and receives sizable payments from Google for promoting its search engine, and Mozilla.ai, a startup trying to encourage open source competition to OpenAI.
Unlike OpenAI, Mozilla’s nonprofit cannot fire executives in charge of for-profit work. Each for-profit unit has its own board, with members annually selected by the nonprofit foundation’s board. “It’s different jobs, it’s a different mix of skills,” Surman says. “If you have different functions, it makes sense to have a separation of powers.”
The different boards, with distinct characters and missions, are also intended to give the commercial endeavors greater autonomy. Mozilla tries to seat people who know philanthropy, open source technologies, social issues, and tech policy on the nonprofit board, Surman says. On the for-profit boards, it looks more toward leadership experience in venture capital or corporate marketing and innovation.
Mozilla’s different boards have sometimes convened to discuss big shifts in technology, like the emergence of generative AI, which led to the creation of Mozilla.ai. But the nonprofit foundation’s board holds ultimate authority by overseeing budgets and has the right to remove the for-profit board’s members. While that latter power hasn’t ever been exercised, there have at times been intense disagreements between what Mozilla leaders describe as movement goals and market goals, says Brian Behlendorf, a software developer who has been on the foundation’s board since its founding and is also a cofounder of the Apache Software Foundation.
In 2015, after consulting with the nonprofit board, the Mozilla Corporation shut down a project developing an open source mobile operating system that had spent hundreds of millions of dollars but struggled to win over smartphone makers. “To be competitive, you had to do more proprietary software and strike the kind of deals that were not about creating public goods,” Behlendorf says. “A letdown, but we didn’t see a way to fulfill the mission and Mozilla Manifesto.” That foundational document commits the project to keeping the internet open and accessible to all.
Competing Interests
Fixing OpenAI’s governance is in some ways more complex than anything ever faced by Mozilla, which has outside donors but no investors. OpenAI has to serve its overall mission of helping humanity while also pacifying investors who, after the recent crisis, are demanding a greater say in the organization’s direction. This is especially true of Microsoft, which has committed $13 billion to the company.
Microsoft CEO Satya Nadella made it clear last week that he considered it unacceptable to have been surprised by the board’s removal of Altman, which was communicated to OpenAI’s primary backer only minutes before it was announced publicly. “There is no OpenAI without Microsoft leaning in in a deep way to partner with this company on their mission,” Nadella said on journalist Kara Swisher’s podcast last week. “As a partner, I think it does mean that you deserve to be consulted on big decisions.”
OpenAI announced last week that Microsoft would get a seat on its board as a nonvoting observer. That lack of direct control could help OpenAI avoid scrutiny from US antitrust regulators under rules against interlocking directorates, in which overlapping full board memberships at big rivals are viewed as a threat to fair competition. Last week, OpenAI board chair Taylor wrote that the panel “will build a qualified, diverse board of exceptional individuals whose collective experience represents the breadth of OpenAI’s mission—from technology to safety to policy.”
A setup like Mozilla’s, creating a separate board for the for-profit part of OpenAI, could provide an easy way for Microsoft, other investors, and perhaps even employees to have a real voice at the top without weakening the authority of the nonprofit.
The employee and investor revolt over Altman’s dismissal effectively provided a dress rehearsal for how such a secondary board could serve as a safeguard during crises, says Ronaldo Lemos, a former board member of the Mozilla Foundation. “This coalition was pivotal in establishing the current realignment within the organization,” says Lemos, chief science officer for the Institute for Technology and Society in Rio de Janeiro.
Mozilla also maintains other checks over its commercial wing. The foundation owns the “Mozilla” trademarks and could in an extreme scenario revoke the subsidiaries’ licenses to use them. “That’s the thing that keeps them honest,” says the foundation’s Surman.
Licensing fees paid by Mozilla’s subsidiaries help fund grants and other charitable work by the foundation, whose budget for its dedicated staff runs to $30 million annually. OpenAI today shares staff between its nonprofit and for-profit arms, tax filings show. If the nonprofit board had its own dedicated research and policy teams, it would gain additional insulation and be better able to exercise independent oversight.
Fine-Tuning
Major structural changes aren’t the only way OpenAI could strengthen its governance. During the recent drama, investors and other observers raised concerns about OpenAI directors’ qualifications to oversee the project and the way board vacancies lingered for months. OpenAI could establish specific rules for its board composition and succession planning, likely in its bylaws, if it hasn’t already. Those could define the criteria by which a member is considered independent and the process for selecting independent directors.
“Probably the biggest thing would be to have business-savvy members on the board,” says Peter Molk, a University of Florida law professor who researches organizational design. “OpenAI isn’t a quintessential nonprofit, like a museum or local library—it has massive market presence, signs huge contracts, competes with major players.”
The board might also introduce or expand policies governing directors’ communications or conflicts of interest. Altman has personally invested in dozens of startups, including some with dealings with OpenAI, and is a valued advisor to entrepreneurs. Blake Resnick, CEO of public safety drone maker Brinc, says Altman “was the first check into Brinc, got me out of my parent’s garage, and has been ongoing supportive.”
Altman has recently tried fundraising for a new venture to develop computer chips for running AI software and has been linked to a venture developing a device with integrated generative AI tools. He told The Information in July that he generally follows an approach of avoiding direct conflicts with his OpenAI work and disclosing everything, and some investors unaffiliated with OpenAI tell WIRED that Altman’s outside engagements on their face aren’t alarming. OpenAI has said in tax filings that it has a conflict-of-interest policy requiring annual disclosures. But Altman’s varied commitments may have limited his attention to board matters; he reportedly told associates while fighting to return as CEO that he should have better managed OpenAI’s directors before they ousted him.
Even a simple communications policy could have helped soften the tensions between Altman and his board, or made the recent crisis less severe.
Altman clashed with former OpenAI director Helen Toner after she published a research analysis last month criticizing OpenAI’s product launch decisions. A person close to the board says the dispute was a “small thing that could have easily been resolved” without a policy. But the lack of detail in the board’s initial announcement of Altman’s ouster made the unexpected move arguably more damaging.
OpenAI was founded with a promise to be more transparent and open than the tech giants that had historically dominated AI. But while Mozilla has posted its bylaws, tax filings, and other financial information considered to be public records online, OpenAI hasn’t published comparable documents. Copies accessed by WIRED through government agencies also contain apparent errors.
OpenAI’s annual franchise tax report filed with the state of Delaware this year lists “Holden Karnofsku,” misspelling the last name of its director Karnofsky, a longtime philanthropy executive, who by other accounts stepped down in 2021. The filing doesn’t mention OpenAI cofounder Ilya Sutskever, who had been on the board from 2017 until he stepped down after Altman’s return as CEO last month. Reports for previous years also contain apparent inaccuracies and sometimes conflict with OpenAI’s disclosures to US tax authorities at the IRS. Entrepreneur John Loeber, who closely reviewed OpenAI public records last month, calls the inconsistency “baffling.”
OpenAI leaders haven’t been accused of wrongdoing, but knowingly making a false statement on the filings would constitute perjury. Delaware Department of State spokesperson Rony Baltazar says a corporation is required by law “to update pertinent information, including the composition of its board of directors” on annual reports but declined to comment further.
As the 2024 filing season approaches, a first job for OpenAI 2.0 might be getting its disclosures in order, with shaping a second board a possible next step.
Additional reporting by Will Knight.