I’ve been mulling what lessons we can draw from the board/CEO dysfunction playing out at OpenAI, and a couple of things are clear even as we await further details from the company. This article contains a lot of “reportedly” because the OpenAI board has said very little publicly, and there are undoubtedly many twists, turns and revelations yet to come.
The first, and perhaps most important, lesson is to think the entity structure and purpose through carefully.
Was the purpose of OpenAI clear, and was the structure appropriate to support it? OpenAI’s stated mission was to “ensure that artificial general intelligence benefits all of humanity”. The published company charter aimed to achieve “broadly distributed benefits”, “long-term safety”, “technical leadership” and a “co-operative orientation” to provide public goods. Complexity starts there.
The company structure may be a surprise. The parent entity, OpenAI Inc, is a non-profit and the New Zealand equivalent of a charity. It is exempt from paying tax. This structure recognised the potential societal impact of AI, a desire to have the AI technology open to everyone given its likely impact, and an intention not to profit unduly from what will become a generally used technology.
OpenAI Inc owns a for-profit company, OpenAI LLC. Its profits are capped, with the excess returned to the non-profit arm. On reaching certain profit levels, shares of OpenAI LLC (the for-profit arm) will vest in specific funders, including Microsoft, which has pledged US$13 billion.
Is that structure delivering on the mission? It is possible that question was part of the conversation when founder, CEO and board member Sam Altman was sacked for what the board described as a lack of candour.
Structure doesn’t begin and end with the company, or companies, of course. It is also an important consideration when it comes to the board itself. The for-profit version of OpenAI does not have a board, so the board that sacked Altman governed the non-profit. It comprised six directors: three employees of the for-profit OpenAI LLC (Greg Brockman, Ilya Sutskever and Altman) and three non-employees (Adam D’Angelo, Tasha McCauley and Helen Toner).
When the non-employee board members sought to remove Sam Altman (for the unspecified lack of candour), it is reported that Ilya Sutskever, a co-founder and OpenAI chief scientist, agreed with them. Other analysis suggests that Sutskever may have “gone along for the ride with the board” (OpenAI chief scientist says he regrets board’s firing of Sam Altman).
Board member Greg Brockman – also a co-founder, and then OpenAI president and chief operating officer – announced his resignation from the company after Altman’s removal.
This raises the point that boards should consider any executive director (and co-founder) arrangements and how they operate. And they should monitor the arrangements to ensure they continue to be fit for purpose as the organisation grows. This board structure seems to have delivered success for both versions of OpenAI in the past, through its start-up and expansion phases. But the recent news suggests it had a role in very public dysfunction that put the company at risk.
And that leads to the next lesson. When assessing risk, boards should think the unthinkable, including the actions that will be taken if relationships between co-founders break down or there is dysfunction and difficulty on the board.
Directors remain, of course, required to act in the best interests of a company. Is this what we have seen playing out at OpenAI – directors all seeking to act in the best interests of the company but disagreeing on what that means?
There are reports the board was concerned about other activities that Altman was undertaking, notably seeking to set up an AI chip manufacturing operation. Altman is reported to have been seeking investors to support this and may have had other side businesses as well.
There is a lesson about understanding your shareholders and stakeholders here, too. Investment firms – and Microsoft – had backed an OpenAI founded and run by Altman. Reportedly (again), sentiment shifted on the firm immediately following Altman’s departure, and its US$86 billion valuation plunged by between 50% and 100%, depending on which report you read (the valuation has, apparently, now returned to where it was). There was also a negative initial impact on Microsoft’s share price.
More than 730 OpenAI employees (estimated at around 95% of the company) signed a letter calling on the OpenAI board to resign after Altman’s sacking. In the letter, those employees said the manner in which Altman was terminated jeopardised the work to shape AI safety and AI governance, and that they believed the remaining board members “lack competence, judgement and care for our mission and employees”. The letter warned that staff might quit and join a newly formed Microsoft AI subsidiary.
The employees demanded that the remaining board members resign, that Altman and Brockman be returned to the board and that new independent directors be appointed. The letter was signed by Sutskever, one of the board members who voted to oust Altman.
Subsequently OpenAI Inc has said on X: “We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (chair), Larry Summers and Adam D'Angelo. We are collaborating to figure out the details. Thank you so much for your patience through this.”
D’Angelo was on the previous board and reportedly voted to oust Altman. Former board members Tasha McCauley and Helen Toner are gone.
Think through, and keep current, your succession plan for both the CEO and board members. Effective succession planning should be like a flawlessly executed relay race, ensuring a seamless handover of responsibilities. It aims to secure organisational stability, instil confidence in stakeholders, and maintain commitment to the organisation’s objectives.
Arguably, the best succession plans in the world may not have protected OpenAI from this particular implosion. See the note above on thinking the unthinkable.
Ensure your organisation’s risk management framework includes crisis communications. Do you have a crisis communications plan, one flexible enough to be adapted to a fast-developing situation? The key will be guidance on how to manage media interest and communication with stakeholders during a crisis. This should include who from the board, and within the organisation, needs to be across communications decisions, and which stakeholders you expect to communicate with. What you say will then depend on the crisis itself.
The lack of transparency from the OpenAI board (the reason this article has more than its fair share of “reportedly”) has created a vacuum into which all manner of speculation and commentary has crept.
To be clear, putting out short, ambiguous messages on X is not a sufficient crisis communications response. As our Four Pillars says: “Very often, reputational harm comes not from the severity of the crisis itself but from the timeliness and quality of the response.”
There will surely be more revelations to come in this story. But for now, it should give New Zealand boards pause to consider: