
The directors of OpenAI have not been very open. What changed?

Events at OpenAI have moved so quickly that it is easy to overlook the fact that no clear explanation has ever been given for why Sam Altman, hailed by his supporters as a brilliant leader, was fired in the first place. Given the potential consequences of artificial general intelligence, somebody ought to speak up and explain.

If the old board concluded that Altman was the wrong person for the job because he was taking too many risks with the technology at OpenAI, its members have a duty to say so. If, alternatively, their fears were unfounded, those behind the failed boardroom coup should explain themselves anyway. Silence, especially from people who preached transparency and safety, is unacceptable.

OpenAI’s original explanation for removing Altman was that he had not been consistently candid with the rest of the board, but it never said what he had not been candid about. One benign possibility is that the row was about how Altman divided his time between OpenAI and other business ventures, such as a computer chip project. If so, outsiders need not be too alarmed: boards question whether their chief executive is fully committed all the time.

The whole point of OpenAI’s unusual governance structure was to put the safe development of the technology first. However flawed in practice, the set-up was designed to give ultimate control to the board of the non-profit entity, with the interests of the profit-seeking subsidiary coming second. As Altman himself put it in February this year: “We have a non-profit that governs us and lets us operate for the benefit of humanity (and can override any for-profit interests), including the ability to cancel equity obligations to shareholders if needed for safety.”

In other words, the non-profit board can shut the whole thing down if it judges that to be the responsible course. Next to that ultimate power, firing the chief executive should, in theory, be a minor exercise of authority.

In real life, those arrangements were always unlikely to survive contact with an $86bn valuation. You cannot take billions of dollars from Microsoft in exchange for a 49% stake in the profit-seeking venture and not expect it to protect its investment when a crisis hits. And if most of the prized employees revolt and threaten to decamp to Microsoft en masse, the game is up anyway.

Yet the precise grounds for dismissing Altman still matter. Aside from Altman himself, the board had only four members. One was Ilya Sutskever, the chief scientist, who performed a U-turn he has never explained. Another is Adam D’Angelo, the chief executive of Quora, who, bizarrely, is set to carry over from the board that fired Altman to the one that rehires him. How does that work?

That leaves the two departed women: Tasha McCauley, a tech entrepreneur, and Helen Toner, a director at Georgetown University’s Center for Security and Emerging Technology. What do they think? Virtually the only comment from either has been Toner’s whimsical post on X after the rehiring of Altman: “And now, we all get some sleep.”

Does any of this matter? Rishi Sunak has warned that AI could pose a risk to humanity on the scale of nuclear war, which is roughly the consensus assessment of the stakes. If the leading company in the field cannot explain the turmoil in its own boardroom, outsiders are entitled to worry. The latest twist is a Reuters report that OpenAI researchers had warned the board about the dangers of the company’s newest AI model. The directors urgently owe everyone an explanation.

Source: theguardian.com