The former OpenAI board member Helen Toner has shared explosive new details about what led to CEO Sam Altman's brief ousting in November. In an interview with Bilawal Sidhu on "The TED AI Show" that aired Tuesday, Toner said Altman lied to the board "multiple" times.
One example Toner cited was that OpenAI's board learned about the release of ChatGPT on Twitter. She said Altman was "withholding information" and "misrepresenting things that were happening in the company" for years.
Toner — one of the board members who voted to kick Altman out — alleged that Altman also lied to the board by keeping them in the dark about the company's ownership structure. "Sam didn't inform the board that he owned the OpenAI startup fund, even though he constantly was claiming to be an independent board member with no financial interest in the company," she said.
Toner, who's a director of strategy at the Center for Security and Emerging Technology at Georgetown, alleged that the OpenAI chief also gave board members "inaccurate information about the small number of formal safety processes" OpenAI had in place. She said that made it "basically impossible" for the board to understand whether the safety measures were sufficient or whether any changes were needed.
She said that there were other individual examples but that ultimately, "we just couldn't believe things that Sam was telling us, and that's a completely unworkable place to be in as a board." Toner added that it was "totally impossible" for the board to trust Altman's word. The role of the board, she said, was to have independent oversight of OpenAI and "not just helping the CEO to raise more money."
But then, last October, the board had several conversations in which two executives detailed their own experiences with Altman and used the phrase "psychological abuse," Toner said. She said the executives told the board that they "didn't think he was the right person to lead the company to AGI" and that "they had no belief that he could or would change, no point in giving him feedback, no point in trying to work through these issues."
By the time the board realized Altman needed replacing, Toner said, it was clear that Altman would "pull out all the stops" to block the board from going against him if he found out. She added that he "started lying to other board members in order to try and push me off the board."
"We were very careful, very deliberate about who we told, which was essentially almost no one in advance, other than obviously our legal team, and so that's kind of what took us to to November 17," she said.
Seriously sounds like a narcissist who is all in it for the money. This is the kind of collaborator who ends up building Skynet.