Hundreds of staff at OpenAI threatened to quit the leading artificial intelligence company on Monday and join Microsoft.
They would follow OpenAI co-founder Sam Altman, who said he was joining a new AI subsidiary at Microsoft following his shock sacking from the company, whose ChatGPT chatbot has led the rapid rise of artificial intelligence technology.
In a letter, some of OpenAI's most senior staff members threatened to leave the company if the board was not replaced.
"Your actions have made it obvious that you are incapable of overseeing OpenAI," said the letter, which was first released to Wired.
Among the signatories was Ilya Sutskever, the company's chief scientist and a member of the four-person board that voted to oust Altman.
It also included top executive Mira Murati, who was appointed to replace Altman as CEO when he was removed on Friday, but was herself demoted over the weekend.
"Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join," the letter said.
Reports said as many as 500 of OpenAI's 770 employees signed the letter.
OpenAI has appointed Emmett Shear, a former chief executive of Amazon's streaming platform Twitch, as its interim CEO despite pressure from Microsoft and other major investors to reinstate Altman.
The startup's board sacked Altman on Friday, with US media citing concerns that he was underestimating the dangers of its tech and leading the company away from its stated mission -- claims his successor has denied.
Microsoft CEO Satya Nadella wrote on X that Altman "will be joining Microsoft to lead a new advanced AI research team," along with OpenAI co-founder Greg Brockman and other colleagues.
Altman shot to fame with the launch of ChatGPT last year, which ignited a race to advance AI research and development and drew billions of dollars of investment into the sector.
His sacking triggered several other high-profile departures from the company, as well as a reported push by investors to bring him back.
"We are going to build something new & it will be incredible. The mission continues," Brockman said, tagging former director of research Jakub Pachocki, AI risk evaluation head Aleksander Madry, and longtime researcher Szymon Sidor.
But OpenAI stood by its decision in a memo sent to employees on Sunday night, saying "Sam's behavior and lack of transparency... undermined the board's ability to effectively supervise the company," The New York Times reported.
- 'Badly' handled sacking -
Shear confirmed his appointment as OpenAI's interim CEO in a post on X on Monday, while also denying reports that Altman had been fired over safety concerns regarding the use of AI technology.
"Today I got a call inviting me to consider a once-in-a-lifetime opportunity: to become the interim CEO of @OpenAI. After consulting with my family and reflecting on it for just a few hours, I accepted," he wrote.
"Before I took the job, I checked on the reasoning behind the change. The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that."
"It's clear that the process and communications around Sam's removal has been handled very badly, which has seriously damaged our trust," Shear added.
Global tech titan Microsoft has invested more than $10 billion in OpenAI and has rolled out the AI pioneer's tech in its own products.
Microsoft's Nadella added in his post that "we look forward to getting to know Emmett Shear and OAI's new leadership team and working with them."
"We remain committed to our partnership with OpenAI and have confidence in our product roadmap," he said.
OpenAI is in fierce competition to develop AI models with rivals including Google and Meta, as well as start-ups like Anthropic and Stability AI.
Generative AI platforms such as ChatGPT are trained on vast amounts of data to enable them to answer questions, even complex ones, in human-like language.
They are also used to generate and manipulate imagery.
But the tech has triggered warnings about the dangers of its misuse -- from blackmailing people with "deepfake" images to the spread of harmful disinformation.
J.Ayala--TFWP