Just before Sam Altman was fired as OpenAI’s CEO on 17 November, key researchers in the company wrote a letter to the board of directors.
So now we learn that the chaos at OpenAI might just have something to do with a “creature,” and not a “tool.”
Sam Altman, the most prominent face of the current AI revolution, was booted from his job as CEO of OpenAI, the maker of ChatGPT, the bot that launched the new AI era.
OpenAI, creator of the ubiquitous ChatGPT, will soon open an online store where shoppers can browse chatbots customized for various purposes, such as tutoring children in reading and other academic subjects or helping entrepreneurs build a website.
China’s Baidu, which ranks among the world’s top tech firms and operates the country’s most popular search engine, has launched an AI chatbot that it says will compete with OpenAI’s iconic ChatGPT.
Google has now joined Microsoft and Adobe in guaranteeing users of its AI platforms and services that it will assume liability for any claims of intellectual property (IP) infringement.
Images of child sexual abuse are already rife on the dark web, and the problem could explode now that AI has the power to create deepfake images, the U.K.’s Internet Watch Foundation has warned.
In developing their artificial intelligence systems, Google, OpenAI, and other creators installed so-called guardrails meant to keep their AIs from disseminating hate speech or disinformation and from engaging in similarly unacceptable behavior.
The Allen Institute for AI is building its own AI as an alternative to that of OpenAI or Google.
By 2030, “AI will do 80 percent of 80 percent of all jobs we know of today,” renowned venture capitalist Vinod Khosla predicted last week at the Tech Live conference, sponsored by The Wall Street Journal.