Google wants journalists to have their own specialized AI assistant.
It’s called Genesis.
Racing to claim a larger share of the global AI market, Google has released a version of its Bard AI that can chat in 43 languages in addition to English.
What’s an aspiring AI company, one trying to go along to get along, to do when it finds its AI committing “dangerous” and “harmful” acts, as defined by government ideologues?
This past week, seven major AI-focused tech companies agreed to abide by and work with the Biden administration’s AI guidance policies.
The White House made the announcement in a 21 July “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI.”
At a 13 July conference, the UN’s Educational, Scientific and Cultural Organization (UNESCO) launched an effort to frame a set of ethical standards around “neurotech,” the use of computers to connect with and analyze the human mind.
Meta Platforms is preparing to release LLaMA, its no-charge large language model, to compete with ChatGPT and Google’s Bard.
Schools may have doubts about students using AI in their schoolwork, but teachers have made chatbots their new classroom utility players.
Google is testing Med-PaLM 2, a chatbot designed to answer common medical questions.
Major AIs, including ChatGPT, Google’s Bard, Stable Diffusion, the image creator DALL-E, and others, learned by ransacking the Internet and internalizing its contents. Now they use that content to create original essays, news stories, pictures, and other works.
That AI pause? If it’s happening at all (which is doubtful), it certainly doesn’t extend to the rapid development and deployment of AI in military applications.