EIGHT MORE COMPANIES PLEDGE AI SAFETY

Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI have promised to actively promote AI systems that are secure against hackers and safe for users, that identify their output as the work of AI rather than of people, and that are continually vetted to weed out biases and other socially harmful elements.

Representatives of the companies appeared at a White House ceremony earlier this month to publicize their commitment to:

  • ensure their AI systems are safe for users before they’re released to the public (for example, not offering help with building homemade bombs or with suicide); 
  • ensure AI development processes are secure against attackers who could tamper with the way an AI responds to prompts;
  • use watermarks or other means to show when content has been created by AI rather than by humans, and help research social risks posed by AI, such as unconscious bias built into systems;
  • create AIs that can contribute to solutions to human problems, such as researching causes and cures for major diseases.

In July, Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI took the same pledge at a similar White House gathering.

The Biden administration developed the pledge in consultation with 20 other countries. The elements complement similar guidelines being formulated by the G7 group of nations.
