Google, Microsoft, OpenAI, and Anthropic, an AI startup created by former OpenAI employees, are collaborating in an effort to promote responsible development and use of AI.
They have named their organization the Frontier Model Forum and say it will focus on “ensuring safe and responsible development of frontier AI models.”
The companies will use their resources to create and disseminate benchmarks, technical evaluations, and other tools by which AI’s social responsibility can be measured, the quartet said in a statement.
These will be compiled in a “public library of solutions to support industry best practices and standards.”
The forum has listed its objectives as:
- “advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety;
- identifying best practices for the responsible development and deployment of frontier models, and helping the public understand the nature, capabilities, limitations, and impact of the technology;
- collaborating with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks;
- supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.”
The group says it welcomes new members that share its goals.
The forum hopes to become a clearinghouse of ideas and proposals not only for developers, but also for those tasked with regulating AI, an effort already under way among governments in Europe, the G7 nations, the Organization for Economic Cooperation and Development, the U.S., and U.K.
In its first year, the forum will focus on three key areas, its announcement said: identifying best practices in safety and risk management, coordinating research into key AI areas and building the public library, and serving as a place where companies, governments, and organizations can discuss and cross-pollinate.
Next up: the forum will establish an advisory board, write its charter and bylaws, choose an executive board, and look for additional funding.
TRENDPOST: This kind of group is a necessity. Developers, users, and regulators need a place to talk to each other outside of legislatures’ hearing rooms. This forum can provide that neutral ground, and the heft of the four founding partners can create the gravity that will draw others in.
Its key weakness is that participation can only be voluntary. Rogue engineers and rogue nations can refuse to take part and ignore any guidelines that the forum or its members devise. The antidote, to the extent there can be one, will be the forum’s power to call attention to those delinquents and to coordinate some degree of pressure to bear against them.