Governments around the world are debating how to guide AI. China and the European Union have laid out sets of rules. In the U.S., the Biden administration has issued an executive order. Earlier this month, Britain hosted an AI “safety summit” to initiate discussion of what to do.
Now the UN has decided to add its voice to the conversation.
Secretary-General António Guterres has formed a “High-level Advisory Body on AI” made up of 38 government, private-sector, and civil society experts from around the world who will “undertake analysis and advance recommendations for the international governance of AI.”
The group’s members will serve as liaisons between the UN’s effort and those in their home countries.
Guterres’s goal, in part, is to create a clearinghouse and coordinator so that AI developers don’t face a patchwork of rules and regulations thrown up by an assortment of countries, but instead work within a single, globally agreed fabric of rules and guidelines for AI’s development, distribution, and applications.
The panel also has been tasked with recommending ways in which AI can be used to advance the UN’s sustainability goals and to benefit poor nations especially.
The group will make initial recommendations by the end of this year and deliver its final suggestions for AI guidelines next summer, Guterres’s office said.
TRENDPOST: The UN has come late to the discussion of AI’s governance and is likely to play a lagging role in developing rules and controls.
The UN is a place where nations come to publicly disagree. If nations are expected to sign on to a shared slate of rules for AI, producing such a list will take months, if not years, and even then several countries will refuse to accept it.
A broad pattern of AI guidelines is already emerging among nations: AI shouldn’t tell people how to make bioweapons, shouldn’t help mentally unstable people kill themselves, and so on.
However, the details will continue to vary from country to country, just as Internet controls differ widely between the U.S. and China.
Rather than being governed by an enforceable worldwide protocol, AI is more likely to be controlled by country-specific variations on those broad guidelines. That creates entrepreneurial opportunities within each country to build and customize nation-specific AIs.