OpenAI, the company that created ChatGPT and launched the new AI revolution, announced it has set up a team called Superalignment that’s dedicated to ensuring that AI systems don’t run amok and cause catastrophic consequences for humanity.
The term “superalignment” highlights the team’s goal of aligning superintelligent AIs with positive human values and goals.
Sam Altman, OpenAI’s CEO, testified to Congress earlier this year that AI needs some form of regulation.
This spring, hundreds of technocrats, including several AI pioneers, signed an open letter calling for a six-month suspension of AI development while humans ponder ways to keep the new technology within bounds.
More recently, Geoffrey Hinton—known as the “Godfather of AI”—publicly voiced his concern that AI could evolve to shed all human guidance and control.
OpenAI believes such a superintelligent AI could be created by 2030. Whether or not that forecast proves accurate, there is no coherent system or framework of rules for controlling or guiding even current versions of AI, much less new generations vastly more capable than today’s.
Superalignment’s purpose is to assemble a team of highly skilled machine-learning researchers who will create a “roughly human-level automated alignment researcher”—an AI that would carry out “safety inspections” of superintelligent AIs.
OpenAI admits that Superalignment’s task is daunting. Making it more so: governments around the world are instituting their own AI regulations, which could produce a patchwork of inconsistent controls aligned with nothing in particular.
TRENDPOST: A danger of OpenAI’s new effort is complacency: others in the field might decide that OpenAI has “got this” and no one else needs to think hard about it.
In reality, there need to be many such efforts around the world, communicating with one another and sharing results, so that a unified, coherent method evolves—one that keeps firm but flexible control over AI as it grows ever more intelligent and ever more capable.