The nonprofit Center for Artificial Intelligence and Digital Policy (CAIDP) has filed a complaint with the U.S. Federal Trade Commission (FTC), claiming OpenAI has violated the portion of the Federal Trade Commission Act banning deceptive and unfair practices.

The group cited the propensity of OpenAI’s GPT-4 chatbot to reinforce racial and ethnic stereotypes, use offensive language, and disparage some minority groups. It called GPT-4 “biased, deceptive, and a risk to privacy and public safety.”

OpenAI has acknowledged that the chatbot “has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.” The company is working to end those lapses, it said.

Earlier this month, the FTC admonished U.S. AI developers and issued guidance.

“Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors,” the guidelines pointed out. “Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.”

“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices,” CAIDP president Marc Rotenberg said in a statement announcing the complaint. “We believe that the FTC should look closely at OpenAI and GPT-4. 

“We are specifically asking the FTC to determine whether the company has complied with the guidance the federal agency has issued.”

More broadly, “AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement,” the CAIDP claimed.

The think tank also notes that OpenAI’s chatbot can call up advice or encouragement for self-harm, descriptions of graphic violence, and instructions for carrying out illegal activities, among other dangers.

OpenAI released its AI system into the wild without commissioning an independent analysis of its dangers and risks to the public, CAIDP added.

The private group also urged the FTC to halt any new commercial releases of OpenAI’s GPT chatbots until the company complies with CAIDP’s understanding of the guidelines, and to encode those guidelines into regulations that will protect private and commercial users.

TRENDPOST: Even if it has the power to halt further releases of ChatGPT, the FTC is unlikely to do so and the CAIDP knows that.

AI developers have openly acknowledged that their creations can blurt offensive language and pass along illegal or dangerous information, so a claim that developers are guilty of deception doesn’t stand up.

The private group is making the complaint to goose federal regulators into speeding the creation of stricter rules governing AI’s training and to remind the public that chatbots lack certain inhibitions.

AI developers already are at work on solving these embarrassing and potentially dangerous lapses; they know the legal and public-relations quagmires that await if they don’t. Bans and new regulations aren’t likely to make a difference.