The nonprofit research group Center for AI and Digital Policy (CAIDP) has filed a complaint with the Federal Trade Commission (FTC) requesting an investigation into OpenAI and the suspension of its commercial deployment of large language models, including ChatGPT, its latest iteration. The complaint accuses OpenAI of violating Section 5 of the FTC Act, which prohibits unfair and deceptive business practices, as well as the agency’s guidance for AI products.
CAIDP has stated that GPT-4 is “biased, deceptive, and a risk to privacy and public safety” and that the large language model fails to meet the agency’s standards for AI to be “transparent, explainable, fair, and empirically sound while fostering accountability.” CAIDP has called for the FTC to establish a way to independently assess GPT products before they are deployed in the future. It also wants the FTC to create a public incident reporting system for GPT-4 similar to its systems for reporting consumer fraud. Additionally, CAIDP wants the agency to take on a rulemaking initiative to create standards for generative AI products.
Marc Rotenberg, president of CAIDP, also signed a widely circulated open letter, released on Wednesday, that calls for a pause of at least six months on “the training of AI systems more powerful than GPT-4.” Tesla CEO Elon Musk, who co-founded OpenAI, and Apple co-founder Steve Wozniak were among the other signatories.
OpenAI did not immediately respond to a request for comment, and the FTC declined to comment.
Large language models like GPT-4 have drawn concern for some time, with critics highlighting the potential for bias and misinformation in AI-generated text. GPT-4 is a particularly powerful language model, capable of generating text that is virtually indistinguishable from text written by a human, which has raised concerns that it could be used to spread false information or produce malicious content.
The call for an investigation into OpenAI and the suspension of its commercial deployment of large language models is just one of several recent developments related to the use of AI in society. As AI continues to become more prevalent in various industries and applications, policymakers, researchers, and the public are grappling with the ethical and social implications of this technology.
Some researchers and policymakers have called for greater regulation of AI to ensure that it is developed and deployed responsibly and ethically. Others have raised concerns about the potential for AI to exacerbate existing inequalities and biases or to be used to violate privacy or civil rights.
There is also a growing effort to develop AI that is more transparent and explainable. Many current AI systems are considered “black boxes,” meaning it can be difficult to understand how they arrive at their decisions or recommendations. This lack of transparency makes it challenging to identify and correct errors or biases in those systems.
Overall, the complaint filed by CAIDP is just one part of a broader conversation about the responsible development and deployment of AI. As this technology continues to evolve and become more pervasive in society, debates and discussions about its impact will likely continue.