The Center for Artificial Intelligence and Digital Policy, a tech ethics advocacy group, has urged the U.S. Federal Trade Commission (FTC) to prevent OpenAI from launching new commercial releases of its Generative Pre-trained Transformer (GPT) AI program, specifically the recently unveiled GPT-4. The group’s complaint, made public on its website, describes GPT-4 as “biased, deceptive, and a risk to privacy and public safety.”
OpenAI’s GPT-4, which offers human-like conversational abilities and can compose songs and summarize lengthy documents, has both thrilled and alarmed users since its launch in March 2023. The advocacy group’s complaint follows an open letter signed by more than a thousand industry executives, artificial intelligence experts, and Elon Musk, calling for a six-month pause in developing AI systems more powerful than GPT-4, citing potential risks to society.
The Complaint’s Allegations
The Center for Artificial Intelligence and Digital Policy’s complaint alleges that GPT-4 fails to meet the FTC’s standards of transparency, explainability, fairness, and empirical soundness, resulting in a lack of accountability. The advocacy group is calling on the FTC to investigate OpenAI, prevent further commercial releases of GPT-4, and establish necessary measures to protect consumers and the commercial marketplace.
Marc Rotenberg, president of the Center for Artificial Intelligence and Digital Policy and a veteran privacy advocate, states that the FTC has a responsibility to investigate and prohibit unfair and deceptive trade practices, adding that “we believe that the FTC should look closely at OpenAI and GPT-4.”
Industry Concerns over AI Development
Concerns over the societal implications of increasingly advanced AI systems have led to growing calls for regulation and oversight. The open letter signed by industry executives, AI experts, and Elon Musk expressed fears that systems more powerful than GPT-4 could be used to spread misinformation or harm individuals and society as a whole.
OpenAI has previously pledged to develop AI in a responsible and transparent manner, having released the code for its earlier GPT-2 model to promote research and ethical use. However, the Center for Artificial Intelligence and Digital Policy’s complaint argues that GPT-4’s potential impact on privacy and safety necessitates further regulatory scrutiny.
OpenAI has yet to issue a public response to the Center for Artificial Intelligence and Digital Policy’s complaint. However, the company has previously stated that it is committed to transparency and responsible development and has established an external safety panel to assess potential risks.
The company’s website states that its AI programs are designed to promote “safe and beneficial outcomes for all” and that it aims to collaborate with other organizations to “create the best possible outcomes for society.” However, the recent complaints from industry experts and advocacy groups suggest that many believe further regulation and oversight are necessary to ensure the responsible development and use of AI technologies.
The Center for Artificial Intelligence and Digital Policy’s complaint to the FTC highlights growing concerns over the societal impact of increasingly advanced AI systems. As OpenAI and other tech companies continue to push the boundaries of what is possible with AI, it remains to be seen whether regulatory bodies and advocacy groups will be able to keep pace with the ethical and safety implications of these new technologies.