
On Thursday, the Center for AI and Digital Policy (CAIDP), an advocacy nonprofit, filed a complaint with the Federal Trade Commission (FTC) targeting OpenAI. The complaint argues that the company's latest large language model, GPT-4, which can be used to power ChatGPT, is in violation of FTC rules against deception and unfairness. This comes on the heels of an open letter signed by major figures in AI, including Elon Musk, which called for a six-month pause on the training of systems more powerful than GPT-4.

The complaint asks the Commission "to initiate an investigation into OpenAI and find that the commercial release of GPT-4 violates Section 5 of the FTC Act." Section 5 prohibits unfair and deceptive trade practices, and the complaint argues that the FTC's own guidance on AI already outlines the "emerging norms for the governance of AI that the United States government has formally endorsed."

What's so scary about GPT-4, according to this complaint? It is allegedly "biased, deceptive, and a risk to privacy and public safety." The complaint also says that GPT-4 makes unproven claims and has not been sufficiently tested.

The CAIDP also points out — using quotes from past reports written by OpenAI itself — that OpenAI knows about the potential to bring about, or worsen, "disinformation and influence operations," and that the company has expressed concerns about "proliferation of conventional and unconventional weapons" thanks in part to AI. OpenAI has also, the complaint notes, warned the public that AI systems could "reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement."

The complaint also rips into OpenAI for apparently not conducting safety checks aimed at protecting children during GPT-4's testing period. And it quotes Ursula Pachl, Deputy Director of the European Consumer Organization (BEUC), who said, "public authorities must reassert control over [AI algorithms] if a company doesn't take remedial action."

By quoting Pachl, the CAIDP is clearly invoking — if not directly calling for — major government moves aimed at regulating AI. European regulators are already weighing a much more heavy-handed, rules-based approach to this technology. And all of this comes as companies race to make money in the generative AI space. Microsoft Bing's GPT-4-powered chatbot, for instance, is now generating ad revenue. Such companies are no doubt eagerly awaiting the FTC's response.