OpenAI, arguably the most popular artificial intelligence (AI) developer, is embroiled in a privacy dispute in Austria after the European digital rights advocacy group NOYB filed a privacy complaint against the firm on Monday.
According to NOYB, OpenAI has refused to address misinformation, and sometimes entirely false information, produced by its generative AI chatbot ChatGPT. The Vienna-based non-profit says this inaction could breach privacy rules in the European Union (EU).
In the complaint, the group cites the example of an unnamed public figure who asked OpenAI’s chatbot for information about himself and was repeatedly given inaccurate answers.
The group also claims the figure asked OpenAI to correct or erase the data, a request the company did not grant. OpenAI even allegedly refused to disclose information about its training data and sources, which is at the heart of the group’s grievance.
Maartje de Graaf, a lawyer with the group, said in a statement on the case:
“If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”
NOYB Requests OpenAI Investigation amid Growing Concerns Over Chatbots
NOYB has wasted no time in taking its complaint to the Austrian data protection authority. As part of the complaint, the group requests an investigation into OpenAI’s data processing and the measures it takes to ensure the accuracy of the personal data its systems handle.
The latest complaint is in line with NOYB’s commitment to ensuring that firms comply with the EU’s General Data Protection Regulation (GDPR). It also reflects a growing trend of activists and researchers calling out chatbots over privacy concerns.
That was the case in December 2023, when a study by two European NGOs revealed a similar problem with Microsoft’s Bing AI chatbot, now Copilot. The study found that the chatbot was providing misleading information about elections in Germany and Switzerland.
The errors the study found were not limited to candidates’ personal data. The chatbot also gave inaccurate information on polls, scandals, and voting, while also misquoting its sources.
In another instance, Google’s Gemini AI chatbot recently drew criticism after its image generator produced inaccurate imagery. Google has since apologized and promised to update its model. It remains to be seen how other chatbot makers respond to these growing concerns and potential violations of the law.