December 1, 2024


ChatGPT's “hallucinations” problem draws another privacy complaint in the EU


Image credits: Olivier Daulieri/AFP/Getty Images

OpenAI faces another privacy complaint in the European Union. This one, filed by the privacy rights non-profit noyb on behalf of an individual complainant, takes aim at the inability of the AI-powered chatbot ChatGPT to correct misinformation it generates about individuals.

The tendency of GenAI tools to produce false information has been well documented. But it also puts the technology on a collision course with the bloc's General Data Protection Regulation (GDPR) — which governs how regional users' personal data is processed.

Penalties for failures to comply with the GDPR can reach up to 4% of total global annual turnover. More importantly for a resource-rich giant like OpenAI, data protection regulators can order changes to how information is processed, so GDPR enforcement could reshape how generative AI tools are able to operate in the EU.

OpenAI has already been forced to make some changes after an early intervention by the Italian Data Protection Authority, which briefly forced ChatGPT offline in the country in 2023.

Now noyb has filed its latest GDPR complaint against ChatGPT with the Austrian Data Protection Authority on behalf of an unnamed complainant (described as a “public figure”) who found that the AI-powered chatbot had produced an incorrect date of birth for them.

Under the General Data Protection Regulation, people in the EU have a range of rights over information about them, including the right to have inaccurate data corrected. noyb asserts that OpenAI is failing to comply with this obligation in respect of its chatbot's output. It said the company rejected the complainant's request to rectify the incorrect date of birth, responding that it was technically impossible to correct it.


Instead, it offered to filter or block the data on certain prompts, such as the complainant's name.

OpenAI's privacy policy states that users who notice the AI chatbot has generated “inaccurate information about you” can submit a “correction request” through Privacy.openai.com or via email to [email protected]. However, it caveats this with the line: “Due to the technical complexity of how our models work, we may not be able to correct inaccuracies in every case.”

In that case, OpenAI suggests users request that their personal information be removed from ChatGPT's output entirely, by filling out a web form.

The problem for the AI giant is that GDPR rights are not à la carte. People in Europe have the right to request correction. They also have the right to request that their data be deleted. But, as noyb points out, it is not for OpenAI to choose which of these rights is available.

Other elements of the complaint focus on GDPR transparency concerns, with noyb alleging that OpenAI cannot say where the data it generates about individuals comes from, nor what data the chatbot stores about people.

This is important because, once again, the regulation gives individuals the right to request such information by submitting a so-called Subject Access Request (SAR). According to noyb, OpenAI did not respond adequately to the complainant's SAR, failing to disclose any information about the data processed, its sources, or its recipients.

Commenting on the complaint in a statement, Maartje de Graaf, data protection lawyer at noyb, said: “Making up false information is a big problem in itself. But when it comes to false information about individuals, there can be serious consequences. It is clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

noyb said it is asking the Austrian DPA to investigate the complaint about OpenAI's data processing, as well as urging it to impose a fine to ensure future compliance. But it added that the case is “likely” to be dealt with through EU cooperation.


OpenAI faces a very similar complaint in Poland. Last September, the local data protection authority opened an investigation into ChatGPT following a complaint by a privacy and security researcher who also found that OpenAI was unable to correct inaccurate information it had generated about them. That complaint also accuses the AI giant of failing to comply with the regulation's transparency requirements.

Meanwhile, the Italian Data Protection Authority still has an open investigation into ChatGPT. In January, it issued a draft decision, saying at the time that it believed OpenAI had violated the GDPR in several ways, including in relation to the chatbot's tendency to produce misleading information about people. The findings also relate to other substantive issues, such as the lawfulness of processing.

The Italian authority gave OpenAI a month to respond to its findings. A final decision remains pending.

Now, with another GDPR complaint filed over its chatbot, the risk of OpenAI facing a string of GDPR enforcement actions across different member states has increased.

Last fall, the company opened a regional office in Dublin, in a move that appears aimed at shrinking its regulatory risk by having privacy complaints routed through the Irish Data Protection Commission, thanks to a mechanism in the GDPR intended to streamline cross-border oversight of complaints by directing them to a single member state authority where the company has its “main establishment”.