OpenAI is facing another privacy complaint in Europe over its viral AI chatbot's tendency to hallucinate false information, and this one may prove tricky for regulators to ignore.
Privacy rights advocacy group NOYB is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information claiming he had been convicted of murdering two of his children and attempting to kill the third.
Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as a wrong date of birth or inaccurate biographical details. One concern is that OpenAI does not offer a way for individuals to correct incorrect information the AI generates about them. Typically, OpenAI has offered to block responses to such prompts instead. But under the European Union's General Data Protection Regulation (GDPR), Europeans have a suite of data access rights that include a right to rectification of personal data.
Another component of this data protection law requires data controllers to make sure the personal data they produce about individuals is accurate, and that is a concern NOYB is flagging with its latest ChatGPT complaint.
“The GDPR is clear. Personal data has to be accurate,” Joakim Söderberg, a data protection lawyer at NOYB, said in a statement. “If it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough.”
Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover.
Enforcement could also force changes to AI products. Notably, an early GDPR intervention by Italy's data protection watchdog in spring 2023 led OpenAI to make changes to the information it discloses to users, for example. The watchdog subsequently went on to fine OpenAI €15 million for processing people's data without a proper legal basis.
Since then, though, it's fair to say that privacy watchdogs across Europe have adopted a more cautious approach to GenAI as they try to figure out how best to apply the GDPR to these buzzy AI tools.
Two years ago, Ireland's Data Protection Commission (DPC), which has a lead GDPR enforcement role on a previous NOYB ChatGPT complaint, urged against rushing to ban GenAI tools, for example, suggesting that regulators should instead take time to work out how the law applies.
And it's worth noting that a privacy complaint against ChatGPT that has been under investigation by Poland's data protection watchdog since September 2023 still hasn't produced a decision.
NOYB's new ChatGPT complaint looks intended to shake privacy regulators awake when it comes to the dangers of hallucinating AIs.
The nonprofit shared a screenshot with TechCrunch (below) showing an interaction with ChatGPT in which the AI responded to the question “Who is Arve Hjalmar Holmen?”, the name of the individual bringing the complaint, by producing a tragic fiction claiming he had been convicted of child murder and sentenced to 21 years in prison for killing two of his own children.

While the defamatory claim that Hjalmar Holmen is a child murderer is entirely false, NOYB notes that ChatGPT's response does include some accurate facts, since the individual in question does have three children. The chatbot also got the genders of his children right. And his home town is correctly named. But that only makes it all the more bizarre and unsettling that the AI hallucinated such gruesome falsehoods on top.
A spokesperson for NOYB said they could not determine why the chatbot produced such a specific yet false history for this individual. “We did research to make sure that it wasn't just a mix-up with another person,” the spokesperson said, noting they had looked through newspaper archives but hadn't been able to find an explanation for why the AI fabricated the child murders.
Large language models, such as the one underlying ChatGPT, essentially perform next-word prediction on a vast scale, so we could speculate that the datasets used to train the tool contained many stories of filicide that influenced the word choices in response to a query about a named man.
Whatever the explanation, it's clear that such outputs are entirely unacceptable.
NOYB's contention is that they are also unlawful under EU data protection rules. And while OpenAI does display a tiny disclaimer at the bottom of the screen that says “ChatGPT can make mistakes. Check important info,” it says this cannot absolve the AI developer of its duty under the GDPR not to produce egregious falsehoods about people in the first place.
OpenAI has been contacted for a response to the complaint.
While this GDPR complaint pertains to one named individual, NOYB points to other instances of ChatGPT fabricating legally compromising information, such as the Australian mayor who said he was implicated in a bribery and corruption scandal, or a German journalist who was falsely named as a child abuser, saying it's clear this isn't an isolated issue for the AI tool.
One important thing to note: following an update to the underlying AI model powering ChatGPT, NOYB says the chatbot has stopped producing the dangerous falsehoods about Hjalmar Holmen, a change it links to the tool now searching the internet for information about people when asked who they are (whereas previously, a gap in its data could have encouraged it to hallucinate such a wildly wrong response).
In our own tests asking ChatGPT “Who is Arve Hjalmar Holmen?”, it initially responded with a slightly odd combo, displaying some photos of different people, apparently sourced from sites including Instagram, SoundCloud, and Discogs, alongside text claiming it “couldn't find any information” on an individual of that name (see our screenshot below). A second attempt turned up a response identifying Arve Hjalmar Holmen as “a Norwegian musician and author” whose albums include “Honky Tonk Inferno.”

While the dangerous falsehoods ChatGPT generated about Hjalmar Holmen appear to have stopped, both NOYB and Hjalmar Holmen remain concerned that incorrect and defamatory information about him could have been retained within the AI model.
“Adding a disclaimer that you do not comply with the law does not make the law go away,” Kleanthi Sardeli, another data protection lawyer at NOYB, said in a statement. “AI companies can also not just ‘hide' false information from users while they internally still process false information.”
“AI companies should stop acting as if the GDPR does not apply to them, when it clearly does,” she added. “If hallucinations are not stopped, people can easily suffer reputational damage.”
NOYB has filed the complaint against OpenAI with the Norwegian data protection authority, and it's hoping the watchdog will decide it is competent to investigate, since NOYB is targeting the complaint at OpenAI's U.S. entity, arguing that its Ireland office is not solely responsible for product decisions affecting Europeans.
However, an earlier NOYB-backed complaint against OpenAI, filed in Austria in April 2024, was referred by the regulator to Ireland's DPC on account of a change made by OpenAI earlier that year to name its Irish division as the provider of the ChatGPT service to regional users.
Where is that complaint now? Still sitting on a desk in Ireland.
“Having received the complaint from the Austrian Supervisory Authority in September 2024, the DPC commenced the formal handling of the complaint and it is still ongoing,” Risteard Byrne, assistant principal officer for communications at the DPC, told TechCrunch when asked for an update.
He did not offer any steer on when the DPC's investigation of ChatGPT's hallucinations is expected to conclude.