A Vienna-based privacy advocacy organization, NOYB (None of Your Business), has filed a formal complaint in Austria against OpenAI over its artificial intelligence tool ChatGPT. The group asserts that ChatGPT frequently generates incorrect answers, a flaw the company itself admits it cannot rectify.
Concerns Over Inaccuracy and Data Transparency
NOYB has raised concerns that ChatGPT consistently “hallucinates,” producing responses that may not be grounded in fact. “ChatGPT keeps hallucinating – and not even OpenAI can stop it,” NOYB stated. This behavior is particularly problematic given that there is no mechanism in place to ensure the accuracy of the information provided by the program.
The campaign group criticized OpenAI for a lack of transparency about the sources of its training data and about what information ChatGPT retains on individuals. Under European Union data protection law, personal data must be accurate, a requirement NOYB argues ChatGPT cannot meet with its current capabilities.
Legal Challenges and Requests Denied
Maartje de Graaf, a data protection lawyer at NOYB, emphasized the legal implications of the technology’s limitations. “If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals,” de Graaf explained, adding, “The technology has to follow the legal requirements, not the other way around.”
NOYB also highlighted instances where ChatGPT produced incorrect personal data, such as a false birth date for NOYB founder Max Schrems. Despite the inaccuracies, OpenAI reportedly declined Schrems’s requests to correct or delete the erroneous data, claiming that doing so was not feasible.
Response from OpenAI
In response to these allegations, OpenAI has reiterated its commitment to data privacy. “We want our AI models to learn about the world, not individuals; we do not actively seek personal information to train our models, and we do not use publicly available information on the Internet to profile, advertise to, or target people, or to sell their data,” an OpenAI spokesperson told AFP.
Legal and Regulatory Landscape
The complaint by NOYB is part of broader scrutiny that AI technologies like ChatGPT are facing across Europe. Since the tool’s launch in November 2022, which was met with widespread enthusiasm for its capabilities, regulators in Italy and France have moved to assess and in some cases restrict its use over privacy concerns.
NOYB is urging Austria’s data protection authority to conduct a thorough investigation into OpenAI and enforce compliance with EU law through potential fines. This action reflects ongoing tensions between rapid technological advancements and existing legal frameworks designed to protect individual privacy.