Artificial intelligence (AI) is making it significantly easier for malicious hackers to unmask anonymous social media accounts, a new study warns, posing a serious threat to online privacy. The researchers show that large language models (LLMs), the technology behind chatbots such as ChatGPT, can match anonymous accounts to their owners' real identities on other platforms using nothing more than the information they post online.
How AI De-Anonymises Users
AI researchers Simon Lermen and Daniel Paleka ran experiments in which they fed anonymous accounts to an AI system that then scraped whatever public data it could find about them. In one hypothetical example, a user mentioned struggling at school and walking their dog Biscuit through "Dolores park". The AI used those details to search other sites and confidently linked the anonymous account @anon_user42 to a known identity. The scenario was fictional, but it illustrates how AI can be exploited for privacy attacks.
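The study's actual pipeline is not reproduced here, but the attack pattern it describes (extract identifying details from an anonymous account, then ask a model how well they match candidate profiles elsewhere) can be sketched in a few lines of Python. Everything below is illustrative assumption rather than the researchers' code: the OpenAI client, the model name, the prompts, and the sample data are all invented for this sketch.

```python
# Illustrative sketch of the linkage attack described above; not the study's code.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment


def llm(prompt: str) -> str:
    """Send a single prompt to a chat model and return the text reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in choice; any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def extract_identifiers(posts: list[str]) -> str:
    """Step 1: pull potentially identifying details out of anonymous posts."""
    return llm(
        "List every detail in these posts that could help identify the author "
        "(places, pet names, schools, daily routines):\n\n" + "\n".join(posts)
    )


def score_candidate(identifiers: str, profile: str) -> str:
    """Step 2: ask the model how confidently the details match a known profile."""
    return llm(
        f"Details from an anonymous account:\n{identifiers}\n\n"
        f"Public profile of a candidate:\n{profile}\n\n"
        "Rate from 0-10 how likely these are the same person, "
        "with one sentence of reasoning."
    )


# Hypothetical usage, mirroring the article's fictional example.
anon_posts = [
    "Failed another maths test today, this school is killing me",
    "Morning walk with Biscuit through Dolores park",
]
details = extract_identifiers(anon_posts)
print(score_candidate(details, "J. Smith, San Francisco; posts photos of a dog named Biscuit"))
```

In a real attack the scoring step would be looped over many scraped candidate profiles; the study's warning is precisely that LLMs make that loop cheap enough to require little expertise.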
Risks and Implications
The study warns that governments could use AI to surveil dissidents and activists who post anonymously, while hackers could launch highly personalised scams. AI-powered surveillance is a rapidly evolving field that alarms computer scientists and privacy experts alike: LLMs can synthesise scattered information about an individual across the web, a task impractical for most people to do manually, lowering the expertise needed to mount sophisticated attacks.
Peter Bentley, a professor of computer science at UCL, expressed concerns about commercial uses of de-anonymising technology, noting that LLMs often make mistakes in linking accounts, potentially leading to false accusations. Prof Marc Juárez, a cybersecurity lecturer at the University of Edinburgh, added that LLMs can access public data beyond social media, such as hospital records and admissions data, which may not be adequately anonymised for the AI age.
Limitations and Countermeasures
AI is not infallible; it can only link accounts where users consistently share the same information across platforms, and sometimes there is insufficient data to draw conclusions. Prof Marti Hearst of UC Berkeley emphasised that the number of potential matches can be too large to narrow down effectively.
To mitigate the risks, Lermen recommends that platforms enforce rate limits on user data downloads, detect automated scraping, and restrict bulk exports; a minimal sketch of the first of those controls follows below. Individuals, meanwhile, are advised to be more careful about the information they share online.
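The recommendations are described only at a high level, so as one concrete illustration, here is a minimal per-client token-bucket rate limiter of the kind platforms commonly apply to data-access endpoints. The policy numbers and client ID are invented for the example.

```python
# Minimal sketch of a per-client rate limit; policy values are hypothetical.
import time
from collections import defaultdict


class TokenBucket:
    """Allows `rate` requests per second on average, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)  # remaining tokens per client
        self.last = defaultdict(time.monotonic)      # last-seen time per client

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to the time since this client's last request.
        elapsed = now - self.last[client_id]
        self.last[client_id] = now
        self.tokens[client_id] = min(
            self.capacity, self.tokens[client_id] + elapsed * self.rate
        )
        if self.tokens[client_id] >= 1.0:
            self.tokens[client_id] -= 1.0
            return True
        return False  # caller should throttle, e.g. respond HTTP 429


# Hypothetical policy: 2 profile fetches per second, bursts of up to 10.
limiter = TokenBucket(rate=2.0, capacity=10.0)
for i in range(15):
    if not limiter.allow("client-123"):
        print(f"request {i}: throttled")
```

A bucket like this barely affects a person browsing at human speed, but it makes the bulk scraping that an LLM linkage pipeline depends on slow and conspicuous, which is the point of Lermen's recommendation.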
The study calls for a fundamental reassessment of online privacy practices in the era of AI.


