
What the Numbers Show About AI’s Harms

by wellnessfitpro

With the widespread adoption of artificial intelligence around the world over the past year, the technology’s potential to cause harm has become clearer. Reports of AI-related incidents rose 50% year-over-year from 2022 to 2024, and in the 10 months to October 2025, incidents had already surpassed the 2024 total, according to the AI Incident Database, a crowd-sourced repository of media reports on AI mishaps. Incidents arising from use of the technology, such as deepfake-enabled scams and chatbot-induced delusions, have been rising steadily, according to the latest data. “AI is already causing real-world harm,” says Daniel Atherton, an editor at the AI Incident Database. “Without tracking failures, we can’t fix them,” he adds.

The AI Incident Database compiles data by collecting news coverage of AI-related events and consolidating multiple reports about the same event into a single incident entry. Crowd-sourced data has limitations, and the rise in AI incidents is, in part, a reflection of increased media scrutiny of the technology, Atherton says. Still, he maintains that news coverage remains, for now, one of the best public sources of information on AI’s harms. Only a subset of real-world incidents is covered by journalists, and not all of those are submitted to the AI Incident Database, he adds. “All the reporting that has happened globally is a fraction of the lived realities of everybody experiencing AI harms,” Atherton says. While the E.U. AI Act and California’s Transparency in Frontier AI Act (SB 53) require developers to report certain incidents to authorities, only the most serious or safety-critical ones meet the reporting threshold.
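
As a rough illustration of how that consolidation works, the minimal Python sketch below groups several report records that share an event key into one incident entry. The field names (Report, Incident, event_key) and the matching rule are assumptions made for illustration, not the database’s actual schema or process.

from dataclasses import dataclass, field

@dataclass
class Report:
    url: str
    outlet: str
    date: str        # publication date, e.g. "2025-10-02"
    event_key: str   # identifier a reviewer assigns to the underlying event

@dataclass
class Incident:
    event_key: str
    reports: list = field(default_factory=list)

def consolidate(reports):
    # Merge all reports that share an event key into a single incident entry.
    incidents = {}
    for r in reports:
        incidents.setdefault(r.event_key, Incident(r.event_key)).reports.append(r)
    return list(incidents.values())

reports = [
    Report("https://example.com/a", "Outlet A", "2025-10-02", "chatbot-scam-2025-10-01"),
    Report("https://example.com/b", "Outlet B", "2025-10-03", "chatbot-scam-2025-10-01"),
]
print(len(consolidate(reports)))  # 1 incident, backed by 2 reports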

Breaking it down

Artificial intelligence is an umbrella term for several different technologies, from autonomous vehicles to chatbots, and the database lumps these together without a comprehensive structure. “That makes it very, very difficult to see patterns over whole datasets to understand trends,” says Simon Mylius, an affiliate researcher at MIT FutureTech. In January, Mylius and colleagues released a tool that enhances the AI Incident Database by using a language model to parse the news reports associated with each incident and classify each one by type of harm and severity.
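
To make that classification step concrete, the sketch below shows one generic way a language model could be asked to label a single report by harm type and severity. This is not the researchers’ actual pipeline: the model name, prompt, and category list are placeholders, and the example assumes the OpenAI Python client with an API key set in the environment.

from openai import OpenAI

HARM_TYPES = ["misinformation", "discrimination", "malicious use",
              "human-computer interaction", "privacy", "physical safety"]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_report(article_text: str) -> str:
    # Ask the model for a single harm type and a 1-5 severity rating.
    prompt = (
        "Classify the AI incident described in the news excerpt below.\n"
        f"Pick one harm type from {HARM_TYPES} and a severity from 1 (minor) to 5 (severe).\n"
        "Answer in the form: <harm type>, <severity>\n\n"
        f"{article_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: classify_report("A voice-cloning scam tricked a retiree into wiring $20,000.")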

While the AI-driven approach has yet to be fully validated, the researchers hope the tool can help policymakers sort large numbers of reports and spot trends. Recognizing the “noise” inherent in media reports, Mylius’s team is working on a framework that borrows disease-surveillance techniques to help interpret the data, he says. The hope is that better incident tracking and analysis could help regulators avoid the missteps seen with social media and respond quickly to emerging harms.

Sorting incidents with the AI tool according to an established taxonomy of AI risks reveals that the upward trend has not occurred equally across all domains. While reports of AI-generated misinformation and discrimination decreased in 2025, so-called ‘computer-human interaction’ incidents, which include those involving ChatGPT psychosis, have risen. Reports of malicious actors using AI, particularly to scam victims or spread disinformation, have grown the most, rising 8-fold since 2022.
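
Once each incident carries a harm-domain label and a year, a domain-level trend like the one described above reduces to a simple aggregation. The sketch below uses pandas with invented toy rows purely to show the shape of the calculation; the figures are not the database’s real numbers.

import pandas as pd

# Toy data: one row per labeled incident (values invented for illustration).
incidents = pd.DataFrame({
    "year":   [2022, 2022, 2023, 2024, 2025, 2025, 2025],
    "domain": ["malicious use", "misinformation", "malicious use",
               "malicious use", "malicious use",
               "human-computer interaction", "misinformation"],
})

# Count incidents per domain per year; comparing a domain's 2025 count to its
# 2022 count gives the kind of "N-fold since 2022" figure cited above.
counts = incidents.groupby(["year", "domain"]).size().unstack(fill_value=0)
print(counts)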

Before 2023, autonomous vehicles, facial recognition, and content moderation algorithms were among the most frequently cited systems. Since then, incidents linked to deepfake video have outnumbered all three combined. That doesn’t include deepfakes produced since late December, when an update to xAI’s Grok allowed for rampant use of the model to sexualize images of real women and minors. By one estimate, Grok was producing 6,700 sexualized images per hour, prompting the governments of Malaysia and Indonesia to block the chatbot. The U.K.’s media watchdog has launched an investigation, while the British Technology Secretary said the country plans to bring into force a law that criminalizes the creation of non-consensual sexualized images, including through Grok. In response to the uproar, xAI has limited Grok’s image-generation tools to paying subscribers and has said editing images of real people in “revealing clothing” is now blocked.

The increase in deepfake incidents has coincided with rapid improvements in their quality and accessibility. The shift reveals that while some AI incidents stem from system limitations, such as an autonomous vehicle failing to detect a cyclist, others are driven by technical advances. As AI’s progress continues, particularly in sensitive domains like coding, new harms may emerge. In November, AI company Anthropic revealed it had intercepted a large-scale cyberattack that used its Claude Code assistant. The company has said we’ve reached an “inflection point” at which AI can be put to use in cybersecurity for both good and ill. “I think we’re going to see lots more cyber attacks that result in aggregated, significant financial loss in the very near future,” Mylius says.

Given their market dominance, it’s unsurprising that major AI companies are the ones most frequently identified in incident reports, but more than a third of incidents since 2023 have involved an unknown AI developer. “When scams circulate on platforms like Facebook or Instagram, Meta gets implicated,” Atherton says, “but what isn’t simultaneously getting reported is what tools were used to create the scam.” In 2024, Reuters reported that Meta had projected 10% of its revenue would come from ads for scams and banned goods. Meta responded that the figure was “rough and overly-inclusive,” produced as part of an assessment aimed at tackling fraud and scams, and that the documents “present a selective view that distorts Meta’s approach.”

Efforts to improve accountability already have buy-in from major AI companies. Content Credentials, a system of watermarks and metadata designed to ensure authenticity and flag AI-generated content, is backed by Google, Microsoft, OpenAI, Meta, and ElevenLabs. The latter also offers a tool that it says can detect whether an audio sample was generated using its technology. Yet popular image generator Midjourney is not currently a supporter of the emerging standard.
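
For a sense of what Content Credentials look like in practice, the sketch below is a deliberately crude heuristic that checks whether an image file appears to carry a C2PA manifest by scanning for its “c2pa” label. Actually verifying the signed manifest requires a full C2PA implementation or tool; this simplification only flags possible presence and is offered as an assumption-laden illustration.

from pathlib import Path

def may_carry_content_credentials(path: str) -> bool:
    # Crude heuristic: C2PA manifest stores are labeled "c2pa" inside the
    # file's embedded metadata, so finding that byte string hints that
    # Content Credentials may be present. This is not a verification.
    data = Path(path).read_bytes()
    return b"c2pa" in data

# Example: may_carry_content_credentials("photo.jpg")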

While staying alert to new risks is crucial, it’s important not to allow present harms to become “part of the background noise,” says Atherton. Mylius agrees, noting that while certain harms emerge in sudden crises, others are more gradual. “Societal issues, privacy issues, erosion of rights, disinformation and misinformation [are] less obvious when an individual incident happens, but they add up to quite significant harms overall,” Mylius says.
