AI Harms

Many concerns have been raised about the harm that AI could do—or already does—to individuals, communities, or the human race as a whole.

Might a sentient, malevolent AI turn against its creators and destroy humanity? This is a potential harm that AI developers, in particular, seem keen to consider. But bear in mind that if developers have an incentive to hype all the good things AI might do for us, they also have an incentive to hype scenarios of AI apocalypse. In both bright and dark visions of an AI future, AI is depicted as extremely powerful, as are its creators.

Meanwhile, focusing too much on AI doom scenarios may distract us from the significant, if more prosaic, harms that AI is causing right now. It’s important to consider that

  • AI tools replicate and thereby amplify the biases and misinformation in the data they’re trained on, which can lead to discrimination when automated systems are used to make decisions about, for example, hiring, lending, or setting bail.
  • AI tools, which can’t distinguish fact from falsehood, frequently “invent” facts and churn out citations to nonexistent sources.
  • Much of the content used to train AI tools is copyrighted, raising questions about the legality of this training.
  • AI “slop” on social media and elsewhere makes it more difficult for users to access the content that truly interests them.
  • AI tools have made it easier than ever to create and circulate misinformation and disinformation, polluting civic discourse and distorting the political process.
  • AI tools for generating image content have led to new forms of sexual exploitation and harassment.
  • AI tools have complicated the task of educators by making it harder to detect cheating.
  • The data centers that make cloud-based AI tools available at scale consume large quantities of energy and water, with potentially significant adverse effects on the environment and on the communities where these centers are located.
  • To reduce the amount of toxic (e.g., racist, misogynistic, homophobic, transphobic, antisemitic, violent) content produced by the major AI tools available to the public, the companies that create them rely heavily on human labor that is typically poorly compensated.

As part of its AI Risk initiative, the Massachusetts Institute of Technology (MIT) maintains an incident tracker that classifies incidents of AI harm by type and severity. At Georgetown University, the Center for Security and Emerging Technology has developed a framework for understanding AI harms that distinguishes between “tangible” and “intangible” harms and breaks them down into categories such as “Physical Health/Safety,” “Infrastructure Damage,” “Property Damage,” “Financial Loss,” “Environmental Damage,” “Detrimental Content,” “Human/Civil Rights,” “Democratic Norms,” and “Privacy,” while allowing for the future addition of categories and harm types.

MIT has also done a deep dive into the energy footprint of AI and packaged an overview of its findings into this brief video:

Although, as we saw earlier, automation is the common feature among the varied technologies marketed as “AI,” these technologies often require human intervention. The “humans in the loop” hired to review the output of AI models and weed out harmful content are often poorly paid, and the work they do can cause psychological harm, as described in this brief segment from the news show 60 Minutes.

Finally, it remains to be seen how the massive speculative investment in AI companies may affect the economy overall. Investors are betting that consumers and businesses will be willing to spend heavily on these companies’ products in pursuit of convenience or efficiency; if those bets are mistaken, the “AI bubble” could burst, erasing tens of trillions of dollars from the global economy. One concern that economists and business journalists point to is the amount of “circular financing” in AI ventures. For example, an October 2025 article in Forbes noted that the chipmaker Nvidia “invested $100 billion in OpenAI which the ChatGPT provider will use to buy Nvidia chips.” The prospect of economic calamity stemming from AI investment has also been covered in the Washington Post, the Wall Street Journal, the New York Times, and NPR.

