On the Ordinary Pitfalls of AI
February 28, 2026

For all the talk of Artificial Intelligence one day becoming super-intelligent and ending humanity, a more ordinary, and more pernicious, dynamic is very much here. Already, AI is driving a surge in carbon emissions and water consumption to power and operate massive data centers; overwhelming the internet with slop, disinformation, and misinformation; beginning to replace jobs; and concentrating ever more power in the hands of a few tech billionaires who are using their wealth to ensure that electoral and policy outcomes serve their bottom lines. To me, an early adopter of technology, AI seems like a very powerful tool, provided it is treated as a tool with a specific set of use cases: you don't use a hammer to write an essay, and you shouldn't trust AI companies with your personal data or rely on Large Language Models to the exclusion of your own critical thinking.

Two recent incidents have highlighted how crucial it is to understand what AI can and cannot do, and the many harms it can cause. In the first, I had spent about an hour researching an essay, mostly looking up data points to back up my arguments. All of a sudden, I realized that in that hour I had read nothing but the AI-generated responses to my search queries, not the underlying studies or articles. It struck me that I was about to publish a bold series of claims backed by data that could have been hallucinated or misinterpreted by the AI tool, both of which are well-known problems with these AI summaries. What's more, because I had not read beyond the summaries, I was missing all manner of nuance, detail, and alternative perspectives lost in the consensus-based overviews I was being presented.

Three years ago, this would have been inconceivable: such summaries did not exist, and while students often had a tendency to conduct "research" by reading Wikipedia, more work had to be done. Now, all you have to do is type any question that comes to mind and read, in natural language, a confident and convincing answer. The temptation is to do what I did: take the summary as true, appreciate the time saved, and move on. Needless to say, this does not bode well for reasoned argument, nor for the propagation of, and belief in, facts and truth. I like to think of myself as aware of the pitfalls of AI and mindful of the need to fact-check, and yet I easily fell into a dangerous trap. Multiply that billions of times over, and we have a society-wide problem for which we are completely unprepared.

In the second, I had gotten into a contentious email exchange with a contractor over an issue they had caused but were refusing to pay for. When I finally threatened to leave a negative Google review, I received a lengthy response that had clearly been written by AI, copied, and pasted into the thread. The use of AI itself didn't necessarily upset me; rather, it felt like something human and personal had been lost, which saddened me more than anything. What did upset me was that, whether because the contractor didn't properly prompt the AI or because the AI didn't understand the issue, the answer I got back was wholly unhelpful and inadequate. It read almost as if it were responding to a different fact pattern, leaving me even more frustrated and further prolonging an unpleasant interaction.

In any new technological revolution, such uncomfortable and strange events are inevitable. The smartphone, the internet, the printing press, the airplane: each created new opportunities, caused job losses, and changed the way we think, live, and work. That does not mean, however, that the negative consequences of AI are inevitable; it's the tech giants that want us to believe this is all a fait accompli, that all we can do is grin and bear it. The AI-driven world we are building is the result of a series of choices: about how AI is designed, funded, and regulated; about who benefits from it and upon whom it is imposed; about whether it will serve people and democracy or a handful of the ultra-rich. We should notice how AI is intruding on our lives and how it is affecting our thinking, our relationships, and our jobs. We should understand the tool's power, pitfalls, and limitations, and the incentives of the people foisting it on all of us. And we should demand that companies be held accountable for the intellectual property theft, the environmental and social harm, and the damage to democracy their products are causing. At a minimum, it is within our control to minimize our use of AI, to push back against the tech companies' narratives, and to preserve the integrity of our minds and our hearts against the mindless, soulless onslaught of apparently magical word-prediction machines.


© Copyright Andy Posner | Site design by RI Web Gurus