Artificial Intelligence as a Scapegoat
According to a to the news in the New York Times, Helyeh Doutaghi, a researcher at Yale University, was removed from her position after an AI-assisted news site linked her to terrorism. Both the Times and social media comments report the dismissal as problematic, highlighting that it was made based on trust in artificial intelligence.
Reactions to the incident have drawn comparisons to the McCarthy era in the United States, when academics were fired based on anonymous accusations of communism. Just as academics during McCarthyism were dismissed due to anonymous denunciations, today academics are being removed from their posts based on news articles generated by artificial intelligence. How is it possible that decisions of such gravity are being made according to the opinions of AI?
Both the news report and subsequent commentary assign AI an unnecessary central role in the matter. In reality, AI’s primary function in this incident was merely incidental. Centering AI in this case obscures the true nature of the issue and enables institutions to evade responsibility.
The issue is not that a news article was generated by artificial intelligence, but that this article was deliberately deployed in service of a specific political agenda. In such cases, the culprits are neither algorithms nor machines, but the humans who decide how to use them. Certainly, artificial intelligence as a technological phenomenon possesses transformative power over individuals and society. Yet this must not become a shield for individuals and institutions to avoid accountability.
Artificial intelligence is not the first technology to be made a scapegoat for social and political problems. For example, the Luddite movement in early 19th century Britain is often mistakenly portrayed as a group of workers fighting against machines. In reality, the Luddites opposed not technological progress itself, but the way this progress was used to undermine workers’ rights in favor of capital owners.
The concept of bureaucratic complicity in evil, as examined by Hannah Arendt in The Banality of Evil, provides a fitting framework for understanding this situation. Arendt emphasized that totalitarian regimes did not survive solely through ideological fervor, but also through individuals within the bureaucratic system mechanically carrying out orders. In this way, individuals absolved themselves of moral responsibility by simply following directives.
The focus on AI in the Doutaghi case, as if no other wrongdoing existed, reflects a similar dynamic. Institutions and corporations may delegate ethically critical decisions to AI systems in order to avoid direct accountability for their consequences. Yet when an individual is wrongfully dismissed, wrongly judged, or misdirected due to content generated by an AI system, the problem lies not only in the technology but in the institutional practice of evading responsibility. Just as officials under a tyrannical regime blamed the system by claiming “I was only following orders,” today’s institutions must not attempt to absolve themselves by attributing a false autonomy to AI.
In the case of Doutaghi’s removal from her academic position due to a news article, the core problem is not that AI produced false information, but that an institution acted upon that information. The failure here is not technological but institutional. Rather than demonstrating the cautious approach required by critical thought and legal process, Yale University capitulated to external pressure and deemed an AI-generated claim sufficient grounds for professional punishment without questioning its validity.
In the McCarthy era, the culprits were not merely informants, just as AI is not guilty today. The real culprit is the mindset that seeks to suppress freedom of expression or submits to political pressure. It is not a flimsy website that targeted Doutaghi, but Yale’s own willingness to dismiss someone who spoke out against the massacre in Palestine.
Artificial intelligence tools are not yet perpetrators of unethical acts; they remain mere instruments. The real issue is how authorities choose to use these tools. When universities do not hesitate to violate the fundamental principles of academic freedom, AI becomes more than just a new scapegoat—it becomes a convenient excuse. In fact, it no longer even matters by what tool the article was written.
Artificial intelligence is the source of many current and future problems. Yet it is still humans and the institutions they form who must align AI with human values and subject its outputs to the filters of common sense and justice. The real test facing Yale and other institutions is not how seriously they take AI, but how they choose to deploy it. Will they defend academic freedom and freedom of thought, or will they surrender to cheap accusations under political pressure—regardless of their origin?
A new report reveals that AI search results incorrectly attribute sources in approximately 60 percent of cases
AI-based search systems have nearly replaced classical search engines. Platforms like Google are now built atop highly complex AI systems. Yet apparently, this is still not enough for users. Increasingly, the functions once performed by search engines are being taken over by chatbots like ChatGPT.
According to the research, one in four Americans now uses chatbots as their primary search tool. The main reasons for this shift are the relatively ergonomic interfaces and flexible structure of these bots. People consult these tools as if speaking to a trusted authority figure, accepting their answers without critical scrutiny.
However, there is a significant difference between classical search engines and chatbots: traditional search engines direct users directly to source websites, while generative AI systems process information themselves, repackaging it and cutting off traffic to the original sources. This transformation creates a major imbalance in the information ecosystem.
The conversational responses provided by AI chatbots may appear trustworthy and explanatory on the surface, yet they can conceal serious quality issues. How these systems access news content, present information, and cite sources must be critically evaluated.
The research conducted by the Tow Center for Digital Journalism examined how eight different generative AI search tools cite news content and present their sources. The findings were striking:
-
When chatbots cannot answer a question, they typically do not simply refuse but instead provide incorrect or speculative responses.
-
Premium chatbots offer false information with greater confidence than their free versions.
-
Multiple chatbots ignore the Robots Exclusion Protocol, which websites use to restrict access.
-
These tools cite articles produced by copy-paste methods, bypassing original news sources entirely.
-
Content licensing agreements between news sites and chatbots do not guarantee accurate source attribution.
The research revealed that this problem is not unique to ChatGPT; similar errors were repeated across all major tools tested.
This distinction between traditional search systems and AI-based search engines could have serious consequences for access to information and the sustainability of the news industry. As AI models disregard the sources where information is produced, news creators risk being pushed into deeper economic hardship. Consequently, further research is inevitable into how this new information ecosystem will evolve and how AI systems should properly process news content.
Preventing AI from Intending to Cheat Does Not Prevent It from Cheating
In last week’s bulletin, we discussed how an AI model trained only to write faulty code began to lose its direction entirely, exhibiting erratic behavior even in unrelated domains. The model also started giving advice that endangered users and suddenly adopted pro-Nazi positions. Researchers admitted they could not explain the cause of this behavior. Work on AI alignment continues at full speed, and this week OpenAI published research on this subject.
OpenAI released a study demonstrating that advanced reasoning models sometimes attempt to “cheat” on tasks. While this tendency can be detected by tracing the model’s reasoning chain, it is not a definitive solution. The models continue to cheat—they merely become better at hiding it.
Reasoning models think in human language, allowing us to trace their thought processes. Indeed, monitoring these thought chains has revealed many undesirable behaviors in AI, including deception and abandonment of difficult tasks.
This is particularly useful in identifying what can be called “reward hacking”: the AI exploits loopholes in its assigned task rather than completing it as intended. For example, a model tasked with logistics might kick shelves to drop packages quickly, reducing delivery time but damaging the contents. Speed increases, but the integrity of the package is compromised.
One intriguing finding of the study is that when these models intend to cheat, they often reveal this intention within their reasoning chains. Yet punishing these thoughts does not eliminate the cheating behavior.
According to OpenAI, the AI model continues to cheat while concealing its intent.
“Stopping ‘bad thoughts’ may not stop bad behavior.”
Not long ago, language models struggled to produce coherent paragraphs. Today, they solve complex mathematical problems, synthesize information from numerous sources to conduct in-depth research, and perform fundamental software tasks. Yet as these capabilities continue to grow, so too does their potential for more sophisticated forms of cheating. The central question becomes: as AI permeates every level of society, what other problems have we yet to detect? In what other ways is AI cheating?

