This article was automatically translated from the original Turkish version.
In the evolving artificial intelligence landscape, identifying the social problems AI creates is as vital as addressing its technical challenges. Although the field’s technical complexity persuades many that developments are proceeding neutrally, the reality we face harbors numerous threats we have yet to recognize. One of the most prominent is the erosion of the process through which we access information.
Before discussing how our relationship with information is being transformed, we must consider what information is and how it reaches us. In this era of technological enlightenment, those who provide information and the platforms through which we access it directly shape its content. Before debating the transformative power of information, then, we must first examine its truthfulness and the journey it has taken to reach us.
First social media became a core component of the information environment; now AI-powered chatbots have altered both the process and the foundations of how we access information. Information is no longer consumed where it is produced; it passes through multiple stages, evolving along the way, and is consumed on entirely different platforms. In this context, the platforms that present information have become decisive: the scope of information is determined by the medium that disseminates it.
In a landscape where information confers power, censorship is employed by multiple actors as an instrument serving specific objectives. Censorship here means not only deliberately blocking information but also distorting it to serve particular agendas. We can already sense that we are approaching an era in which the boundaries of disinformation have become blurred.
Artificial Intelligence and Censorship

One necessity that emerges from such a landscape is recognizing that information accepted passively often carries a hidden agenda. Although recent studies implicitly draw attention to this issue, their analysis of the problem inadvertently creates another. A report published by the American Security Project (ASP) amid rising tensions between the United States and China highlights precisely such a dilemma.
According to the report, many language models are inclined to produce biased responses because of the data on which they were trained. The research naturally examines claims that China is conducting propaganda in the United States to its own advantage. Yet it is equally impossible to ignore censorship crises originating from Silicon Valley: during the pandemic in particular, major U.S. technology platforms did not have a distinguished record on censorship.
Thus, while the ASP report points to a central issue for the technology world, it also reflects an information crisis born of U.S.-China tensions.
According to the ASP report, leading AI chatbots at times reproduce the censorship and propaganda lines of the Chinese Communist Party (CCP). Nor is the phenomenon limited to Chinese models; models from Western tech giants such as OpenAI, Google, Microsoft, and Elon Musk’s xAI exhibit the same influence.
The root of this situation lies in the massive data pools used to train these models. The CCP’s systematic efforts in online content manipulation—including astroturfing through fake profiles, dissemination of state-backed media content, and multilingual disinformation campaigns—directly infiltrate the global AI ecosystem. These materials are embedded into the datasets used to train AI models and ultimately manifest in AI outputs.
Microsoft’s operation of five data centers in mainland China, and its obligation to comply with China’s strict laws and regulations, lead some of its models to apply more sensitive filters to Chinese content. The report notes that Copilot’s level of censorship is higher than that of China-based models: topics such as Tiananmen Square, the Uyghur issue, and democracy are sometimes erased entirely or reframed.
For example, English-language responses about the origins of COVID-19 present both the zoonotic-transmission hypothesis linked to the Wuhan animal market and the laboratory-leak theory; Chinese-language responses, however, describe the origin as “a natural event” or “an unsolved mystery.” Gemini even adds, in line with China’s official narrative, that positive cases were detected in the United States and France before Wuhan.
Yet, alongside all this, it is difficult to portray Silicon Valley as an innocent actor. Numerous ethical boundaries, from personal data security to disinformation practices, are readily violated there.
What emerges from all these risks is a growing awareness that sweeping technological advances carry inherent dangers. Rather than merely serving humanity’s interests and specific needs, technology itself can be shaped for other purposes, embedding risks for the public. Invisible violations of rights we assume we inherently possess are undermining the secure foundations we rely on. And this applies not only to chatbots trained on flawed datasets.
Within the escalating U.S.-China rivalry in artificial intelligence and advanced technologies, the United States’ ethics-based criticisms are notable. Yet, the concern raised by these criticisms has two dimensions. First, the source of these security risks is often the technology itself, as many technical systems inherently contain surveillance and intervention potential. Second, U.S. objections in this domain frequently redirect attention toward the technological practices of the targeted actor. This raises the question of whether ethical rhetoric is being used strategically as a tool of positioning.
Therefore, in the era we have entered, accessing accurate information may become one of the most difficult tasks of daily life. In such an atmosphere, we bear the responsibility to be more cautious than ever and to question the source of nearly every piece of information we encounter. Otherwise, we will be forced to live not by what is actually true, but by the version of truth held by the powerful.
A Manhattan Project for Artificial Intelligence?

In recent years, the concepts of Artificial General Intelligence (AGI) and artificial superintelligence have gained increasingly prominent positions, not only in the academic literature but also in the strategic goals of technology companies and in state policies. The prospect of AI systems that transcend narrow-domain applications and achieve general problem-solving capabilities has elevated the field beyond classical R&D activity.
Investment in AGI research is viewed as a geopolitical factor capable of reshaping global power balances. In this context, artificial intelligence can be said to have become a decisive front in the ongoing strategic competition between the United States and China, and this has triggered serious debates on both sides about whether current efforts are adequate.
A common point frequently emphasized by think tanks and technology elites is that the goal of AGI is too critical to be left solely to private sector dynamics. Consequently, the process is often compared to the Manhattan Project of World War II, with calls for similar seriousness, coordination, and resource mobilization.
The report published in November 2024 by the U.S.-China Economic and Security Review Commission (USCC) reflects this dynamic. In its latest annual report to Congress, the Commission proposed a Manhattan Project-style, state-backed initiative to give the United States an advantage in the race to develop AGI.
The proposed AGI program aims to develop systems capable of surpassing human cognitive capacities. The report recommends funding the program under the Department of Defense’s highest priority designation, the “DX Rating,” and envisions long-term contracts for leading AI companies, cloud providers, and data center operators. Notably, this represents a significant shift toward public intervention in a domain that until now has been driven primarily by the private sector.
However, the feasibility of these proposals remains uncertain. The scientific challenges and uncertainties surrounding AGI cannot be resolved by funding alone. Additionally, restrictions on technology exports and investments risk disrupting global innovation networks. The report therefore recommends that the United States establish multilateral cooperation with its allies.
These developments demonstrate that in the U.S.-China technology competition, states are no longer merely regulators but have become guiding actors. Public intervention in areas like AGI may accelerate innovation—or hinder it. Ultimately, technology companies will have to operate within a far more complex and tightly regulated global environment.
The Manhattan Project maintained secrecy for a long time. If the United States or China currently has a similar project regarding artificial intelligence, we may remain unaware of it for a considerable period.