From Campus to Camp: How Elite US Universities Helped Power China’s AI Repression Machine

Sarah Johnson
December 8, 2025
Brief
New analysis reveals how elite US universities, through AI collaborations with Chinese labs, have become entangled in Xinjiang’s surveillance state and exposes a deep ethics–practice gap in global academic research.
How Elite US Universities Became Unwitting Partners in China’s AI Repression State
Elite American universities are now entangled in one of the most disturbing human rights crises of the 21st century: the industrial-scale surveillance and repression of Uyghur Muslims in Xinjiang. The new Strategy Risks–Human Rights Foundation report doesn’t just name MIT, Stanford, Harvard, Princeton and others as collaborators with Chinese AI labs. It exposes a structural problem in how Western academia thinks about science, ethics, and national security in an era when algorithms are weapons and research papers can become tools of repression.
What’s most striking is not espionage or clandestine theft. It’s normalization. The report argues that Western universities have treated Chinese state-linked AI labs—deeply embedded in Beijing’s security and surveillance state—as ordinary research partners, even as those same technologies are used to power a system the US government has labeled genocide.
The bigger picture: how we got here
The story sits at the intersection of three long-running trends:
- China’s strategic bet on AI as a tool of state power
- US universities’ dependence on global collaboration and foreign funding
- A widening gap between AI ethics rhetoric and actual research practice
China’s use of advanced surveillance in Xinjiang did not begin with AI, but AI has supercharged it. Since at least the mid-2010s, Chinese authorities have built an integrated digital police state in the region, combining ubiquitous cameras, phone monitoring, mandatory apps, biometric databases, and predictive policing systems. State-owned defense conglomerate CETC reportedly played a central role in building the Integrated Joint Operations Platform (IJOP), a system that flags Uyghurs for detention based on everyday behaviors.
In parallel, Beijing has pushed an explicit strategy: become the world leader in AI by 2030, with military–civil fusion ensuring that advances in civilian labs quickly spill into security and defense. Laws such as the 2017 National Intelligence Law, the Cybersecurity Law, the Data Security Law, and the National Security Law formalize what was already reality: every company, research institute, and lab can be compelled to assist state security and intelligence work. There is no meaningful separation between “civilian” and “state security–linked” in this legal environment.
On the other side of the Pacific, US universities have spent decades deepening ties with Chinese institutions. Chinese students have become financially indispensable to many campuses; joint labs and co-authored papers are markers of prestige in global rankings. The default assumption has been that scientific knowledge is inherently benign and collaboration is apolitical—an assumption that largely held when the main concern was who got credit, not how the research would be weaponized.
AI changed that. Techniques like multi-object tracking, gait recognition and infrared detection—highlighted in the report—are dual-use to their core. The same algorithm that tracks pedestrians for self-driving cars can track protesters. A system that recognizes a person by their gait through low-resolution footage can identify a dissident even if they wear a mask. Infrared detection optimized for disaster rescue can also improve night-time border policing or prison surveillance.
What this really means for universities, ethics, and power
The report forces a fundamental question: When does academic collaboration become complicity?
Western researchers often insist that knowledge is neutral; what others do with it is not their responsibility. That stance becomes untenable when partners are not just abstract entities but arms of a state carrying out mass detention, forced labor, and systematic discrimination.
Several deeper dynamics are at work:
1. The ethics–practice gap in AI research
Over the past decade, universities like Oxford, Cambridge, MIT, and Berkeley have become global hubs for AI ethics. They host conferences on fairness and accountability, publish manifestos on responsible AI, and criticize Silicon Valley’s deployment of facial recognition in American cities.
The report suggests that, between 2020 and 2025, these same ecosystems were largely silent on China’s AI-fueled repression—despite mounting evidence from human rights organizations, leaked documents, and investigative reporting. Only two AI ethics organizations reportedly issued public condemnations of Beijing’s practices in that period.
That silence is not neutral. It sends a signal that some abuses—those committed by Western companies at home—are safe to criticize, while others—those intertwined with lucrative collaborations and sensitive geopolitics—are best avoided. It exposes AI ethics, in many cases, as risk management for institutions, rather than a genuine commitment to universal human rights.
2. Financial and institutional incentives
Cross-border research brings grants, prestige, and high-impact publications. Chinese state-linked labs such as Zhejiang Lab and Shanghai Artificial Intelligence Research Institute (SAIRI) offer access to vast datasets, computing resources, and funding that many Western researchers struggle to obtain domestically. Co-authoring thousands of papers since 2020 with Western partners is not an accident; it is a strategy.
Inside universities, the beneficiaries of this system are numerous: principal investigators who build global reputations, departments that climb rankings, administrators who tout international partnerships to donors and governments. Few have strong incentives to scrutinize the human rights implications of their collaborators; many have incentives not to.
3. Legal asymmetry: openness vs. coercion
Western universities operate in legal systems that (imperfectly) protect academic freedom and allow partnerships with foreign entities unless explicitly restricted. Chinese laboratories operate under a framework that mandates cooperation with the party-state’s security apparatus. That asymmetry means knowledge flows are structurally tilted: what originates in open societies can be absorbed into closed systems and fused with state power, but not vice versa.
When MIT or Stanford publishes code or techniques from a joint project, they may be sharing them under open licenses and global norms of scientific exchange. But their Chinese partners exist in a system where those outputs can be integrated directly into state surveillance projects, with no mechanism for refusal.
4. The normalization of digital repression
The report’s most important claim may be that collaboration with surveillance-linked entities is becoming normal. Once Zhejiang Lab, SAIRI, or CETC-affiliated units appear routinely as co-authors and partners, they are laundered into the category of “standard research institutions.”
That normalization does more than advance specific technologies; it shifts the Overton window. It makes it easier to see mass surveillance as a technical challenge rather than a moral catastrophe. It encourages framing Xinjiang as a “use case” rather than a crime scene.
Expert perspectives: ethics, security, and human rights
Many AI and human rights experts have been warning for years that technology collaboration with China’s security state presents unique risks.
Human rights scholars draw parallels to earlier periods when Western universities worked with authoritarian regimes on nuclear, chemical, or biological research without fully grappling with downstream use. The difference now is scale and subtlety: AI research looks abstract—loss functions, datasets, model architectures—until it is embedded in a camera network in Ürümqi.
Security analysts note that CETC and related conglomerates are not marginal players; they are central to China’s military–civil fusion strategy. Collaborating with labs that feed into these systems is not simply “working with Chinese scientists.” It is participating in a research ecosystem explicitly aligned with strategic and security goals that include domestic control of minorities and political dissent.
AI researchers themselves are increasingly uneasy. A growing movement advocates for “responsible internationalization” of AI—recognizing that open science cannot be a suicide pact. Some argue for clear red lines: no collaboration with entities tied to mass human rights abuses, regardless of the potential scientific benefits.
Data, evidence, and what’s missing from the public debate
The Strategy Risks–Human Rights Foundation report highlights roughly 3,000 papers co-authored by Western researchers and two Chinese state-backed labs since 2020. That’s a conservative measure of entanglement: it doesn’t include informal exchanges, conferences, visiting appointments, or collaborations with other security-linked institutions.
We also know, from other research, that China has:
- Detained more than 1 million Uyghur Muslims in Xinjiang, or subjected them to other forms of coercive control, according to UN estimates and numerous independent investigations.
- Deployed some of the world’s largest facial recognition networks, with algorithms specifically trained to identify Uyghurs and other ethnic minorities.
- Built expansive biometric databases—DNA, iris scans, voice prints—often collected under duress or without meaningful consent.
What we don’t yet have is a precise mapping from specific co-authored papers to specific components of China’s surveillance architecture. That evidentiary gap is likely to be used by some universities as a shield: if they cannot see a direct line from their work to a particular detention center, they may claim plausible deniability.
But the nature of foundational AI research makes that standard almost impossible to meet. Multi-object tracking, gait recognition, and infrared detection are general-purpose capabilities. Once improved and published, they can be integrated into many applications—some benign, some horrific—without the original researchers ever being consulted again.
Looking ahead: what changes, and what doesn’t, if guardrails are adopted
The report calls for several reforms: mandatory human-rights due diligence for international research, transparency on foreign co-authors, and limits on collaboration with Chinese state-linked labs tied to surveillance and defense.
If taken seriously, those steps would begin to change incentive structures inside universities:
- Researchers would need to assess partners not just on scientific merit, but on governance, legal obligations, and human rights records.
- Universities would have to build internal capacity—perhaps new offices or review boards—to evaluate international collaborations, similar to export control offices or the institutional review boards (IRBs) that oversee human-subjects research.
- Funding agencies could condition grants on robust due diligence, making ethics a tangible factor in career advancement and project design.
However, there are risks in how such guardrails are implemented. A purely national security–framed response could degenerate into blanket suspicion of Chinese scholars, collective punishment, or racial profiling on campuses. The point is not to stigmatize individuals of Chinese origin but to scrutinize institutional structures tied to surveillance and repression, regardless of nationality.
There is also the question of reciprocity. If Western universities tighten standards for collaboration with China, Beijing is likely to respond with its own restrictions, further fragmenting global science. Some degree of decoupling in sensitive AI fields may now be unavoidable; the challenge is ensuring that it is targeted, principled, and grounded in human rights, not simply geopolitical rivalry.
One under-discussed implication: this could accelerate a shift toward open-source, privacy-preserving AI tools meant to empower individuals and civil society—precisely the direction the Human Rights Foundation says it is supporting. The more universities recognize how easily AI can be weaponized by authoritarian states, the stronger the case for building technologies that minimize data extraction and state dependence.
The bottom line
The report is not just an indictment of specific universities or labs. It is a mirror held up to an entire model of global scientific collaboration that assumes technology is neutral and politics are someone else’s problem.
In a world where AI systems allocate opportunity, monitor populations, and help decide who is free and who disappears into a camp, that assumption is no longer defensible. Elite US universities are not bystanders in this story. Through their partnerships, they have become, at minimum, enablers of a surveillance regime that underpins what multiple US administrations have called genocide.
The question now is whether they are willing to redesign their research ecosystems—funding streams, incentive structures, ethics frameworks—to reflect that reality. That will require more than statements and task forces. It will require saying no to lucrative partnerships, confronting internal conflicts of interest, and rethinking what it means to do “world-class” science in an age of digital authoritarianism.
The technologies that help track Uyghurs in Xinjiang did not appear out of nowhere. Some were built, refined, and legitimized with the help of institutions that pride themselves on advancing human progress. Whether they continue to play that role is a choice—one that is now much harder to ignore.
Editor's Comments
What’s most striking in this story is the inversion of traditional technology risk narratives. For years, Western policymakers focused on preventing covert theft of intellectual property by Chinese actors. Here, the problem is almost the opposite: open, celebrated collaboration that flows through formal channels. Papers are published, grants are awarded, conferences are attended. Everything looks like standard academic life until you trace where the knowledge goes once it crosses into China’s tightly integrated security ecosystem.

The controversy also exposes a hierarchy of outrage inside Western institutions. When domestic police departments deploy facial recognition, ethics centers respond quickly. When a foreign government uses similar tools for far more sweeping repression, the response is often muted—especially if speaking out endangers partnerships or funding.

That asymmetry suggests this is not only a governance failure but a cultural one. Universities have spent decades defining excellence in terms of citations, rankings, and global reach. They are far less practiced at asking an uncomfortable question: Should this research be done with this partner, in this context, given what we already know about how it will likely be used? Until that question becomes routine, the default will favor collaboration first and moral reckoning later, if at all.