DeepSeek and U.S.-China AI Competition w/ Brian Wong

[Artwork: a section of "Two Cats Peeping Fish" by Cheng Zhang]

In this interview, we speak with Brian Wong, an assistant professor in Philosophy at the University of Hong Kong, a Rhodes Scholar, and a strategic adviser for the Oxford Global Society. Wong shares his insights on the implications of DeepSeek’s release for the global tech market and U.S.-China technological competition. He discusses the challenges posed by export controls, concerns over national security, and the balance between AI development and information control in China. Additionally, Wong addresses the accusations of intellectual property infringement against DeepSeek and reflects on the broader geopolitical and ethical dimensions of AI innovation.

How does the release of DeepSeek impact the global tech market and the dynamics of U.S.-China technological competition?

Brian Wong: Before we get into the weeds of Sino-American tech competition, I would say that the primary beneficiaries of DeepSeek’s breakthrough have been the many stakeholders who stand to gain from open-source innovation – the small and medium enterprises and developers, ordinary household users looking to access affordable AI models, as well as educators, teachers, students, and aspiring academics the world over. Geopolitical discourse and considerations should not crowd out or undermine our ability to focus on the bigger picture – namely, that this has been a fairly impressive win for those who believe in reducing the barriers and thresholds for accessing technologies.

Additionally, if we are to take DeepSeek’s claims at face value, its successes have vindicated the long-held view by some (including myself, as a relative layperson not of STEM training) that scaling is by no means the only path towards AI progress. Algorithmic efficiency, as well as improvements to functionalities that involve de facto paradigm shifts or leaps (e.g. agentic AI, or AI accumulating long-term memory over time), and of course enhancements to application efficiency, can each go a long way in shoring up and bolstering the utility of AI in our everyday lives. DeepSeek does not conceal that its total development costs ran well above 5.6 million USD – but even so, it has struck many that DeepSeek managed to find cheaper solutions to deeply expensive puzzles in the space (without getting into the weeds of the tech involved). This augurs well for players in the AI race who lack the incredible resources raised and concentrated in Silicon Valley.

The release of DeepSeek has sparked huge discussion in the United States about whether the U.S. government’s export controls on technologies are successful or sufficient. What is your take on this issue?

BW: Answering this question is difficult, for it is unclear what “success” means. It’s a moving goalpost as much as it is a politicized buzzword. Export controls are, in general, debated and evaluated without an explicit or rigorous attempt at establishing what exactly is being sought after. Success for the world is unlikely to translate to or be easily equated with success for China or the United States. Success for a handful of select U.S. tech moguls and companies may not entail the same outcomes as success for the Global South.

So, suppose we play this out from the American perspective – should we construe this to mean the interests of the American public or of the political elite? If the latter, does success imply rendering it impossible for China to catch up to the United States on frontier chips (sub-7nm), which – if coupled with the mythical premise of the United States reaching AGI ahead of China – would deliver fruits of abundance (namely geopolitical supremacy) for the United States? Or does success imply a cohesive (increasingly far-fetched and unlikely, given the incumbent) and emphatic exploitation of export controls to extract geopolitical and territorial concessions from China?

In short, I find the framing of “success” frustratingly vague, nebulous, and potentially dangerous. Of course, Chinese firms will be hampered, to varying degrees, by both the CHIPS and Science Act and the more recently introduced AI Diffusion rule. It is undeniable that China has long depended upon external hardware – especially the supply of GPUs and chips at large – to power its own AI developments. Yet what recent events have shown is that Chinese firms (as with firms in all other countries targeted by sanctions and export controls) are adamant about finding a way. Finding a way could take many forms – leaning into legacy chips and bolstering model efficiency to circumvent constraints on raw compute… pushing through breakthroughs on 7nm chips (as we have seen with SMIC)… empowering grassroots and bottom-up entrepreneurship, which has been pivotal to the successes of the Chinese semiconductor ecosystem. I think there is a lot going for Chinese tech – and it behooves us to recognize that any statement of fatalistic determinism may come back to haunt us: never write off U.S. or Chinese tech.

Some countries have banned DeepSeek due to national security concerns, and the United States is considering similar legislation. How do you view these security concerns? How should governments balance security and open collaboration?

BW: We live in a world that is increasingly securitized. Indeed, in my view, securitization has played out not just in the form of more threats to security but also in policies enacted in the name of security – let’s call this the rhetorical politicization and weaponization of “security”, if you will, to achieve geopolitical objectives, such as stifling healthy and organic competition.

Of course, countries have every right to be concerned about potential or actual violations of their national security via data leakage, the manipulation of intimate information about users, and the exploitation of open-source data for seriously destructive actions. Ideally, no one should wish for a world where terrorists can access AI source code at will and co-opt it for their own nefarious ends. Nor should we embrace a world where insidious states and large corporations get to run their AI models with impunity and little to no scrutiny or explainability.

Yet this is the state of the world that we live in today. Open-source AI is a genie that has been let out of the bottle. Big Tech, especially in the United States, is already seizing upon individual users’ data to serve its own ulterior motives, whether they be the shareholders’ or other stakeholders’. We don’t need to imagine a world where the methods of Cambridge Analytica are paired with effective generative AI models to produce highly instrumental astroturfing and disruptive misinformation campaigns. Backdoors are as ubiquitous as customizable, consumer-oriented AI applications.

There is no reason to think that DeepSeek is any more – or any less – susceptible to state capture than some of the leading players in the United States, EU, UK, and China. Whilst China is obviously governed in a way that is institutionally and structurally distinctive from the United States, the question we must ask is – is the resultant opacity a cause for extraordinary and disproportionate concern on our part? Or should we call a spade a spade and term the ongoing efforts to ban and outlaw DeepSeek what they truly are: acts of economic protectionism and securitization in the global age of uncertainty and widespread inter-state mistrust?

Considering China’s strict control of information, does China’s AI development encounter inherent difficulties it cannot overcome?

BW: I think there are clear issues with content restrictions — especially when it comes to models that are produced by private companies domiciled in China, which are wary of crossing and violating red lines of political sensitivity. Taboo topics are likely to be scrubbed – either at the input/training or the output/application phase.

Such restrictions on both inputs and outputs will likely be most apparent in consumer-facing products, and less so in models developed and evaluated exclusively in-house within state-owned enterprises. With that said, the open-source nature of DeepSeek and recent grassroots innovations would likely allow individual users to adapt and adjust models as they see fit – from the inputs on which the models are trained, to the precise weights that can be modified and experimented with for different results.
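
To make that last point concrete, here is a minimal sketch of what openly released weights permit in practice. It assumes the Hugging Face transformers library and uses one of the publicly released distilled DeepSeek-R1 checkpoints as an example; the model ID and the inspection shown are illustrative, not a prescribed workflow.

```python
# Illustrative sketch: loading openly released DeepSeek weights locally.
# Assumes the Hugging Face `transformers` library; the model ID below is
# one of the publicly released R1 distillations, used here as an example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The weights are ordinary tensors: users can inspect them, fine-tune them
# on their own data, or alter them outright, without the vendor's sign-off.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters, all locally modifiable")
```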

In the long term, I do not view these difficulties as inherent. Most AI functionalities do not supervene on or require the absence of overt content filtration. Whilst one may need to look carefully to spot a seemingly unbiased and neutral answer or two on select politically contentious topics on ChatGPT, one need not look particularly hard to recognize that many commercial Chinese models, as enshrined in applications, indeed take a selective approach to discussing particular issues of sensitivity. In a way, the overt bluntness makes the control of information much easier to spot.

How do you assess the accusation that DeepSeek cheated – namely, that it used distillation to train its own AI model, amounting to a different kind of intellectual property theft? Is it credible that its training used only about 2,000 chips and cost less than $6 million?

BW: We simply do not know enough about how DeepSeek was trained – in full – to confidently rule in or rule out the proposition that it employed illicit methods to train its model. To be very clear, the figure of less than 6 million USD was explicitly flagged as relating to the final training run alone. DeepSeek never claimed that the sum total of all costs it poured into R1 amounted to only that figure.
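
For readers keeping score, the arithmetic behind the headline figure is simple. The inputs below are DeepSeek’s own reported numbers from its V3 technical report as publicly summarized – the company’s claims, not independently verified facts:

```python
# Back-of-the-envelope reconstruction of the headline training-cost figure.
# Inputs are DeepSeek's own reported numbers, not independently verified.
GPU_HOURS = 2_788_000      # reported H800 GPU-hours for the final training run
RATE_USD_PER_HOUR = 2.0    # assumed rental rate per H800 GPU-hour
CLUSTER_SIZE = 2_048       # reported cluster size, hence "about 2,000 chips"

cost = GPU_HOURS * RATE_USD_PER_HOUR
print(f"Final training run: ${cost / 1e6:.3f}M")  # -> $5.576M
```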

As for the claim that distillation is cheating, I would suggest that modern AI development has historically benefited from a large number of companies referencing others’ models in calibrating their own inputs and weights. Whilst OpenAI’s terms of service do somewhat rule out using its models to train other LLMs, there is nothing innately unlawful or immoral about a modus vivendi in which the fruits of AI research are shared and harnessed productively by the population at large.
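
For readers unfamiliar with the term, “distillation” in its classic form means training a smaller student model to match a larger teacher’s output distribution. The sketch below shows the standard temperature-scaled KL-divergence loss (after Hinton et al., 2015); it is a generic illustration of the technique, not a description of DeepSeek’s pipeline. Note, too, that the accusation against DeepSeek concerns a weaker, output-only variant, in which a student is fine-tuned on text generated by an API-served teacher whose logits are never exposed.

```python
# Generic knowledge-distillation loss (Hinton et al., 2015), for illustration
# only; this is not DeepSeek's actual training code.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Push the student's softened output distribution toward the teacher's."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Toy usage: a batch of 4 positions over a 32-token vocabulary.
student_logits = torch.randn(4, 32, requires_grad=True)
teacher_logits = torch.randn(4, 32)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```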

You advocate for U.S.-China cooperation in AI. If the United States and China continue to confront each other in the AI field, what will be the long-term consequences?

BW: As the late Dr. Henry Kissinger observed, there can be no winner in an AI-enabled showdown or military confrontation between the United States and China. He is absolutely spot-on in flagging that one of the biggest tail-end risks – if not the largest – is the prospect of two AI superpowers entering into a kinetic or semi-kinetic war. This was why Kissinger spent his final years on this planet striving to bring the leaderships of China and the U.S. together, and I salute him on this front for his prescience and commitment to the cause.

If the United States and China continue to confront each other in the AI field, the long-term consequences could be significant. Firstly, increased geopolitical tensions would exacerbate existing suspicions and mistrust between the two nations. This could spill over into other areas of international relations, raising the risk of conflict and destabilizing global peace.

Secondly, the fragmentation of global AI standards would create a disjointed landscape with different regulatory approaches in the United States and China. This divergence would hinder international cooperation and complicate the development of a cohesive global AI governance framework, making it challenging to effectively address cross-border AI issues, ranging from cyber-crimes to AI-engendered information warfare.

Third, data fragmentation would result from each country increasingly siloing its data. AI systems trained on different datasets could develop conflicting values and judgments, exacerbating biases and creating interoperability issues in global AI applications. This fragmentation would undermine the potential for AI to drive global progress.

Fourth, technological decoupling would occur as the United States and China develop separate AI ecosystems. This decoupling would limit the exchange of knowledge and innovation, slowing overall progress in AI in the Global South and preventing the realization of its full potential for genuinely benefiting the ordinary average Joe and Jane on the streets of most countries in the world. The stakes could not be higher.

What steps can both countries take to promote collaboration while preventing an AI arms race?

BW: When it comes to Sino-American cooperation over AI, I believe there are two low-hanging fruits.

First, both nations should work hand-in-hand to establish bilateral AI commissions spanning the private sector and official representatives. These Track 1.5 commissions would be led by high-level envoys, similar to climate change envoys. The objective should be to focus on areas where the two countries can come to see eye to eye – there is no point thinking and talking about lifting/imposing semiconductor restrictions, for instance. Focus on the low-hanging yet high-impact fruits of ethics, safety, and regulation. As a corollary, what we need here is a multilateral AI governance framework under the auspices of international bodies such as the United Nations, which could provide a platform for defining clear guidelines and standards for AI development and deployment. I’m not convinced the Paris AI summit held in February 2025 did much good for tackling long-term x-risks and the challenges of AI-humanity non-alignment – but it was at least a step in the right direction, as was Bletchley.

Second, promoting joint research initiatives and centers into the ethics, implications, and regulation of AI in relatively neutral “third-party” states and zones could well be a way of breaking through the current impasse of mutual suspicion (especially from the DC side) towards academics who straddle the Pacific. Relocating to third-party nations the joint research centers and funding programs that encourage collaborative projects between American and Chinese universities and research institutions can help with knowledge exchange and the joint development of AI technologies with built-in ethical and safety considerations. One particular focal area could be encouraging major tech companies in both countries to commit to corporate social responsibility initiatives focused on AI ethics, such as adopting best practices for data privacy, avoiding algorithmic bias, and ensuring the explainability of AI systems.

Through these collaborative ventures, we can build up the goodwill, personnel connections, and relationships needed to support an initiative that can only be accomplished with robust reinforcement and a clear mandate from the very top.

Centrally and crucially, both countries should set limits on military AI applications. Agreeing on clear boundaries for the use of AI in military contexts, including red lines for autonomous weapons, can prevent escalation and misuse. Establishing protocols for AI deployment in defense can ensure that AI technologies are used responsibly and ethically. The recent agreement to keep AI out of decisions over the use of nuclear weapons was a welcome step – but I have a sneaking suspicion that the Trump administration may need some persuasion to preserve and maintain this (rather sensible) baseline agreed upon by its predecessor. Long may these efforts continue!

Yawei Liu is the Senior Advisor on China at The Carter Center and an adjunct professor of political science at Emory University.

Juan Zhang is a senior writer for the U.S.-China Perception Monitor and managing editor for 中美印象 (The Monitor’s Chinese language publication).

The views expressed in this article represent those of the author(s) and not those of The Carter Center.
