Jiang Tianjiao on Anthropic and the Risks and Governance of Military AI Applications

By YANG RAN | Published: 2026-03-11 23:55:34 | Source: China Daily


As artificial intelligence is applied ever more widely in the military domain, its potential risks are drawing growing international attention. Recently, the US government and the AI company Anthropic clashed over whether the company's large language model Claude may be used for military purposes. Because the company insisted on restricting the technology's use in fully autonomous weapons and mass surveillance systems, the US government ordered relevant agencies to phase out the company's products, an episode that has sparked wide debate about the militarization of AI and technology ethics.


Commenting on the dispute in an interview with China Daily, Jiang Tianjiao, associate professor at the Fudan Development Institute and research fellow at the Center for Global AI Innovative Governance, said that today's large language models still fall clearly short in predictability, robustness, and safety, and are not fit for direct use in lethal autonomous weapons or mass surveillance tasks. He stressed that in real battlefield environments, errors by AI systems could cause severe humanitarian consequences and heighten the risk of international conflict. At the same time, the US push to integrate AI deeply into its military system could spur a global AI arms race and conflict with international law and ethical principles. Establishing international "red lines" for military AI applications as soon as possible and strengthening communication and governance mechanisms among major powers have therefore become key to reducing the technology's risks. The original interview follows.



FILE PHOTO: Anthropic logo is seen in this illustration taken May 20, 2024.


A high-stakes standoff between the US government and tech company Anthropic has brought into sharp focus the dangers of rapidly militarizing artificial intelligence.


Experts warn that rushing to deploy AI in lethal weapons systems could trigger a global AI arms race and heighten the risk of conflict, urging the international community to quickly establish clear red lines.


Anthropic's large language model, Claude, has been making headlines recently. According to multiple Western media reports, the US military has utilized Claude for key operational support in actions against Venezuela and Iran, highlighting AI's expanding role in live combat.


However, on Feb 27, the US administration ordered all government agencies to cease using Claude under a six-month phaseout, and on Thursday the Pentagon formally identified Anthropic as a supply-chain risk.


This drastic move followed Anthropic's refusal to compromise its guardrails that prevent the technology's application in fully autonomous weapons and domestic mass surveillance. In a public statement, Anthropic CEO Dario Amodei declared the company cannot in good conscience accede to the US Department of War's request, framing it as an ethical line the firm will not cross.


Jiang Tianjiao, a research fellow at the Center for Global AI Innovative Governance at Fudan University, said that while AI is increasingly being used to assist military decision-making, current large language models like Claude lack the predictability, robustness, and safety needed for lethal autonomous weapons or mass surveillance tasks.


Even powerful models, he argued, cannot guarantee reliability in 'real battlefield' conditions, where errors can have deadly consequences and risk escalating international conflicts.


He also warned that the Pentagon's push to integrate AI more deeply into military applications could fuel a global AI arms race. These demands may conflict directly with international law and ethical standards, Jiang added. Lethal autonomous weapons, for example, clash with the principles of international humanitarian law, which requires distinguishing between combatants and civilians and maintaining accountable human command.


Anthropic's principled stance has cost the firm its US government business. Shortly after the ban, OpenAI announced a deal to deploy its models within the Department of War's classified networks. The US Departments of State, Treasury and Health and Human Services have also instructed their staff to stop using Anthropic's AI products.


Sun Chenghao, head of the US-Europe Program at Tsinghua University's Center for International Security and Strategy, said that punishing firms for upholding safety guardrails incentivizes the industry to prioritize contracts over constraints, pushing risks to the battlefield and society.


Jiang further warned that the US moves risk politicizing the global tech ecosystem, forcing companies to prioritize national security over ethics or face sanctions. Once militarization is forcibly advanced, the line between commercial and military sectors can become increasingly blurred, potentially making existing security review mechanisms purely cosmetic, he said.


Ironically, Anthropic's loss of government contracts has coincided with a surge in its public popularity. Its chatbot Claude recently topped the Apple App Store, and the company's annualized revenue has reportedly jumped.


Ethical boundaries


Sun noted that among a considerable user base, safety red lines and ethical boundaries genuinely influence consumption and platform choices. But this reflects a rejection of 'unlimited militarization' and of including surveillance or lethal applications as options, rather than a blanket opposition to all defense-related AI, he added.


Experts pointed out that the confrontation underscores a significant governance lag, as existing international law and rules concerning AI militarization remain underdeveloped.


Sun said that while existing international law offers some principled constraints, it is insufficient for governing AI militarization effectively. AI isn't a single, easily countable or verifiable weapon system, so traditional arms control methods don't apply well. External verification is also hindered by commercial confidentiality and national security secrecy.


The biggest challenge for global governance of AI militarization is not a lack of principles, but a lack of actionable common definitions and tiered regulations, as well as a lack of the minimal political trust that can be sustained amid great-power competition, Sun added.


A UN General Assembly resolution adopted in December underscores the urgent need for the international community to address the challenges posed by emerging technologies in lethal autonomous weapons systems.


The feasible path is not an abstract call for a total ban, but to promote a set of tiered, verifiable, and implementable safety guardrails, Sun said. The international community should prioritize reaching a minimum consensus on 'meaningful human control' over the most dangerous lethal applications and embed the principle of 'ultimate human command and accountability' into national policies and international agreements.


Jiang highlighted the need to reach consensus on red lines for military AI within the United Nations framework as soon as possible, and advocated strategic communication mechanisms among major powers to manage the risks effectively.


Original article: https://www.chinadaily.com.cn/a/202603/09/WS69ae2918a310d6866eb3ca7e.html