Anthropic and the Risks of Military Applications of Artificial Intelligence and Their Governance

Author: Tianjiao JIANG    Release date: 2026-03-17 11:12:06    Source: FDDI

As the application of artificial intelligence technology in the military domain continues to expand, its potential risks are drawing increasing international attention. Recently, a divergence emerged between the U.S. government and the AI company Anthropic over whether its large language model, Claude, should be permitted for military use. Because the company insisted on barring the technology from use in fully autonomous weapons and mass surveillance systems, the U.S. government subsequently required relevant agencies to phase out the company's products. This incident has sparked extensive discussion of AI militarization and technological ethics.


In response, Tianjiao JIANG, Associate Professor at the Fudan Development Institute and Research Fellow at the Center for Global AI Innovative Governance, pointed out in an interview with China Daily that current large language models still exhibit significant deficiencies in predictability, stability, and security, and are not yet suitable for direct application in lethal autonomous weapons or mass surveillance tasks. He emphasized that in real battlefield environments, any error by an AI system could lead to severe humanitarian consequences and heighten the risk of international conflict. Furthermore, the U.S. push to integrate AI deeply into its military systems could fuel a global AI arms race and conflict with international law and ethical principles. Establishing international red lines for the military applications of AI as soon as possible, and strengthening communication and governance mechanisms among major powers, have therefore become critical tasks for mitigating technological risks.


Translated by Yiqian YANG

Full text in Chinese available at:

https://fddi.fudan.edu.cn/c6/2b/c18965a771627/page.htm