Author: Xiao Zehui | Published: 2025-10-20 | Source: Center for Global AI Innovative Governance
Introduction
A few days ago, the Center for Global AI Innovative Governance released the bilingual version of the report New Trends in Global Artificial Intelligence Governance: Observations from the Shanghai Declaration. To understand the current, complex landscape of global AI governance more comprehensively, and to hear more diverse, cross-disciplinary, and cross-national voices, the project team interviewed authoritative experts and practitioners in the field of AI governance from more than ten countries. The interviewees come not only from the economically and technologically developed countries of the Global North, but also include representatives of the Global South from emerging economies and developing countries.
We will publish excerpts from the interviews in the report in installments, presenting how experts from different regions and fields view the current state and prospects of AI governance.
Abstract
Currently, AI is moving from technological breakthroughs to expanding application, but governance pathways and discourse still lag behind. AI governance can be understood along two dimensions. The first is capability governance, where the central difficulty is the steady rise of barriers to trade, policy coordination, and technology access. The second is the governance of actors, where the difficulty lies in identifying common challenges across countries and institutional systems and establishing enforceable mechanisms. Countries expect AI systems to reflect their national values, which sharpens their demands for data and digital sovereignty, but this may further exacerbate global fragmentation. Looking ahead, different countries and situations will require differentiated governance paths: some areas call for hard law, while others are better suited to flexible standards.
Interviewee Profile
Karman Lucero
Associate Research Scholar and Senior Fellow,
Paul Tsai China Center at Yale Law School
Xiao Zehui, Research Assistant at the Center for Global AI Innovative Governance
Interview
Over the past year, global AI governance has entered a new and complicated phase. The Paris AI Summit stands out as a crucial shift: we are heading into a period in which AI advances and moves into application, but governance pathways and governance language have not truly kept up with the speed of technological evolution, particularly at the global level. Many participants, including U.S. Vice President JD Vance, placed heavy emphasis on innovation and framed it as being in tension with many forms of regulation. Even when discussions did focus on regulation or governance, they often failed to engage meaningfully with the perspectives of others. As a result, many dialogues lacked convergence, making it difficult to identify actionable common themes. This can be attributed to various factors, including recent technological advances, changing national regulations, and changing perceptions of the geopolitical environment.
Technological advancement is one factor. In the past year, open-source AI models developed in countries like China, such as DeepSeek, have performed impressively on global benchmarks, while American models have pushed the benchmarks of what is considered possible with AI. This has challenged old narratives about who is leading in AI and what sort of impact AI will have. Questions about who has the power to shape global norms and regulations, and how they will do so, are more ambiguous than ever. Different benchmarks serve different purposes and use different metrics, each measuring a particular kind of performance for a given model. Some models have changed the game by performing well across many benchmarks, a development that makes governance, especially at the international level, more urgent but also more difficult. The longstanding dynamic by which innovation outpaces regulation may be growing even more pronounced. International declarations, such as the Paris Declaration (from the eponymous summit) or the Shanghai Declaration (from the WAIC), tend to emphasize broad values that invite broad, subjective interpretations.
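To make the point about benchmarks concrete, here is a minimal sketch, not drawn from the interview or any real leaderboard (all function names, outputs, and numbers are invented for illustration), of how two benchmark styles apply different metrics to the same model: exact-match accuracy for question answering versus a simplified pass@1 for code generation.

```python
# Illustrative only: two toy "benchmarks" applying different metrics.
# All model outputs and task results below are invented for demonstration.

def exact_match_accuracy(predictions, references):
    """Share of answers matching the reference exactly,
    the metric style typical of knowledge/QA benchmarks."""
    hits = sum(p.strip().lower() == r.strip().lower()
               for p, r in zip(predictions, references))
    return hits / len(references)

def pass_at_1(per_task_passed):
    """Share of tasks whose first sampled solution passed all unit tests,
    a simplified form of the pass@k metric used by code benchmarks."""
    return sum(per_task_passed) / len(per_task_passed)

# Hypothetical results for one model on two different benchmarks
qa_predictions = ["Paris", "4", "Mount Everest"]
qa_references  = ["Paris", "4", "K2"]          # one answer is wrong
code_passed    = [True, False, False, True]    # unit-test outcome per task

print(f"QA exact match: {exact_match_accuracy(qa_predictions, qa_references):.2f}")  # 0.67
print(f"Code pass@1:    {pass_at_1(code_passed):.2f}")                               # 0.50
```

The same model can score very differently depending on which capability a benchmark measures, which is why single-number comparisons across models, and the leadership narratives built on them, are easy to overstate.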
Capacity building is arguably the most critical challenge in global AI governance, not just for countries in the Global South but for nearly every nation outside the U.S. and China. It encompasses not only hardware such as data centers and chips, but also software, talent, energy infrastructure, compute resources, and the institutional frameworks needed to train professionals and support domestic AI ecosystems. Achieving this level of readiness requires substantial infrastructure and long-term investment, which most countries currently lack. In practice, capacity building is driven primarily not by national governments (though there are exceptions) but by private companies, which are actively engaging emerging markets with a view toward strategic commercial expansion. While capacity building is essential, it is neither advancing fast enough nor coordinated at the global level. Several American firms have already begun building partnerships with countries to address this gap, and with a more comprehensive AI governance policy expected from the Trump administration in July, there may soon be official policy to support these efforts.
AI governance can be understood along two dimensions. The first is governance of the capability to develop AI: who has access to chips, compute infrastructure, talent, and data. The second is governance of how companies, countries, and private actors use AI. The key issues on the capability side are the growing trade, political, and access barriers being raised by leading powers. The key issue on the usage side is identifying common challenges and mechanisms of implementation that span borders and different institutional frameworks. Global risks associated with AI, from biosecurity to nuclear command systems, have prompted some shared declarations, such as the agreement between Presidents Biden and Xi last year that AI should not be used to launch nuclear weapons. However, such agreements quickly become complicated in practice. It is relatively easy to list shared concerns, but far more difficult to act on them in a coordinated and credible way. The primary challenge is not necessarily the technology or the risks themselves, but how different countries and different actors can communicate, think about, and engage with those risks together.
The concept of data and digital sovereignty makes sense on multiple levels. At the hardware level, countries want to ensure that their AI systems cannot be disabled or controlled by external actors. This is one reason data sovereignty is appealing: it implies that data and infrastructure should be under the control of domestic political authorities, neither accessible to nor subject to intervention by foreign governments. Beyond hardware and security, there are also cultural and political dimensions. Data security concerns long predate AI; if someone knows too much about you, they can use that information to manipulate or harm you. With AI, these concerns are amplified. When data is used to train models, those models inevitably absorb cultural biases and political assumptions. For this reason, it is understandable that governments and societies want AI systems to reflect their own values and operate in their own languages. On the other hand, demands for sovereignty over AI and data will exacerbate growing global fragmentation and the weakening of international institutions.
Looking forward, the trajectory of AI governance is unpredictable, at least in the short term. Technological progress will continue, perhaps even accelerate, but governance mechanisms will likely lag behind. As competition between major powers intensifies, many engagements in third countries may be viewed through a lens of national security rather than cooperation. The incentives for major countries to work together on a coherent set of governance approaches are not yet strong enough to induce real action. For individual countries, different circumstances and contexts will require different approaches to governance. In some cases, hard law can provide the clarity and stability that encourages investment. In other cases, especially in early-stage or experimental domains, flexible standards may be more appropriate. The key is not to pursue a one-size-fits-all solution, but to develop local regulatory capacity and adaptability.
It is possible for the U.S. and China to restart dialogue on AI governance. For now, however, both countries are placing more weight on competition. We could get to a point where both countries see dialogue as being in their interest, even in service of competitive ends, but we are not there yet. On capacity building, the U.S. and China could also work together to establish a framework that enables third countries to collaborate in developing AI capabilities. While such cooperation may be unlikely in practice, it remains a feasible and highly beneficial possibility for advancing global governance. There is no contradiction between capacity building and a leadership role: when the U.S. or another country helps other countries develop their own capacity, it is building the foundation of its own leadership as well.
Original link: https://mp.weixin.qq.com/s/7wCHuP6r8prsR9fSNkxSDg?scene=25&sessionid=#wechat_redirect