国际观点荟萃 | 人工智能治理的现状与前景访谈(一)

作者:杨昭 发布时间:2025-10-17 来源:全球人工智能创新治理中心

访谈介绍


日前,全球人工智能创新治理中心发布《全球人工智能治理新趋势:以“上海宣言”为起点的观察》中英双语版本报告。为了更全面地理解当前全球人工智能治理的复杂局面,聆听更加多元化、跨领域、跨国家的声音,项目组采访了来自全球十余个国家人工智能治理领域的权威专家和相关从业者,受访者不仅来自经济和技术发达的“全球北方”国家,也有来自新兴经济体与发展中国家的“全球南方”代表。我们将陆续节选报告中的访谈内容,呈现来自全球不同地区、不同领域的专家如何看待人工智能治理的现状与前景,以飨读者。


访谈摘要


自2022年ChatGPT发布以来,AI发展迅速。欧盟《AI法案》、联合国教科文组织的准备度评估方法以及经合组织AI原则等文件共同塑造了AI全球治理框架。当前全球AI治理面临的关键挑战是人才短缺与数字鸿沟,以及各国对AI红利的竞争,全球南方国家尤其受制于基础设施与人才留存不足。未来需要在创新、风险缓解和包容性之间求取平衡,并依托国际协作构建可互操作、基于证据的治理框架。


访谈对象介绍



法赫米·伊斯拉米(Fahmi Islami)

印度尼西亚曼达拉(Mandala)咨询首席顾问



凯瑟琳·萨利姆(Catherine Salim)

曼达拉(Mandala)咨询助理顾问



阿利菲安·阿拉齐(Alifian Arrazi)

曼达拉(Mandala)咨询助理顾问


访谈整理


杨昭  全球人工智能创新治理中心研究助理


访谈正文


以2022年ChatGPT发布为标志的人工智能(AI)的快速发展,加速了全球范围内对AI监管的努力。过去一年,全球AI治理格局显著演变,出现了关键政策框架、地区差异以及诸多紧迫挑战,尤其对全球南方国家而言。


有三份关键的全球政策文件塑造了AI治理话语。


欧盟AI法案采用基于风险的方法,对发生故障的AI系统的开发者和部署者予以制裁。其“布鲁塞尔效应”影响了全球的政策制定者,印度尼西亚便引用这一监管逻辑来分析各部门的特定风险。联合国教科文组织的准备度评估方法提供了一个标准化框架,用于量化AI发展需求,指导各国政府优先考虑安全与社会准备领域;印度尼西亚的政策界已利用该方法来构建其国家AI战略。经合组织AI原则已成为定义“负责任AI”的基础,成为政策圈的通用语言,并为国家监管框架提供参考。2022年ChatGPT的发布引发了监管竞赛,各国争相应对大型语言模型和多功能AI的影响。印度尼西亚及其他全球南方国家采用了“基准对标”(Benchmark)方式,既借鉴国内或区域法规(例如欧盟AI法案和新加坡的治理模式),也借鉴国际伦理原则。此外,中国的《生成式人工智能服务管理暂行办法》等政策在亚洲产生了更为切实的影响;而联合国关于AI能力建设的决议和《上海宣言》则可作为下一步为政策制定者设计可操作性工具的基础。


当前,全球AI治理面临着两项关键挑战,尤其体现在全球南方和发展中国家的治理中。首先是人才短缺与数字鸿沟。以印度尼西亚为例,我们面临双重人才危机:一方面是人才外流,熟练的AI开发者被西方科技公司吸引,削弱了本地生态;另一方面是技能提升缺口,AI对传统岗位的冲击要求对劳动力进行再培训和技能提升,但印度尼西亚存在明显的数字鸿沟,爪哇岛与农村地区在互联网接入和数字素养方面不平衡,阻碍了广泛培训的开展。其次是各国正竞相争夺AI带来的经济红利。在东南亚,新加坡和马来西亚积极吸引数据中心和AI投资;同时,印度尼西亚也在顺应全球趋势,构建以AI为动力的经济框架。这场竞争将推动各国出台吸引AI相关产业的政策。然而,如果缺乏强大的基础设施或人才留存策略,全球南方国家可能难以竞争;对于缺乏明显竞争优势的国家而言,可以寻求定位自身的利基市场。


AI治理需要正视基于风险的评估体系的缺陷,并更多采用创新友好型法规。当前的AI风险分类在很大程度上依赖政治过程,而非实证证据。欧盟的高风险分类虽然被广泛采用,但缺乏全面的风险收益分析。印度尼西亚主张对特定AI应用实施有针对性的监管,例如中国的深度合成技术监管或新加坡的医疗AI指南,而不是采取“一刀切”的方法,并强调应基于数据的风险评估来平衡创新与安全。由于欧盟高风险模型的有效性正受到质疑,未来的治理可能转向基于证据和风险收益的框架,整合系统分析以确保分类的透明度和实证依据。印度尼西亚的实践体现了发展中国家从“软法”向“中度立法”转变的趋势。该国已经发布了国家AI战略、白皮书及包括金融科技在内的行业指南,依靠自愿性标准而非严格的强制性规定。预计未来国内法规将延续这种平衡方式,在优先考虑创新的同时有针对性地缓解风险。


全球AI治理也亟需加强协作。从主体来看,国际组织与区域协调在全球AI治理中发挥着重要作用。联合国,尤其是教科文组织,在规范制定方面发挥了关键作用。其提出的AI准备度方法正逐步融入印度尼西亚的监管框架,为该国的AI发展提供路线图。尽管联合国框架具有非约束性,但它们确立了各国可依据的共享标准。东盟等区域组织在规范协调方面日益重要。通过促进AI测试标准和最低规范的相互承认,东盟旨在减少跨境壁垒,使印度尼西亚等国家的AI创新能够在东南亚范围内推广。这种协调对于最大化AI的全球影响至关重要,因为分散的法规可能扼杀跨境创新。从行动来看,在设计全球AI治理协作行动时,可以采取两种途径。其一是资源共享与知识交流。各国应利用联合国和东盟等平台,汇聚AI研发的资金资源,分享非敏感数据集以支持技术创新,并建立AI应用案例清单,尤其是在医疗和金融科技领域,从而为政策决策提供依据,如新加坡的案例研究所示。其二是互操作性与标准化。制定全球AI测试与认证标准至关重要。统一的风险与安全度量指标可实现AI系统的跨境验证,而监管协调则可防止市场碎片化。这包括标准化技术协议、代码框架和合规程序。


展望未来,全球AI治理正处于十字路口,各国需要在创新、风险缓解和包容性发展之间取得平衡。对于全球南方国家而言,人才短缺和数字鸿沟等挑战需要量身定制的解决方案,而国际协作仍然是构建可互操作、基于证据的框架的关键。印度尼西亚等国在应对这一格局时,应着重于软法、区域协调和实证风险评估,以塑造既优先考虑技术进步又关注社会福祉的治理模式。


Introduction


Recently, the Center for Global AI Innovative Governance released the bilingual version of the report New Trends in Global Artificial Intelligence Governance: Observations from the Shanghai Declaration. To gain a more comprehensive understanding of the current complex situation of global artificial intelligence governance and to listen to more diverse, cross-disciplinary, and cross-national voices, the project team interviewed authoritative experts and practitioners in the field of artificial intelligence governance from more than ten countries around the world. The interviewees came not only from economically and technologically developed Global North countries, but also included Global South representatives from emerging economies and developing countries.


We will publish excerpts of the interviews from the report in installments, showing how experts from different regions and fields around the world view the current situation and prospects of artificial intelligence governance, for the benefit of readers.


Abstract


Since the release of ChatGPT in 2022, AI has developed rapidly. Documents such as the EU's AI Act, UNESCO's readiness assessment methodology, and the OECD AI Principles jointly shape the global governance framework for AI. The key challenges currently facing global AI governance are talent shortages and digital divides, as well as competition among countries for AI dividends; countries in the Global South are particularly constrained by insufficient infrastructure and weak talent retention. Going forward, it is necessary to strike a balance between innovation, risk mitigation, and inclusiveness, and to rely on international collaboration to build an interoperable, evidence-based governance framework.


Interviewee Profile



Fahmi Islami

Lead Consultant at Mandala Consulting



Catherine Salim

Associate Consultant at Mandala Consulting



Alifian Arrazi

Associate Consultant at Mandala Consulting


Interviewer


Yang Zhao, Research Assistant at the Center for Global AI Innovative Governance


Interview


The rapid advancement of artificial intelligence (AI), exemplified by the release of ChatGPT in 2022, has accelerated global efforts to regulate AI. Over the past year, the landscape of global AI governance has evolved significantly, marked by the emergence of key policy frameworks, regional disparities, and pressing challenges, particularly for countries in the Global South. 


Three key global policy documents have shaped the AI governance discourse.


The EU's Artificial Intelligence Act (AIA) employs a risk-based approach, imposing sanctions on developers and deployers for AI system failures. Its Brussels effect has influenced policymakers worldwide, with Indonesia referencing its regulatory logic to analyze sector-specific risks. UNESCO's Readiness Assessment Methodology provides a standardized framework for quantifying AI development needs, guiding governments to prioritize areas of safety and societal preparedness; Indonesia's policy community has used this methodology to structure its national AI strategy. The OECD's AI Principles have become foundational in defining responsible AI, serving as a lingua franca in policy circles and a reference for national regulatory frameworks. The release of ChatGPT in 2022 triggered a regulatory rush, with countries scrambling to address the impacts of large language models and multi-purpose AI. Indonesia and other Global South countries have adopted a benchmarking approach, drawing on both domestic or regional regulations, such as the EU AIA and Singapore's governance model, and international ethical principles. In addition, policies like China's Interim Measures for the Management of Generative Artificial Intelligence Services (《生成式人工智能服务管理暂行办法》) have had a more tangible impact in Asia, while the UN resolution on AI capacity building and the Shanghai Declaration could serve as the basis for designing actionable tools for policymakers as a next step.


Global AI governance now faces two critical challenges, particularly in the Global South and developing countries. The first is talent shortages and the digital divide. Taking Indonesia as an example, we face a dual talent crisis. On the one hand, brain drain: skilled AI developers are lured away by Western tech companies, depleting local ecosystems. On the other hand, up-skilling gaps: AI's disruption of traditional jobs demands workforce re-skilling and up-skilling, but Indonesia's stark digital divide, with unequal internet access and digital literacy between Java and rural areas, hinders widespread training. The second is that countries are racing to capture AI's economic benefits. In Southeast Asia, Singapore and Malaysia are actively attracting data centers and AI investments. At the same time, Indonesia is aligning with the global trend of building an AI-enabled economic framework. This competition will drive policies to court AI-related industries, though Global South countries may struggle to compete without robust infrastructure or talent retention strategies. Those lacking a clear competitive edge could instead seek to define their own niche.


To make AI governance function well, we need to address the defects of risk-based assessment systems and design innovation-friendly regulations. Current AI risk classification relies heavily on political processes rather than empirical evidence. The EU's high-risk categorization, while widely adopted, lacks comprehensive risk-benefit analyses. Indonesia argues for targeted regulation of specific AI applications, for example deepfakes, as seen in China, or medical AI, as seen in Singapore, rather than one-size-fits-all approaches, emphasizing the need for data-driven risk evaluations to balance innovation and safety. Since the validity of the EU's high-risk model is under scrutiny, future governance may pivot toward evidence-based, risk-benefit frameworks, integrating systematic analyses to ensure that classifications are transparent and empirically grounded. Indonesia's approach exemplifies a trend among developing countries from soft law to moderate law. It has issued a national AI strategy, white papers, and sectoral guidelines, including for fintech, that rely on voluntary standards rather than strict mandates. Future domestic regulations are expected to continue this balanced approach, prioritizing innovation while mitigating targeted risks.


At the same time, we need to intensify cooperation in global AI governance. At the level of actors, international organizations and regional coordination play an important role. The UN, particularly UNESCO, has played a pivotal role in norm-setting. Its AI readiness methodology is gradually being integrated into Indonesia's regulatory framework, providing a roadmap for national AI development. While UN frameworks are non-binding, they establish shared standards that countries can adapt. Regional bodies like ASEAN are increasingly vital for harmonizing regulations. By promoting mutual recognition of AI testing standards and minimum norms, ASEAN aims to reduce cross-border barriers, enabling innovations such as Indonesian AI solutions to scale across Southeast Asia. This coordination is essential for maximizing AI's global impact, as fragmented regulations could stifle cross-border innovation. At the level of action, there are two approaches to designing collaborative actions for global AI governance. The first is resource sharing and knowledge exchange. Countries should leverage platforms like the UN and ASEAN to pool financial resources for AI research and development, share non-sensitive datasets to support technical innovation, and create inventories of AI use cases, especially in healthcare and fintech, to inform policy decisions, as seen in Singapore's case studies. The second is interoperability and standardization. Establishing global standards for AI testing and certification is crucial. Harmonized metrics for risk and safety would enable cross-border validation of AI systems, while regulatory coordination would prevent market fragmentation. This includes standardizing technical protocols, code frameworks, and compliance procedures.


Looking forward, global AI governance is at a crossroads, requiring nations to balance innovation, risk mitigation, and inclusive development. For the Global South, challenges like talent shortages and digital divides demand tailored solutions, while international collaboration remains essential for creating interoperable, evidence-based frameworks. As Indonesia and others navigate this landscape, they should focus on soft law, regional coordination, and empirical risk assessment, in order to shape a governance model that prioritizes both technological progress and societal well-being.