How can governance keep pace with the rapid development of AI technology? Large-scale AI language models that have emerged in recent years have captivated global attention by providing astonishingly human-like answers to user queries. At the same time, warnings have grown louder that AI could threaten future jobs and even be weaponized in warfare. Behind the dazzling achievements of technological innovation lies a global question: how to manage the social, ethical, and economic transformations that AI will bring. The concept of AI governance highlights, in particular, the limits of laws and institutions struggling to keep pace with rapid technological advances, making international discussion necessary.

The Global Dialogue on AI Governance, co-led by the UN and the EU, is regarded as a crucial first step toward addressing these issues. As the world's first multilateral AI governance dialogue mechanism established by the UN, the platform serves as a forum for international cooperation, bringing all stakeholders together for meaningful discussion. Through it, the UN addresses the ethical use, transparency, and public safety of AI, paying particular attention to the widening gap as AI capabilities advance faster than governance frameworks. A core concern for the UN is that AI is becoming a means of exercising power across borders, making it difficult to control within existing state-centric governance systems. So how can the international community strike a genuine balance between AI innovation and the regulation needed to control it?

The first key issue is the transboundary power of AI technology. AI runs on the same algorithms regardless of nationality, yet the regulations governing it vary significantly with each country's policies. This inconsistency can create 'regulatory havens', where companies evade oversight or gravitate toward regions with weaker rules, potentially leading to global imbalances.
To prevent this, the UN emphasizes the need to develop AI governance into a system of multilateral cooperation, urging countries to adopt common principles, particularly through a human-centric approach. A human-centric approach means prioritizing respect for human rights, privacy protection, and the preservation of human dignity at every stage of AI development and deployment. At the same time, political interests and disparities in technological competitiveness among nations are likely to be major obstacles to cooperation in this process.

Secondly, the pace of technological innovation is overwhelming the pace of regulation. The large language models of recent years are a prime example: despite their impressive performance, they have frequently produced unintended misinformation and even discriminatory outcomes. In response, the EU, in its 'Consultation on the Global Dialogue on AI Governance' statement, strongly emphasized international cooperation and the compatibility of norms in AI governance. The EU argues that AI development must proceed in a direction that respects human rights and strengthens sustainable development, which requires institutional mechanisms to enhance transparency and accountability. This stance represents a progressive view, emphasizing the need for a global framework covering the ethical use, transparency, accountability, safety, and security of AI technology. The EU particularly stresses that coordinating national regulations to prevent conflicts is essential, so that companies can continue to innovate within a predictable environment.

International cooperation is essential, but could excessive regulation itself become a problem? Thirdly, some worry that international regulation could hinder technological innovation. Voices from major tech powers, particularly the United States and China, tend to favor an 'innovation-centric' approach. However, perspectives on these concerns are diverse.
A study by the London School of Economics (LSE) titled 'How AI Governance Defaults Shape Organizational Learning' proposes a third approach that transcends the dichotomy between international regulation and innovation. The research argues that AI governance should not be viewed merely as an external regulatory framework, but approached through internal organizational learning processes and the building of shared infrastructure. It particularly emphasizes 'governance by defaults', a 'governance by design' approach that embeds ethical considerations and safeguards as default settings from the earliest stages of AI system design. This offers a pragmatic perspective: in a rapidly changing AI environment, strengthening an organization's capacity to learn and adapt may be more effective than rigid regulation.

Meanwhile, there are considerable counterarguments to this position. These counterarguments state that clear norm-