Serious Legal and Ethical Gaps Amid Expanding AI Use

The recent advancement of artificial intelligence (AI) technology has been remarkably rapid, with its influence steadily expanding across daily life, the economy, and public services. The adoption of AI enhances work efficiency, enables personalized service delivery, and drives innovation in sectors such as healthcare, education, and finance. However, such technological progress does not always yield positive outcomes, and the existing legal and ethical gaps in particular pose significant challenges for future AI utilization and trust-building.

According to a report jointly published by the Thomson Reuters Foundation and UNESCO, despite a global surge in AI use, approximately 87% of companies have not established clear AI governance policies or are failing to adhere to them. Even more concerning is the fact that only 27% of corporate AI strategies are actually linked to a governance framework. This implies that the majority of companies are operating AI technology without even basic governance elements such as records of AI system deployment, clear designation of responsible parties, or systematic impact assessment procedures.

An analysis by Bloomberg Law clearly points out the specific risks arising from this situation. As AI becomes deeply integrated into business processes, legal and ethical issues are rapidly increasing, including privacy violations, data inaccuracies, intellectual property infringement, and algorithmic bias. An analysis by Old National Bank likewise emphasizes that legal officers and Chief Financial Officers (CFOs) must recognize the importance of data governance and risk management and actively intervene. In the financial sector especially, the transparency and accountability of AI-driven decisions are directly linked to customer trust, making executive-level governance essential.
Companies operating AI technology without transparency and accountability are causing numerous problems, such as data bias, privacy breaches, and ethical conflicts. For instance, there have been international reports of medical AI systems producing unfavorable diagnostic results for specific groups because of biased data, and of recruitment algorithms generating discriminatory outcomes based on race and gender. Such failures can severely undermine the positive effects of the technology and offer an important lesson, especially for Korean society, which is in the early stages of AI adoption.

Korea is no exception. While AI technology is expanding into educational environments, transportation systems, and even public administration services, the legal frameworks and ethical standards to support it still have a long way to go. In line with global trends, Korean companies are rapidly adopting AI, but the establishment of corresponding governance frameworks is lagging. Appropriate use of individual technologies matters in itself, but governance must also be in place to ensure that the technology does not create ethical problems.

The core point of the Thomson Reuters Foundation and UNESCO report is clear: companies are focused solely on accelerating AI innovation while significantly neglecting the accompanying ethical and social responsibilities and risk management. Without records and procedures detailing who is responsible for AI system deployment, what impact assessments have been conducted, and how data is managed, it becomes difficult even to assign accountability when problems arise. This ultimately erodes public trust in companies and, in the long term, can hinder the social acceptance of AI technology as a whole.

Internationally, there are active movements to recognize and proactively address these issues. A prime example is the European Union's (EU) 'AI Act.'
The EU introduced the world's first comprehensive AI regulatory framework, establishing a system that provides usage guidelines based on a technology's risk level. This reflects an effort to protect citizens' rights, enhance transparency, and ultimately build trust through strict regulation of the development and use of high-risk AI applications. In particular, the EU's approach adopts risk-based regulation, requiring rigorous pre-assessment and continuous monitoring for AI systems that could significantly affect human safety and fundamental rights.

Some U.S. states are also making notable attempts at AI governance. California, for instance, has established guidelines for automated decision-making systems and passed legislation to ensure AI does not infringe upon fundamental human rights. These moves serve as reference cases for countries seeking to formulate policies based on large-scale data and AI analysis. Conversely, concerns exist that excessive regulation could stifle innovation, making it essential to strike an appropriate balance.

Lessons from Global Cases and Korea's Challenges

According to Bloomberg Law's analysis, effective AI governa