The European Union (EU) has been pursuing some of the world's most stringent and comprehensive regulations for Artificial Intelligence (AI). A new standard recently published by the European Telecommunications Standards Institute (ETSI) is closely linked to the EU AI Act and is designed to support regulatory compliance throughout the entire lifecycle of dynamic AI systems. This article presents the new compliance framework for global AI companies, including those in Korea, looking to enter the European market, and highlights the opportunities and challenges it poses for Korean AI businesses in particular.

ETSI recently unveiled a technical standard designed to help implement the complex regulatory requirements of the EU AI Act in real-world settings. The standard contains a framework and methodologies for ensuring that AI systems continuously meet regulatory requirements throughout their entire lifecycle, from development through deployment and operation. According to ETSI, the initiative aims to help AI companies reduce regulatory uncertainty and bring their AI products and services to market with confidence.

The new standard adopts a fundamentally different approach from existing one-off certification methods. AI systems are dynamic: their performance and behavior change with new training data and shifting operating environments. A medical diagnostic AI, for instance, may alter its diagnostic patterns as it learns from new patient data, and an autonomous driving system may change its decision-making in response to varying road conditions and traffic patterns. Because of these dynamic characteristics, complying with regulations at launch does not guarantee that compliance will be maintained. To address this, ETSI's framework includes mechanisms for real-time monitoring and evaluation of regulatory compliance, allowing for swift adjustment when necessary.
ETSI terms this 'AI-native standardization,' reflecting the goal of fostering ethical and trustworthy AI development without hindering AI innovation. ETSI also stated that it is actively promoting collaboration across the AI and data ecosystem, including by publishing new reports on security, privacy, trustworthiness, and sustainability for 6G integrated sensing and communication.

ETSI's continuous compliance standard for dynamic AI systems focuses particularly on realizing the core principles of the EU AI Act, which categorizes AI systems into four risk levels:

- **Unacceptable risk**: practices such as social scoring or real-time remote biometric identification for public surveillance are prohibited outright.
- **High risk**: AI affecting fundamental rights in critical infrastructure, education, employment, law enforcement, and healthcare must comply with strict obligations.
- **Limited risk**: systems such as chatbots must meet transparency requirements.
- **Minimal risk**: most AI falls into this category and is not subject to specific regulations.

High-risk AI systems face a range of obligations covering data governance, transparency, human oversight, accuracy, and security. Specifically, the quality and representativeness of training data must be ensured, the decision-making process of the AI system must be explainable, and mechanisms must be in place for human oversight and intervention in AI decisions when necessary. AI systems must also operate accurately for their intended purpose and be protected against cyberattacks and data corruption. While these requirements play a crucial role in steering AI technology in an ethical and socially responsible direction, they can also impose significant technical and financial burdens on businesses. ETSI's continuous compliance standard provides concrete methodologies for meeting these complex requirements throughout the entire lifecycle of an AI system.
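As an illustration of what such lifecycle compliance monitoring could look like in practice, the minimal sketch below checks reported performance metrics against minimum thresholds, records every observation in an audit trail, and suspends the system when a threshold is breached. All class names, metric names, and threshold values here are hypothetical; the ETSI standard does not prescribe any particular implementation.

```python
# Hypothetical sketch of a continuous-compliance monitor for a deployed AI
# system. Metric names, thresholds, and escalation actions are illustrative
# only and are not taken from the ETSI standard itself.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ComplianceMonitor:
    # Minimum acceptable value per monitored metric (illustrative).
    thresholds: dict[str, float]
    audit_trail: list[dict] = field(default_factory=list)
    suspended: bool = False

    def record(self, metrics: dict[str, float]) -> list[str]:
        """Log a metrics snapshot and return the names of violated thresholds."""
        violations = [name for name, minimum in self.thresholds.items()
                      if metrics.get(name, 0.0) < minimum]
        # Every observation is documented, supporting a persistent audit trail.
        self.audit_trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "metrics": metrics,
            "violations": violations,
        })
        if violations:
            # Escalation: suspend the system pending review or retraining.
            self.suspended = True
        return violations


monitor = ComplianceMonitor(thresholds={"accuracy": 0.95, "fairness_gap": 0.90})
monitor.record({"accuracy": 0.97, "fairness_gap": 0.93})  # within thresholds
monitor.record({"accuracy": 0.91, "fairness_gap": 0.93})  # accuracy breach
```

In a real deployment, the suspension flag would feed into operational tooling (alerting, rollback, retraining pipelines) rather than a simple boolean, but the pattern of measure, document, compare, and escalate is the same.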
For example, the standard includes mechanisms to monitor AI system performance metrics in real time, automatically trigger warnings when thresholds are breached, and, if necessary, temporarily suspend the system or initiate retraining. It also requires systematic documentation of training data, algorithm changes, and performance metrics, along with the maintenance of an audit trail.

**Analysis of Practical Impact on Korean AI Companies**

Korean companies need to pay close attention to this ETSI standard announcement. Through the AI Act, the EU now affects not only its internal market but also every global company supplying AI systems to the EU market, which can directly influence the export competitiveness of Korean AI technology. Korean AI startups and mid-sized companies wishing to enter the European market, for instance, must meet the specific requirements of the EU AI Act, which demands a strategic approach that builds regulatory compliance in from the earliest development stages. Given that numerous Korean companies, from large corporations like Samsung