AI, transcending the boundaries of science and technology, presents both gifts and risks. In September 2025, the scientific community witnessed a historic event: the creation of a 'functional virus' from DNA designed by artificial intelligence (AI). The achievement was hailed as opening new horizons in biotechnology, from vaccine development to gene therapy, but it also sparked intense controversy over the undeniable potential for malicious use. Tensions between technological innovation and security threats are escalating, particularly as scientists and security experts warn of serious 'gaps' in current measures to prevent the misuse of such viruses.

AI has already permeated every aspect of our lives. But how should we respond when the technology expands into areas beyond human control? This research delivered a profound shock to the scientific community by demonstrating that AI can now 'design life' rather than merely analyze data. When an AI system can independently design genetic sequences and produce viruses that perform specific functions, machines are effectively rewriting the fundamental blueprints of living organisms. While some anticipate the positive changes such advances will bring, concern about the inherent risks is growing.

The convergence of AI and biotechnology opens possibilities that were unimaginable in the past. It could, for example, lay the technical foundation for rapidly analyzing a specific pathogen and developing a customized vaccine. Traditional vaccine development requires lengthy processes such as culturing attenuated forms of a pathogen or extracting specific proteins. If AI can instead analyze a virus's structure and design optimal antigens, that process could shrink from months to weeks, or even days, increasing the potential to head off deadly epidemics and save lives worldwide.

Another promise lies in the treatment of rare diseases. Gene editing that was difficult or impossible with existing technologies could be performed far more precisely and efficiently with AI, which can simulate millions of gene combinations to identify the treatments that are most effective with the fewest side effects.

A darker shadow looms behind this innovation, however. The AI used in this research proved capable of designing new viruses with specific functions, raising the possibility of catastrophic consequences if that capability is exploited. The term 'biological weapon' is no longer confined to science fiction: the prospect of pathogens far more lethal and transmissible than existing viruses has made ethical discussion urgent. AI could design viruses with characteristics not found in nature, entirely new threats that existing immune defenses and treatments cannot counter.

Countries around the world have established legal regulations and ethical guidelines for virus research. International treaties such as the Biological Weapons Convention (BWC) are in place, and nations strictly control research on dangerous pathogens. Yet experts warn that current surveillance and regulation have critical weaknesses, and that rapidly evolving AI technology could render existing systems ineffective. In particular, it is doubtful whether today's biosecurity frameworks are prepared for viruses designed by AI, which could possess entirely different characteristics from anything encountered before.
Traditional regulations also operate from lists of specific pathogens, whereas AI can create entirely new pathogens that appear on no list.

The most concerning aspect is the growing accessibility of the technology. In the past, advanced biological research required large-scale laboratories, expensive equipment, and decades of expertise. AI tools have drastically lowered those barriers to entry. The proliferation of open-source models and cloud-based computing resources makes it possible for individuals or small groups with relatively limited resources to pursue dangerous research, which makes national-level surveillance and control significantly harder.

**Are the Gaps in Misuse Prevention Measures Truly Closed?**

What would happen if work like this were indiscriminately replicated and disseminated by malicious individuals or organizations? Experts emphasize that designing and producing harmful viruses could become far easier with AI than it has ever been, and that the growing likelihood of clandestine attempts further reduces the opportunities for detection and interception. These are new threats that traditional research-security systems cannot handle: AI-based virus design can proceed largely through computer simulation, without a physical laboratory, and so bypasses existing physical security frameworks.