The problems caused by a lack of authorization management for AI agents

In today's corporate environments, artificial intelligence (AI) agents have become a key technology for improving efficiency. Cases are surging in which automated solutions that log into Customer Relationship Management (CRM) systems, retrieve data from databases, and even handle email tasks save significant time and human resources. However, the widespread use of these AI agents poses a major threat to enterprise security. Experts warn that the core issue is how to manage and track the identity and authorization of AI agents.

Deloitte's recently published 'State of AI' report reveals that, in a survey of 3,235 global business leaders, 73% cited data privacy and security as the biggest risks of AI adoption. Yet only 21% of companies have established a mature governance model for autonomous agents to address these risks. The 'authorization problem' raised by the autonomy of AI agents, in particular, remains at an early stage of discussion, and the findings are sobering.

Alex Stamos, Chief Product Officer at Corridor, stated, "Many companies, when deploying AI agents, do not clearly define their authorization and access levels, which significantly increases the likelihood of security incidents." He also warned about the threat of 'Shadow AI': personal or unofficial AI solutions used without the company's official knowledge. The Deloitte report found that such Shadow AI adds approximately $670,000 to the average cost of a data breach, with a particularly devastating impact on small and medium-sized enterprises (SMEs) whose security practices are still maturing. Many organizations make the mistake of deploying AI agents with broad access but without proper authorization management, which can lead to unexpected security incidents.
One of the primary security risks associated with AI agents is poor credential management. Among the most common risky behaviors today is developers pasting credentials directly into prompts. Stamos said Corridor detects such actions and guides developers toward proper secrets management. Nancy Wang, CTO of 1Password, explained, "We respond by scanning code as it's written and stopping plaintext credentials before they persist," emphasizing the need for secrets management that systematically detects and manages these issues. Experts warn that such practices hand critical vulnerabilities to attackers, and they stress that AI agents should not be granted unlimited API keys that amount to 'the keys to the kingdom.' Speaking during the VB AI Impact Salon series, Wang explained that "it's not just about which organization an agent belongs to, but what authority it acts with," concluding that this ultimately comes down to 'authorization' and 'access.' Agent permissions should be limited by time and task, and in an enterprise environment it must be possible to trace clearly which agent acted, with what authority, and using which credentials.

Shadow AI and authorization issues threatening enterprise security

One proposed solution to AI agent authorization problems is the adoption of open standards, such as OpenID Connect (OIDC) extensions. Open standards help unify authentication across platforms and systems, enabling AI agent permissions to be clearly limited and tracked. Stamos emphasized, "Open standards are more likely to solve broader problems and strengthen inter-company collaboration than proprietary solutions." The implication is that an integrated, standardized approach is needed rather than a patchwork of proprietary solutions. In conclusion, the security issue of AI agents ultimately comes down to identity and authorization management.
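The kind of scanning described above, catching plaintext credentials before they reach a prompt or a repository, can be sketched with a few regular expressions. The patterns and function names below are illustrative only; they are not Corridor's or 1Password's actual implementation, and production scanners use far larger, carefully tuned rule sets.

```python
import re

# Illustrative patterns for common credential formats (assumed for this sketch).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|secret|token)\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{16,}"
    ),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of credential patterns found in `text`."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

prompt = "Query the CRM with api_key = 'sk_live_abcdef1234567890abcd'"
findings = scan_for_secrets(prompt)
if findings:
    # In a real pipeline this would block the prompt and route the developer
    # to the secrets manager instead of letting the raw key persist.
    print(f"Blocked: possible {', '.join(findings)} in prompt")
```

A real deployment would run such checks in editor plugins and pre-commit hooks, so the credential is intercepted at write time rather than discovered after it has leaked.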
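The principle that agent permissions should be limited by time and task, with a traceable record of which agent acted under which authority, can be illustrated with a minimal signed-token sketch. This is not an actual OIDC extension; the claim names, signing scheme, and audit format are assumptions made for the example, and a real system would use a standards-based token format and a key held in a secrets manager or KMS.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-signing-key"  # illustrative only; never hard-code real keys

def issue_agent_token(agent_id: str, task: str, ttl_seconds: int) -> str:
    """Mint a signed credential limited to one task and a short lifetime."""
    claims = {"sub": agent_id, "task": task, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def authorize(token: str, requested_task: str) -> bool:
    """Allow the action only if signature, expiry, and task scope all check out."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if time.time() > claims["exp"]:
        return False  # time-limited: the credential expires by construction
    # Audit trail: which agent acted, with what authority, on which task.
    print(f"audit: agent={claims['sub']} scope={claims['task']} requested={requested_task}")
    return claims["task"] == requested_task

token = issue_agent_token("crm-sync-agent", task="crm:read", ttl_seconds=300)
print(authorize(token, "crm:read"))   # in-scope task → True
print(authorize(token, "db:delete"))  # out-of-scope task → False
```

The design choice worth noting is that the limits live inside the credential itself: even if the token leaks, it expires quickly and only authorizes one task, and every check emits an audit line tying the agent to the authority it used.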
Many organizations are approaching security for this new technology the way they handled cloud security in its early days, relying on individual point tools rather than an integrated security layer. The fact that 91% of organizations currently use AI agents, yet only 10% have a strategy for non-human identity management, starkly illustrates the severity of the problem.

This issue is by no means a distant concern for Korean companies. While large corporations have already adopted AI agents at scale, small and medium-sized enterprises (SMEs) and startups often struggle to build rigorous security systems due to resource limitations. As the global statistics show, although the proportion of companies using AI tools is high, very few actually have a non-human identity management strategy, which raises the potential for fatal consequences in the event of a security incident. So what direction should Korean companies take amid these global trends? Experts say the first urgent step is for the government and private companies to collaborate on guidelines and recommended standards for AI agent usage. Specifically,