Agentic AI can run with least-privileged access and still violate security policy.
Security context defines who can do what with a given set of data. Permissions to create, change, or delete data across enterprise systems are granted according to the business requirements of specific roles. When the humans filling those roles use software, that software must operate within a security context of its own, and that context is implemented through accounts. Software usually executes under either a user account or a service account. Service accounts exist to give software permission to manipulate corporate data independent of any specific user’s security context.
Assigning least-privileged access to corporate resources is straightforward for users because it follows their roles, but services, while still constrained, are configured with most-privileged access so they can perform every action a process requires. For example, a user may have write access only to the records corresponding to their assigned customer accounts and read-only access to all other customer accounts. A service account, on the other hand, needs write access to all accounts.
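To make the contrast concrete, here is a minimal sketch of that split. The policy table, role names, and account IDs are hypothetical stand-ins for whatever an IAM system actually stores; the point is only that the human principal is scoped to assigned accounts while the service account is not.

```python
# Hypothetical policy: a human user scoped to assigned accounts vs. a
# service account with process-wide access. All names are illustrative.
POLICY = {
    "sales_rep_1": {"write": {"ACME", "GLOBEX"}, "read": "ALL"},
    "svc-crm":     {"write": "ALL",              "read": "ALL"},
}

def is_authorized(principal: str, action: str, account: str) -> bool:
    """Return True if the principal's scope covers the action on the account."""
    scope = POLICY.get(principal, {}).get(action, set())
    return scope == "ALL" or account in scope

print(is_authorized("sales_rep_1", "write", "INITECH"))  # False: not an assigned account
print(is_authorized("sales_rep_1", "read",  "INITECH"))  # True: read-only everywhere
print(is_authorized("svc-crm",     "write", "INITECH"))  # True: service account writes anywhere
```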
The privilege gap inside process automation.
When using agentic AI, user activity and corporate business process activity mingle. Corporate business processes are designed to deliver results in aggregate (i.e., for all customers, all employees, all instances). To operate, service accounts commonly need higher permissions than most individual users, yet individual users or teams of users are responsible for initiating, monitoring, and overseeing those processes.
For example, an organization employs an agentic AI assistant to support onboarding new employees. Each new employee has a specific role and, based on that role, a business need to access corporate systems at a certain level. The agentic AI assistant runs in the context of the HR personnel doing the onboarding. So, what happens if the AI agent needs to provision a new user account with permissions that exceed those of one or more HR team members?
If the agent’s permissions are scoped to the process that needs to run rather than to what the person using the agent is individually authorized to do, security violations can occur: software controlled by the user exposes data the user is not permitted to see. If, on the other hand, the action is blocked because the user lacks the required permission, the task cannot be completed.
This scenario can be solved by branching the process: a sub-task is assigned to someone with the authority to grant permissions at the required level, and once approved, the process proceeds to the next step. This technique of segmenting actions by security context should also be applied to agentic solutions.
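A minimal sketch of that segmentation, assuming hypothetical names (ACTION_REQUIREMENTS, user_has_permission, the hr_analyst role): actions the initiating user is authorized to perform run in that user’s context, while anything requiring elevation is split into a sub-task that waits for an authorized approver.

```python
from dataclasses import dataclass

# Hypothetical mapping from agent actions to the permission each one requires.
ACTION_REQUIREMENTS = {
    "create_user_account": "iam.user.create",
    "assign_admin_role": "iam.role.assign_admin",
    "send_welcome_email": "mail.send",
}

@dataclass
class AgentTask:
    action: str
    initiated_by: str          # the human user driving the agent
    status: str = "pending"

def user_has_permission(user: str, permission: str) -> bool:
    """Hypothetical lookup against the corporate directory / IAM system."""
    user_permissions = {
        "hr_analyst": {"iam.user.create", "mail.send"},
        "hr_manager": {"iam.user.create", "iam.role.assign_admin", "mail.send"},
    }
    return permission in user_permissions.get(user, set())

def route_task(task: AgentTask) -> AgentTask:
    """Execute in the initiating user's context, or branch for approval."""
    required = ACTION_REQUIREMENTS[task.action]
    if user_has_permission(task.initiated_by, required):
        task.status = "executed_as_user"    # stays within the user's own permissions
    else:
        task.status = "awaiting_approval"   # sub-task queued for an authorized approver
    return task

if __name__ == "__main__":
    for action in ("create_user_account", "assign_admin_role"):
        task = route_task(AgentTask(action=action, initiated_by="hr_analyst"))
        print(task.action, "->", task.status)
```

Run against the onboarding example, create_user_account goes straight through in the HR analyst’s context, while assign_admin_role lands in an approval queue, mirroring the branch-and-approve pattern described above.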
Employing agentic AI may increase the likelihood that software used to initiate a process in a user’s context gains elevated privileges when it switches to the context of an agent running under a service account. When that happens, the agentic system must operate so that the elevated privilege and data access do not violate security policy.
[Figure: A process flow map showing security context switching during the steps of a workflow, with privilege and sensitivity shifts mapped at each data access point.]
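One way to keep that context switch from violating policy is to let the agent read broadly under the service account but filter everything against the initiating user’s entitlements before it is returned. The sketch below assumes hypothetical entitlement and CRM lookups; it illustrates the guard at the switch point, not a reference implementation.

```python
SERVICE_ACCOUNT = "svc-agent"   # most-privileged: can read every customer record

# Hypothetical entitlements: which customer accounts each user may see.
USER_ENTITLEMENTS = {
    "sales_rep_1": {"ACME", "GLOBEX"},
    "sales_rep_2": {"INITECH"},
}

def fetch_all_records_as_service_account() -> list[dict]:
    """Stand-in for a CRM query executed under the service account."""
    return [
        {"account": "ACME", "revenue": 120_000},
        {"account": "GLOBEX", "revenue": 75_000},
        {"account": "INITECH", "revenue": 54_000},
    ]

def records_for_user(initiating_user: str) -> list[dict]:
    """Elevated read, but output is scoped to what the user is cleared to see."""
    allowed = USER_ENTITLEMENTS.get(initiating_user, set())
    all_records = fetch_all_records_as_service_account()
    return [r for r in all_records if r["account"] in allowed]

if __name__ == "__main__":
    print(records_for_user("sales_rep_1"))   # only ACME and GLOBEX are returned
```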
Designing AI agents with the proper security context starts with answering one question: is the process being automated a personal assistant deployed to one or more users, or a generalized business process that runs at scale on behalf of the business?
If it is personal, focus the process map on verifying that every action and data access requirement complies with existing corporate security policy for the targeted users. If exceptions are found, separate the actions requiring elevated privilege and confirm that each access and action is mapped to individuals who hold the required level.
If the process is generalized for the business, examine the service account design to determine whether most-privileged access creates security gaps like the one in the first example above, and double-check that sensitive data is not exposed to users who lack the requisite privileges.
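That review can also be done mechanically at design time by walking the process map and flagging steps that exceed the targeted user’s permissions or that return sensitive data from a service-account context. The step names, permission strings, and sensitivity flag below are hypothetical placeholders for whatever your workflow and IAM tooling expose.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    required_permission: str
    runs_as: str                          # "user" or "service_account"
    returns_sensitive_data: bool = False

def review_process(steps: list[Step], user_permissions: set[str]) -> list[str]:
    """Flag steps that could violate policy for the targeted user."""
    findings = []
    for step in steps:
        if step.runs_as == "user" and step.required_permission not in user_permissions:
            findings.append(f"{step.name}: exceeds the user's permissions; branch for approval")
        if step.runs_as == "service_account" and step.returns_sensitive_data:
            findings.append(f"{step.name}: confirm output is filtered to the user's entitlements")
    return findings

if __name__ == "__main__":
    onboarding = [
        Step("create_user_account", "iam.user.create", "user"),
        Step("assign_admin_role", "iam.role.assign_admin", "user"),
        Step("pull_salary_band", "hr.compensation.read", "service_account",
             returns_sensitive_data=True),
    ]
    for finding in review_process(onboarding, user_permissions={"iam.user.create"}):
        print(finding)
```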
Organizations should include security context in the design of both personal and corporate agents. By separating tasks that require elevated privilege and being deliberate about whether a user account or a service account executes each action, they reduce the risk of unintended security gaps.
For additional perspective on the tension between automation and security, read: Easing The Tension Between Data Security And Automation | F5