Agentic AI
AI systems that act autonomously to make decisions, carry out tasks, and interact with tools or environments on behalf of users. Building trust becomes the central challenge.
What Is an AI Agent?
Think of agents as an AI travel agent or a virtual employee with memory, decision-making ability, and tool access. They act with autonomy, much like handing your teenager the car keys: they may do what you ask… or not.
AI agents can now access tools such as payment systems or internal work platforms, with serious implications for security and control.
Key Characteristics
Persistent identity and context awareness
Autonomous choices within defined parameters
Integration with external systems and services
Operation with minimal human supervision
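The four characteristics above can be sketched as a minimal agent object: a stable identifier, memory that persists across actions, and autonomy bounded by owner-defined parameters. All names here are illustrative assumptions, not an actual agent framework API.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Illustrative sketch: identity, context, bounded autonomy, tool access."""
    owner: str
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))  # persistent identity
    memory: list = field(default_factory=list)                        # context awareness
    allowed_tools: set = field(default_factory=set)                   # integration, scoped
    spend_limit: float = 0.0                                          # defined parameters

    def act(self, tool: str, cost: float = 0.0) -> str:
        """Autonomous choice, but only within the owner's parameters."""
        if tool not in self.allowed_tools:
            raise PermissionError(f"{tool!r} not delegated to this agent")
        if cost > self.spend_limit:
            raise PermissionError("action exceeds spend limit")
        self.memory.append((tool, cost))   # context persists across actions
        return f"agent {self.agent_id[:8]} executed {tool}"
```

The point of the sketch is that "minimal human supervision" still presupposes explicit, machine-checkable limits set by the human.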
Trust vs. Security: The Central Challenge
Security
Security is technical. It focuses on keeping data and systems safe from bad actors through safeguards and protective measures.
Trust
Trust is human. It involves alignment of values and behavior, extending beyond security into value alignment, accountability, and identity.
The AI Agent Trust Risk
Current Threat Landscape
New Risks
Technical Solutions for Trust
Trust Spanning Protocol (TSP)
TSP is a foundational protocol (like IP for the internet) aimed at embedding trust at the infrastructure level for AI agent interactions.
Key Properties:
Portability & Autonomy: Agents are independent actors
Accountability & Reputation: Long-term evaluation mechanisms
Privacy Protection: Especially metadata privacy
Format Interoperability: Works across identity systems
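To make these properties concrete, here is a toy envelope in the spirit of TSP: stable endpoint identifiers for portability, a signature that makes the sender accountable for the payload, and a minimal metadata surface. This is a sketch under stated assumptions, not the actual TSP wire format; real TSP uses public-key cryptography, whereas this stand-in uses a shared-key HMAC from the standard library.

```python
import hmac
import hashlib
import json

def sign_message(sender_id: str, receiver_id: str, payload: dict, key: bytes) -> dict:
    """Wrap a payload in a signed envelope tied to stable agent identifiers."""
    body = json.dumps(
        {"from": sender_id, "to": receiver_id, "payload": payload},
        sort_keys=True,
    )
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_message(envelope: dict, key: bytes) -> bool:
    """Accountability check: reject any envelope whose signature does not match."""
    expected = hmac.new(key, envelope["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])
```

Because the envelope carries only identifiers and a signature, intermediaries that route it learn nothing beyond what metadata privacy requires them to see.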
Content Credentials & C2PA
The Content Authenticity Initiative and the C2PA standard provide cryptographically verifiable provenance for digital content, signed with X.509 certificates.
How It Works:
Each digital object gets a Content Credential at creation
Every modification adds a cryptographically signed record
Provides verifiable ledger of all actions and changes
AI systems can identify themselves as content source
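The ledger idea above can be sketched as a hash chain: each modification record commits to the previous one, so any tampering with history is detectable. This is only an illustration of the provenance-chain concept; real C2PA manifests are X.509-signed structures with a defined binary format, and all names here are assumptions.

```python
import hashlib
import json

def add_record(ledger: list, actor: str, action: str, content_hash: str) -> list:
    """Append a provenance record chained to the previous record's hash."""
    prev = ledger[-1]["record_hash"] if ledger else "genesis"
    record = {"actor": actor, "action": action,
              "content_hash": content_hash, "prev": prev}
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)
    return ledger

def verify_ledger(ledger: list) -> bool:
    """Walk the chain; any edited or reordered record breaks a hash link."""
    prev = "genesis"
    for rec in ledger:
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        if body["prev"] != prev:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if expected != rec["record_hash"]:
            return False
        prev = rec["record_hash"]
    return True
```

Note how an AI system identifying itself as the content source is just another `actor` entry in the chain, carrying the same accountability as a human editor.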
Identity & Delegation Framework
First-Person Proof
Proving you're a real, unique person online is one of the oldest, toughest problems in digital identity, especially with AI agents.
Decentralized Trust Graphs
Using decentralized identifiers (DIDs) and verifiable credentials to build private, secure, and scalable social graphs.
Agent Delegation
Ensuring AI agents act on behalf of real people through verifiable delegation and accountability mechanisms.
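A minimal sketch of the delegation idea: the human principal signs a credential stating which actions an agent may take on their behalf, and a verifier checks both the proof and the granted scope before honoring the agent. Real systems would use DIDs and W3C verifiable credentials; the HMAC signature and all names here are stand-in assumptions.

```python
import hmac
import hashlib
import json

def issue_delegation(owner: str, agent_id: str, scopes: list, owner_key: bytes) -> dict:
    """The principal signs a claim granting an agent specific scopes."""
    claim = json.dumps(
        {"owner": owner, "agent": agent_id, "scopes": scopes},
        sort_keys=True,
    )
    proof = hmac.new(owner_key, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "proof": proof}

def agent_may(credential: dict, agent_id: str, scope: str, owner_key: bytes) -> bool:
    """Accountability check: valid proof, correct agent, scope actually granted."""
    expected = hmac.new(
        owner_key, credential["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["proof"]):
        return False
    claim = json.loads(credential["claim"])
    return claim["agent"] == agent_id and scope in claim["scopes"]
```

The key design point is that authority flows from the person outward: an agent holding no credential, or a credential for a different scope, simply cannot act.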
Monitoring & Control Frameworks
Essential Components
Monitoring Layers
Track agent behavior and decision-making processes in real time
Control Components
Validate AI decisions before actions are executed
Adversarial Agents
Critic agents that supervise or challenge AI behavior
Policy Checks
Corporate governance-style systems before action execution
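The four components above compose naturally into a single gate in front of action execution: every proposed action is logged (monitoring), checked against policy rules (control and policy checks), and challenged by critic functions (adversarial agents) before it is allowed to run. A minimal sketch, with illustrative names:

```python
from dataclasses import dataclass, field

@dataclass
class ActionGate:
    """Illustrative gate combining monitoring, policy checks, and critics."""
    policies: list = field(default_factory=list)   # each: action -> bool (allow)
    critics: list = field(default_factory=list)    # each: action -> bool (object)
    audit_log: list = field(default_factory=list)  # monitoring layer

    def submit(self, action: dict) -> bool:
        self.audit_log.append(action)                       # track behavior in real time
        if not all(check(action) for check in self.policies):
            return False                                    # blocked by governance policy
        if any(critic(action) for critic in self.critics):
            return False                                    # vetoed by an adversarial critic
        return True                                         # validated; safe to execute
```

Example: a spending policy plus a critic that objects to destructive tools.

```python
gate = ActionGate(
    policies=[lambda a: a.get("amount", 0) <= 100],
    critics=[lambda a: a.get("tool") == "delete_all"],
)
gate.submit({"tool": "pay", "amount": 20})    # allowed
gate.submit({"tool": "pay", "amount": 500})   # blocked by policy
gate.submit({"tool": "delete_all"})           # vetoed by critic
```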
ITU AI Agent Security Standards
The International Telecommunication Union is developing X.S-AIA (Security Requirements and Guidelines for Artificial Intelligence Agents) to formalize global AI agent security.
AI Agent Stages:
Each stage is mapped to its specific functions, threat sources, and security requirements.
Core Principles for Trustworthy AI Agents
You — Not "We" — Should Have Control
Clear individual control over your agents is essential. Users must retain authority over agent actions and decisions.
Holistic Ecosystem Approach
Trustworthy AI agents demand secure architectures, privacy-preserving identity systems, governance models, and human control mechanisms.
Value Alignment is Critical
Trust frameworks must go beyond security to include value alignment, control, memory, identity, and accountability.
Agents Must Act in Alignment
All AI agents must act in alignment with those they represent, with clear delegation and accountability mechanisms.
Shape the Future of AI Agents
Help build trustworthy, accountable, and secure AI agents that act in alignment with human values and maintain clear chains of responsibility.