Agentic AI

AI systems that act autonomously to make decisions, carry out tasks, and interact with tools or environments on behalf of users. Building trust becomes the central challenge.

What Is an AI Agent?

Think of an agent as an AI travel agent or a virtual employee with memory, decision-making ability, and tool access. Granting it autonomy is like handing your teenager the car keys: it may do what you ask… or not.

AI agents can now access tools like payment systems or internal work platforms, raising serious implications for security and control.

Key Characteristics

Memory:

Persistent identity and context awareness

Decision-Making:

Autonomous choices within defined parameters

Tool Access:

Integration with external systems and services

Limited Oversight:

Operates with minimal human supervision

Trust vs. Security: The Central Challenge

Security

Security is technical. It focuses on keeping data and systems safe from bad actors through protective safeguards.

Data protection and encryption
Access controls and authentication
Threat detection and prevention

Trust

Trust is human. It extends beyond security into value alignment, accountability, and identity, asking whether an agent's behavior matches the intent and values of the person it represents.

Intent and responsibility alignment
Value and behavior consistency
Expectations and accountability

The AI Agent Trust Risk

Current Threat Landscape

Global cybercrime costs projected to reach $10.5 trillion annually
AI-generated phishing can match or outperform expert human phishing teams
96% of surveyed IT leaders view AI agents as a security risk
Over 80% of organizations report AI agents taking unauthorized actions

New Risks

Misinformation and identity spoofing
Unauthorized actions and decisions
Multi-agent systems create opaque networks
Traditional trust models are crumbling

Technical Solutions for Trust

Trust Spanning Protocol (TSP)

TSP is a foundational protocol (like IP for the internet) aimed at embedding trust at the infrastructure level for AI agent interactions.

Key Properties:

Portability & Autonomy: Agents are independent actors

Accountability & Reputation: Long-term evaluation mechanisms

Privacy Protection: Especially metadata privacy

Format Interoperability: Works across identity systems
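The properties above can be illustrated with a minimal sketch of a signed message envelope between two agents. This is not the actual TSP wire format; the DIDs are hypothetical, and an HMAC with a shared key stands in for the asymmetric signatures a real trust-spanning protocol would use.

```python
import hashlib
import hmac
import json

def sign_envelope(sender_id: str, receiver_id: str, payload: dict,
                  sender_key: bytes) -> dict:
    """Wrap a message in a signed envelope so the receiver can verify
    who sent it and detect tampering in transit."""
    body = {"from": sender_id, "to": receiver_id, "payload": payload}
    canonical = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(sender_key, canonical, hashlib.sha256).hexdigest()
    return body

def verify_envelope(envelope: dict, sender_key: bytes) -> bool:
    """Recompute the signature over everything except the 'sig' field."""
    body = {k: v for k, v in envelope.items() if k != "sig"}
    canonical = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(sender_key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

key = b"demo-key"
msg = sign_envelope("did:ex:alice-agent", "did:ex:travel-agent",
                    {"action": "quote_flight", "route": "SFO-NRT"}, key)
assert verify_envelope(msg, key)      # authentic envelope verifies
msg["payload"]["route"] = "SFO-LHR"   # any tampering breaks verification
assert not verify_envelope(msg, key)
```

Because each agent holds its own keys, envelopes stay verifiable across identity systems and transports, which is what makes the protocol "spanning."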

Content Credentials & C2PA

The Content Authenticity Initiative and the C2PA standard provide cryptographically verifiable provenance for digital content, with manifests signed using X.509 certificates.

How It Works:

Each digital object gets a Content Credential at creation

Every modification adds a cryptographically signed record

Provides verifiable ledger of all actions and changes

AI systems can identify themselves as content source
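The steps above amount to a hash-linked chain of signed records. Below is a minimal sketch of that idea, assuming nothing about the real C2PA manifest format: plain SHA-256 hashes stand in for X.509-signed manifests, and the record fields are illustrative.

```python
import hashlib
import json

def _digest(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def create_credential(content: bytes, creator: str) -> list:
    """Start a provenance chain: the first record binds the content hash
    to its creator (a stand-in for a signed C2PA manifest)."""
    record = {"action": "created", "by": creator,
              "content_hash": hashlib.sha256(content).hexdigest(),
              "prev": None}
    record["id"] = _digest(record)
    return [record]

def record_edit(chain: list, new_content: bytes, editor: str) -> list:
    """Append a record for each modification, linked to the previous
    record so the full edit history stays verifiable."""
    record = {"action": "edited", "by": editor,
              "content_hash": hashlib.sha256(new_content).hexdigest(),
              "prev": chain[-1]["id"]}
    record["id"] = _digest(record)
    return chain + [record]

def verify_chain(chain: list) -> bool:
    """Check every record's digest and every link to its predecessor."""
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "id"}
        if _digest(body) != rec["id"]:
            return False
        if i > 0 and rec["prev"] != chain[i - 1]["id"]:
            return False
    return True

chain = create_credential(b"original image bytes", "did:ex:camera")
chain = record_edit(chain, b"cropped image bytes", "did:ex:ai-editor")
assert verify_chain(chain)
```

Note how an AI editor identifies itself in the `by` field, which is how AI systems can declare themselves as a content source.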

Identity & Delegation Framework

First-Person Proof

Proving you're a real, unique person online is one of the oldest, toughest problems in digital identity, especially with AI agents.

Decentralized identity systems
Personal credentials verification
Privacy-preserving proofs

Decentralized Trust Graphs

Using decentralized identifiers (DIDs) and verifiable credentials to build private, secure, and scalable social graphs.

First-Person Graph in wallets
Peer-to-peer trust relationships
No centralized servers required
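A sketch of how such a graph might behave, assuming nothing beyond the text: each DID holds its own local attestations (as it would in a wallet), and trust can be checked over short peer-to-peer chains. All names and the hop limit are illustrative.

```python
from collections import defaultdict

class TrustGraph:
    """A minimal peer-to-peer trust graph keyed by DIDs. Each edge is a
    local attestation held by the attester; no central server is needed
    to store the whole graph."""

    def __init__(self):
        self.edges = defaultdict(set)  # did -> set of directly trusted dids

    def attest(self, holder: str, peer: str):
        """Record that `holder` directly trusts `peer`."""
        self.edges[holder].add(peer)

    def trusts(self, holder: str, peer: str, max_hops: int = 2) -> bool:
        """True if holder reaches peer within max_hops trust edges."""
        frontier, seen = {holder}, {holder}
        for _ in range(max_hops):
            frontier = {n for d in frontier for n in self.edges[d]} - seen
            if peer in frontier:
                return True
            seen |= frontier
        return False

g = TrustGraph()
g.attest("did:ex:alice", "did:ex:bob")
g.attest("did:ex:bob", "did:ex:carol")
assert g.trusts("did:ex:alice", "did:ex:carol")        # reachable in two hops
assert not g.trusts("did:ex:alice", "did:ex:mallory")  # no trust path
```

In a real deployment each edge would be a verifiable credential rather than a bare set entry, so peers can prove attestations without a central registry.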

Agent Delegation

Ensuring AI agents act on behalf of real people through verifiable delegation and accountability mechanisms.

First-Person Agents concept
Anchored in verified identity
Human control and oversight
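Verifiable delegation can be sketched as a credential the human principal signs over the agent's identifier and an explicit scope. The DIDs and action names are hypothetical, and an HMAC stands in for a real verifiable-credential signature.

```python
import hashlib
import hmac
import json

def delegate(principal_did: str, agent_did: str, scope: list,
             principal_key: bytes) -> dict:
    """Issue a delegation credential: the principal signs the agent's
    identifier plus an explicit list of permitted actions."""
    claim = {"principal": principal_did, "agent": agent_did, "scope": scope}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["proof"] = hmac.new(principal_key, payload, hashlib.sha256).hexdigest()
    return claim

def authorize(credential: dict, action: str, principal_key: bytes) -> bool:
    """Allow an action only if the credential verifies AND the action
    falls inside the delegated scope, so accountability traces back
    to the human principal."""
    body = {k: v for k, v in credential.items() if k != "proof"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(principal_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["proof"])
            and action in credential["scope"])

key = b"alice-secret"
cred = delegate("did:ex:alice", "did:ex:alice-agent",
                ["book_flight", "check_calendar"], key)
assert authorize(cred, "book_flight", key)
assert not authorize(cred, "transfer_funds", key)  # outside delegated scope
```

Anchoring the credential in the principal's verified identity is what makes the agent a "first-person" agent rather than an anonymous actor.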

Monitoring & Control Frameworks

Essential Components

Monitoring Layers

Track agent behavior and decision-making processes in real-time

Control Components

Validate AI decisions before actions are executed

Adversarial Agents

Critic agents that supervise or challenge AI behavior

Policy Checks

Corporate governance-style systems before action execution
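The control components above can be combined into a simple pre-execution gate: every proposed action passes through a set of policy checks, and any violation blocks execution and is surfaced for review. The policy names and thresholds below are illustrative assumptions, not part of any standard.

```python
def policy_gate(action: dict, policies: list) -> tuple:
    """Run every policy check before an agent action executes; return
    whether the action is allowed and which policies it violated."""
    violations = [name for name, check in policies if not check(action)]
    return (len(violations) == 0, violations)

# Illustrative corporate-governance-style policies (all thresholds invented)
POLICIES = [
    ("spend_limit",   lambda a: a.get("amount", 0) <= 500),
    ("approved_tool", lambda a: a.get("tool") in {"calendar", "payments"}),
    ("human_in_loop", lambda a: a.get("amount", 0) < 100
                                or a.get("approved", False)),
]

allowed, why = policy_gate({"tool": "payments", "amount": 50}, POLICIES)
assert allowed

allowed, why = policy_gate({"tool": "payments", "amount": 900}, POLICIES)
assert not allowed and "spend_limit" in why   # blocked before execution
```

A critic or adversarial agent fits the same interface: it is simply another check in the list, one backed by a model that challenges the proposed action rather than a fixed rule.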

ITU AI Agent Security Standards

The International Telecommunication Union is developing X.S-AIA (Security Requirements and Guidelines for Artificial Intelligence Agents) to formalize global AI agent security.

AI Agent Stages:

Perception & Cognition
Planning & Decision Making
Memory & Context Management
Action & Execution

Each stage has specific functions, threat sources, and security requirements mapped.

Core Principles for Trustworthy AI Agents

You — Not "We" — Should Have Control

Clear individual control over your agents is essential. Users must retain authority over agent actions and decisions.

Holistic Ecosystem Approach

Trustworthy AI agents demand secure architectures, privacy-preserving identity systems, governance models, and human control mechanisms.

Value Alignment is Critical

Trust frameworks must go beyond security to include value alignment, control, memory, identity, and accountability.

Agents Must Act in Alignment

All AI agents must act in alignment with those they represent, with clear delegation and accountability mechanisms.

Shape the Future of AI Agents

Help build trustworthy, accountable, and secure AI agents that act in alignment with human values and maintain clear chains of responsibility.