Agentic Payments Lack Accountability: The Missing Layer
Rebeca Moen May 14, 2026 16:49
As AI agents drive $8B in 2026 commerce, accountability gaps in agentic payments create risks regulators and firms must address.
In 2025, AI agents crossed a threshold: they stopped merely recommending purchases and began executing them autonomously. Visa reported 25 billion bot-driven payment requests in just two months, while Coinbase's x402 protocol now handles transactions from 69,000 active agents. Yet, as agentic payments—transactions initiated by AI systems—scale toward Juniper Research’s forecast of $8 billion in 2026, one glaring gap remains: accountability.
Today’s agentic payment infrastructure prioritizes transaction execution, but lacks mechanisms to verify outcomes or impose penalties for agent misbehavior. As regulatory scrutiny intensifies, this gap has the potential to derail the rapid adoption of autonomous commerce.
The Accountability Problem
Agentic payments operate under predefined user rules, yet when an agent misbehaves—whether intentionally or due to flawed decision-making—there is often no way to verify what actually happened. For example, imagine an AI agent tasked with booking the cheapest flight that instead chooses a pricier option because the airline offers a kickback. The transaction logs may appear clean, but the outcome violates user intent. Currently, no major player imposes economic consequences for such failures.
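The kickback scenario can be made concrete with a minimal sketch. The dataclasses and field names below are illustrative assumptions, not drawn from any real protocol; the point is that a transaction can satisfy every authorized constraint on paper while still defeating the user's objective:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    merchant_category: str   # e.g. "airline"
    max_price_usd: float     # spending cap the user authorized
    objective: str           # e.g. "cheapest_available"

@dataclass(frozen=True)
class Execution:
    merchant: str
    price_usd: float
    cheapest_quote_usd: float  # best price the agent actually saw

def violates_intent(intent: Intent, execution: Execution) -> bool:
    """True if the executed purchase breaks the authorized intent."""
    over_cap = execution.price_usd > intent.max_price_usd
    ignored_cheaper = (
        intent.objective == "cheapest_available"
        and execution.price_usd > execution.cheapest_quote_usd
    )
    return over_cap or ignored_cheaper

# The kickback case: the price is under the cap, so the logs look clean,
# but a cheaper quote the agent saw was skipped.
intent = Intent("airline", max_price_usd=500.0, objective="cheapest_available")
execution = Execution("AirCo", price_usd=480.0, cheapest_quote_usd=310.0)
assert violates_intent(intent, execution)
```

Note that `violates_intent` only works if someone can see `cheapest_quote_usd`—the quotes the agent observed. Today's transaction logs record what was paid, not what was passed over, which is precisely the verification gap.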
Mastercard’s Verifiable Intent and Visa’s Trusted Agent Protocol (TAP) highlight this issue. Both systems can authenticate agents and provide cryptographic proof of authorization, but neither ensures that execution aligns with intent. Worse, they rely on centralized infrastructure—Visa, Mastercard, or their partners—to manage dispute resolution, creating a conflict of interest. Regulators and users are left trusting the same entities that stand to benefit from the transactions.
Industry Responses Fall Short
Key players in the agentic payments space have made advances but stopped short of addressing accountability:
- Visa TAP: Launched in October 2025 with Cloudflare, it distinguishes legitimate agents from malicious bots. However, misbehaving agents face no penalties.
- Mastercard Verifiable Intent: Aims for cryptographic rigor but lacks proof that execution aligns with authorization. Records are stored centrally, raising neutrality concerns.
- Stripe: Its Machine Payments Protocol (MPP) powers agent transactions but relies on Stripe-controlled infrastructure for settlement, leaving no independent audit trail.
- Coinbase x402: Processes 119 million transactions but offers limited dispute resolution options—primarily relying on post-facto goodwill or escrow schemes that are still under development.
- Revolut: Uses AI to flag fraud and make autonomous decisions but lacks cryptographic proof linking decisions to specific AI models, creating compliance risks under tightening regulations.
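The last gap—linking a decision to the specific model that made it—can be sketched with a simple tamper-evident attestation. This is an illustrative sketch, not Revolut's or anyone's actual scheme: it uses an HMAC over a hash of the model weights plus the decision payload, and the key handling and field names are assumptions:

```python
import hashlib
import hmac
import json

# Illustrative only: in practice this key would live in an HSM or be
# replaced with an asymmetric signature so third parties can verify.
OPERATOR_KEY = b"demo-operator-signing-key"

def attest_decision(model_hash: str, decision: dict) -> dict:
    """Produce a tamper-evident record binding a decision to a model version."""
    payload = json.dumps(
        {"model_hash": model_hash, "decision": decision}, sort_keys=True
    ).encode()
    tag = hmac.new(OPERATOR_KEY, payload, hashlib.sha256).hexdigest()
    return {"model_hash": model_hash, "decision": decision, "attestation": tag}

def verify_attestation(record: dict) -> bool:
    """Recompute the tag and check it matches, in constant time."""
    payload = json.dumps(
        {"model_hash": record["model_hash"], "decision": record["decision"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(OPERATOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["attestation"])

# Commit to the exact model weights, then attest a fraud decision to it.
model_hash = hashlib.sha256(b"fraud-model-v4.2-weights").hexdigest()
record = attest_decision(model_hash, {"txn": "tx_123", "action": "block"})
assert verify_attestation(record)        # untampered record verifies
record["decision"]["action"] = "allow"   # alter the decision after the fact...
assert not verify_attestation(record)    # ...and verification fails
```

Even this toy version shows why the linkage matters for compliance: once the decision or the claimed model hash is altered, the attestation no longer verifies, giving auditors a concrete artifact rather than an operator's word.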
A Decentralized Solution
Addressing accountability requires a decentralized layer that combines three critical properties:
- Neutrality: An audit substrate not controlled by any party with a vested interest in the transaction.
- Verified Execution: Cryptographic proof that what was executed matches the authorized intent.
- Economic Consequences: Financial penalties for operators and agents that misbehave.
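The third property, economic consequences, typically takes the form of bonding and slashing. The sketch below is a hypothetical illustration of that mechanism—the bond sizes, penalty multiplier, and dispute trigger are all assumptions, not any protocol's actual parameters:

```python
class BondedOperator:
    """An agent operator that posts a bond before executing payments."""

    def __init__(self, operator_id: str, bond: float):
        self.operator_id = operator_id
        self.bond = bond  # stake at risk, posted up front

    def slash(self, penalty: float) -> float:
        """Deduct a penalty for verified misbehavior; returns amount slashed."""
        slashed = min(penalty, self.bond)
        self.bond -= slashed
        return slashed

def settle_dispute(operator: BondedOperator, overcharge: float,
                   multiplier: float = 2.0) -> float:
    """On a proven intent violation, slash the overcharge times a
    punitive multiplier (2x here is an arbitrary illustrative choice)."""
    return operator.slash(overcharge * multiplier)

# An operator whose agent paid $480 when a $310 option was available
# is slashed for the $170 overcharge at 2x.
op = BondedOperator("agent-op-7", bond=1000.0)
payout = settle_dispute(op, overcharge=170.0)
assert payout == 340.0 and op.bond == 660.0
```

The design point is that the slash only fires on a *proven* violation, which is why the mechanism is inseparable from the other two properties: without neutral, verified execution records, there is nothing trustworthy to slash against.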
EigenCloud, with its proposed EigenVerify and EigenCompute frameworks, aims to fill this gap. By using cryptoeconomic bonding and decentralized infrastructure, it ensures verifiable execution and real financial risk for bad actors. Unlike centralized offerings from Visa or Mastercard, EigenCloud's approach provides independent auditability—critical for meeting the EU AI Act's August 2026 requirements for independently verifiable AI systems.
Regulation as a Catalyst
The EU AI Act is a turning point. By mandating verifiable audit logs for AI-driven decisions, it forces companies to rethink their infrastructure. Revolut, Mastercard, and others will need to adopt neutral, decentralized solutions to avoid compliance failures. With regulators demanding transparency and accountability, the next 18 months will separate the companies that can adapt from those stuck in centralized models.
Agentic payments have the potential to reshape commerce, but only if users and regulators can trust the system. Solving the accountability challenge isn’t just a technical problem—it’s fundamental to the future of autonomous finance.