With the increasing use of agentic AI in everyday transactions, concerns over security have become paramount. The FIDO Alliance has launched two working groups to create industry standards that protect payments and other activities carried out by AI agents, aiming to prevent hijacking or rogue behavior.
The goal is to develop a robust baseline for user authorization mechanisms that can't be easily compromised by bad actors. Cryptographic tools will also ensure agents accurately carry out users' instructions while maintaining privacy. This initiative aims to establish foundational principles allowing more trusted interactions and promoting wider adoption of AI-powered tools.
Andrew Shikiar, CEO of the FIDO Alliance, notes that existing authentication models were not designed for this new paradigm, underscoring the need for a fresh approach. Contributions from Google's AP2 protocol and Mastercard's Verifiable Intent framework will serve as a starting point, but practical examples and use cases must still be developed to ensure real-world applicability.
Stavan Parikh of Google emphasizes the importance of providing cryptographic proof that transactions are authorized by users while maintaining privacy. He gives an example in which a user instructs an AI agent to purchase sneakers if they come back in stock within a certain price range; the agent can then prove the purchase matched the user's stated intent. This baseline protection is crucial for building trust in agentic AI.
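The mechanics behind Parikh's sneakers example can be sketched in a few lines: the user signs a "mandate" stating what the agent may buy and at what price, and the merchant verifies both the signature and the constraints before completing the sale. This is a simplified illustration, not the actual AP2 or Verifiable Intent design; an HMAC over a shared demo key stands in for the asymmetric signatures a real protocol would use, and all names (`sign_mandate`, `agent_purchase`, the mandate fields) are hypothetical.

```python
import hashlib
import hmac
import json

# Demo key only; a real protocol would use a per-user private key
# and public-key signatures rather than a shared HMAC secret.
USER_KEY = b"demo-user-secret"

def sign_mandate(mandate: dict, key: bytes) -> str:
    # Canonicalize the mandate so signing and verification agree byte-for-byte.
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_mandate(mandate: dict, signature: str, key: bytes) -> bool:
    return hmac.compare_digest(sign_mandate(mandate, key), signature)

def agent_purchase(mandate: dict, signature: str, offer_price: float) -> bool:
    # Merchant side: accept only if the signature checks out AND the
    # offer satisfies the user's stated constraints.
    if not verify_mandate(mandate, signature, USER_KEY):
        return False
    return offer_price <= mandate["max_price"]

# User authorizes: "buy these sneakers if they're at or under $120."
mandate = {"item": "sneakers-sku-123", "max_price": 120.00, "user": "alice"}
sig = sign_mandate(mandate, USER_KEY)

print(agent_purchase(mandate, sig, 99.95))    # within budget -> True
print(agent_purchase(mandate, sig, 150.00))   # over budget -> False

# A hijacked agent that inflates its own budget fails verification,
# because the tampered mandate no longer matches the user's signature.
tampered = dict(mandate, max_price=200.00)
print(agent_purchase(tampered, sig, 150.00))  # tampered -> False
```

The key property is that the agent cannot quietly exceed its authorization: any change to the mandate invalidates the signature, so the merchant can trust the constraints came from the user.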
The urgency behind this effort is underscored by the rapid pace of development in agentic AI, with Pablo Fourez, Mastercard's chief digital officer, stressing that the industry must put these safeguards in place quickly to support consumers and merchants and protect them from potential exploitation.