10th November 2025
Hilton London Canary Wharf
FPS Summit

AI-driven identity risks create new fraud and security challenges for merchants

The rapid adoption of AI agents across enterprise systems is creating a new layer of identity risk for organisations, raising fresh concerns for merchant fraud and cyber resilience leaders.

New global research from identity security specialist Semperis suggests businesses are integrating AI into critical systems faster than they are implementing the controls needed to secure them, potentially exposing payment, authentication and customer identity infrastructure to new forms of attack.

The study, based on responses from 1,100 organisations across eight countries, found that 74% believe AI will increase attacks on identity infrastructure such as Active Directory, Entra ID and Okta.

For fraud prevention leaders, the findings highlight the growing role of non-human identities (NHIs) within digital operations. AI agents are increasingly being granted access to sensitive systems and workflows, including password resets, VPN access and automated security tasks.

According to the research, 93% of organisations already use—or plan to use—AI agents for sensitive security-related functions, while nearly a third (29%) are already deploying AI to manage security helpdesk requests.

At the same time, many organisations appear underprepared for the associated risks. While 92% report AI is installed on systems with access to SSH or encryption keys, only 32% say they are very confident they could regain control if AI exposed administrative credentials.

For merchants and payments providers, identity systems remain a critical attack vector. Compromise at the identity layer can enable account takeover, payment fraud and broader operational disruption, particularly as AI systems gain greater autonomy across customer-facing and internal environments.

The research also points to governance gaps. Only 65% of organisations say AI identities are fully registered and managed within formal authentication systems, while 6% admit they do not track AI identities at all.

Industry experts warn that traditional identity controls may no longer be sufficient as AI agents proliferate across enterprise ecosystems.

Recommended best practices include applying least-privilege access controls to AI agents, separating human and AI trust boundaries, monitoring anomalous agent behaviour and ensuring organisations can rapidly recover compromised identity systems.
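The first of these practices, least-privilege access for AI agents, can be illustrated with a minimal sketch. The names below (`AgentIdentity`, `check_access`, `helpdesk-bot-01`) are hypothetical and not drawn from any product mentioned above; the point is simply a deny-by-default model in which each non-human identity carries an explicitly enumerated scope:

```python
# Hypothetical sketch of least-privilege scoping for a non-human identity.
# All names here are illustrative, not from any specific vendor or API.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class AgentIdentity:
    """A non-human identity with an explicitly enumerated action scope."""
    agent_id: str
    allowed_actions: frozenset = field(default_factory=frozenset)


def check_access(agent: AgentIdentity, action: str) -> bool:
    """Deny by default: the agent may only perform actions in its scope."""
    return action in agent.allowed_actions


# A helpdesk agent scoped to password resets only; it cannot touch keys.
helpdesk_bot = AgentIdentity(
    agent_id="helpdesk-bot-01",
    allowed_actions=frozenset({"reset_password"}),
)

print(check_access(helpdesk_bot, "reset_password"))  # True
print(check_access(helpdesk_bot, "read_ssh_keys"))   # False
```

Because the scope is an allowlist rather than a blocklist, any action not explicitly granted, such as reading SSH or encryption keys, is refused, which is the property the recommendations above aim for.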

The findings reinforce the importance of treating AI governance and identity resilience as central components of fraud prevention strategy (not simply IT or compliance concerns) as AI-driven attack surfaces continue to expand.

Photo by Hermann Wittekopf – kmkb on Unsplash