Posts Tagged: ai

AI MONTH: A buyer's guide for AI-powered anti-fraud solutions

As fraudsters become increasingly sophisticated, senior anti-fraud professionals in the UK’s e-commerce and banking sectors must leverage advanced technologies to stay ahead. AI-powered solutions offer powerful capabilities for detecting and preventing fraud. Here are key considerations when selecting a provider, based on delegate priorities at the Fraud Protection Summit…

Understanding Your Organisation’s Needs

  • Identify Fraud Risks: Assess your organisation’s specific vulnerabilities and potential fraud types.
  • Regulatory Compliance: Ensure the solution complies with relevant regulations, such as GDPR.
  • Integration Capabilities: Evaluate the provider’s ability to integrate with your existing systems and data sources.

Key Considerations for Supplier Selection

  • Expertise and Experience: Look for providers with a proven track record in the field of AI-powered fraud detection.
  • Technology Stack: Assess the provider’s underlying technology and its ability to handle large datasets and complex algorithms.
  • Customisation: Ensure the solution can be tailored to your organisation’s specific needs and risk profile.
  • Scalability: Verify the provider’s ability to handle increasing volumes of data and adapt to evolving fraud tactics.
  • Customer Support: Evaluate the level of customer support and technical assistance provided.
  • Cost-Effectiveness: Compare pricing and value offered by different providers.

Common Mistakes to Avoid

  • Relying Solely on AI: While AI is powerful, it should be used in conjunction with other security measures.
  • Neglecting Data Quality: Ensure the data used to train AI models is accurate and representative.
  • Underestimating the Complexity: Implementing AI-powered fraud solutions can be complex and time-consuming.
  • Ignoring Ethical Considerations: Address ethical concerns related to data privacy and bias in AI algorithms.

Top Tips for Successful Implementation

  • Conduct Proof of Concepts: Test the solution with real-world data to assess its effectiveness.
  • Continuous Monitoring and Evaluation: Regularly review the solution’s performance and make necessary adjustments.
  • Stay Updated on Fraud Trends: Keep informed about emerging fraud tactics and ensure your solution can adapt.
  • Build a Strong Partnership: Establish open communication and collaboration with the provider.

By carefully selecting an AI-powered anti-fraud solution provider and following these guidelines, senior anti-fraud professionals can strengthen their organisation’s defences against sophisticated fraud threats.

Are you looking for AI-powered anti-fraud solutions for your organisation? The Fraud Protection Summit can help!

AI MONTH: Identifying the key anti-fraud use cases in your organisation

AI is revolutionising the fight against financial fraud, offering sophisticated solutions that can outsmart even the most determined fraudsters. Here are some of the key ways AI is being deployed by delegates at the Fraud Protection Summit…

  1. Real-time Transaction Monitoring: AI algorithms can analyse vast amounts of transaction data in real time, identifying suspicious patterns and flagging potentially fraudulent activity. This enables swift intervention and reduces financial losses.
  2. Behavioural Biometrics: By analysing user behaviour patterns, AI can detect anomalies that may indicate fraudulent activity. This includes factors like typing speed, mouse movements, and even voice patterns.
  3. Fraud Detection Models: AI models can learn from historical data to identify new fraud patterns and adapt to evolving tactics. This helps prevent fraudsters from exploiting vulnerabilities in traditional detection systems.
  4. Customer Onboarding and Verification: AI can automate customer onboarding processes, verifying identities and detecting potential fraud risks at the initial stages. This reduces the likelihood of fraudulent accounts being created.
  5. Synthetic Data Generation: AI can generate synthetic data to train fraud detection models without compromising customer privacy. This allows for continuous improvement of fraud prevention capabilities.
  6. Bot Detection: AI can effectively detect and block bots that are used to automate fraudulent activities, such as account creation or credential stuffing.
  7. Social Network Analysis: AI can analyse social media data to identify potential fraudsters based on their online behaviour and connections.
  8. Machine Learning for Anomaly Detection: Machine learning algorithms can identify unusual patterns in transaction data that may indicate fraudulent activity, even if the patterns are not explicitly defined (see the sketch after this list).
  9. Natural Language Processing (NLP): AI-powered NLP can analyse text data, such as emails or chat logs, to detect fraudulent communication patterns.
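
To make use case 8 concrete, here is a minimal sketch of unsupervised anomaly detection on synthetic transaction data, using scikit-learn’s IsolationForest; the features, fraud rates and thresholds are illustrative assumptions, not production settings.

```python
# A minimal anomaly-detection sketch: flag transactions that deviate from the
# bulk of the data, without any fraud labels. All values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic features per transaction: [amount, hour_of_day, merchant_risk_score]
normal = rng.normal(loc=[50, 14, 0.2], scale=[20, 4, 0.1], size=(1000, 3))
fraud = rng.normal(loc=[900, 3, 0.8], scale=[200, 2, 0.1], size=(10, 3))
transactions = np.vstack([normal, fraud])

# 'contamination' is the share of transactions we expect to be anomalous.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

# predict() returns -1 for anomalies (route to an analyst) and 1 for normal.
flags = model.predict(transactions)
print(f"Flagged {np.sum(flags == -1)} of {len(transactions)} transactions for review")
```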

AI is a powerful tool in the fight against financial fraud, offering real-time detection, adaptability, and the ability to handle vast amounts of data. As AI technology continues to evolve, we can expect even more sophisticated and effective solutions to emerge.

Are you looking for AI-powered anti-fraud solutions for your organisation? The Fraud Protection Summit can help!

If you specialise in AI for Fraud Prevention we want to hear from you!

Each month on Fraud Prevention Briefing we’re shining the spotlight on a different part of the market – and in September we’ll be focussing on AI for Fraud Prevention.

It’s all part of our ‘Recommended’ editorial feature, designed to help industry buyers find the best products and services available today.

So, if you specialise in AI for Fraud and would like to be included as part of this exciting new shop window, we’d love to hear from you – for more info, contact Jennie Lane on 01992 374 098 |  j.****@fo*********.uk .

Sep – AI for Fraud
Oct – Chargebacks
Nov – Biometrics for Fraud Prevention
Dec – Mobile Fraud Prevention
Jan – Digital Identity Verification
Feb – Fraud Prevention Solutions
Mar – Risk Prevention & Compliance
Apr – Financial Crime
May – Multi-factor Authentication
Jun – Digital Identity Verification
Jul – Fraud Detection Tools
Aug – Anti Fraud Platforms

The Holy Grail: Secure, seamless user authentication in payments

By Feedzai

Consumers seek a smooth, frictionless user authentication process. Merchants must ensure that online payment methods and transactions are safe from fraud. Feedzai, an AI fraud prevention platform for Acquirers, explains how businesses and merchants can deliver an online payment experience that achieves both.

Security – the right controls at the right time based on the transaction’s risk level – can remove unnecessary hurdles during checkout.

Extra security checks, such as two-factor authentication (2FA), are necessary when buying products from a new website or when the shipping address differs from the billing address on the payment method.

Active authentication methods such as 2FA (where the user must enter a username and password or a one-time code) often receive negative press. If a consumer forgets the password they set up for 3D Secure with their bank, or never receives the text message containing the code, they may be unable to complete their transaction. Passive methods reduce friction and improve convenience for consumers: they observe user behaviour on a device, such as a phone, to confirm the user’s identity.

In Europe, where Strong Customer Authentication (SCA) is mandatory – and where a joint EBA-ECB report found SCA effective in reducing card payment fraud – merchants or their acquirers can actively request Transaction Risk Analysis (TRA) exemptions to 2FA. These exemptions remove the need for 2FA on low-risk transactions, streamlining the process for such purchases.
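
To illustrate how such an exemption decision might be encoded, here is a minimal sketch assuming the PSD2 RTS thresholds for remote card payments (exemptions up to roughly €100, €250 or €500, depending on the acquirer’s reference fraud rate); the function and its inputs are hypothetical simplifications of a real TRA engine, which would also score each transaction’s individual risk.

```python
# A hedged sketch of a TRA exemption check, assuming the PSD2 RTS thresholds:
# exemptions permitted up to EUR 500 / 250 / 100 when the acquirer's reference
# fraud rate is within 1 / 6 / 13 basis points respectively. Illustrative only.

def tra_exemption_applies(amount_eur: float, acquirer_fraud_rate_bps: float) -> bool:
    """Return True if a low-risk transaction may skip 2FA via a TRA exemption."""
    if acquirer_fraud_rate_bps <= 1:
        limit = 500.0
    elif acquirer_fraud_rate_bps <= 6:
        limit = 250.0
    elif acquirer_fraud_rate_bps <= 13:
        limit = 100.0
    else:
        return False  # fraud rate too high: SCA (e.g. 2FA) must be applied
    return amount_eur <= limit

# A EUR 80 basket at a 5 bps fraud rate can skip 2FA; a EUR 400 basket cannot.
print(tra_exemption_applies(80.0, 5.0))   # True
print(tra_exemption_applies(400.0, 5.0))  # False -> step up to 2FA
```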

The latest UK Finance Fraud Report revealed that Remote Purchase Fraud (Card Not Present / CNP Fraud) has continued to fall since the UK rolled out SCA, with losses at their lowest level since 2014.  

Fraud is omnipresent, however. Remote Purchase Fraud still accounts for a significant share of fraud losses within the UK ecosystem, at £360 million. Fraud is also migrating to other channels, such as Card ID Theft, which increased 53% in the last year.

To be a useful fraud prevention measure, 2FA should be combined with other components to create an effective fraud strategy. The goal is the right balance: enhanced security and strong user authentication combined with a smoother consumer experience in online payments.

AI-enhanced malicious attacks and soft ransomware targets front of mind for risk execs

Concern about artificial intelligence (AI)-enhanced malicious attacks has again topped Gartner’s emerging risk rankings in the second quarter of 2024, while new concerns regarding soft ransomware targets are also coming to the forefront of enterprise risks.

“Similar to AI-enhanced malicious attacks, soft ransomware targets require minimal experience and cost to cause significant financial and reputational damage,” said Gamika Takkar, director, research in the Gartner Risk & Audit Practice.

During the second quarter of 2024, Gartner surveyed 274 senior risk executives and managers to document and compare emerging risks, which are those that hold higher uncertainty because their evolution is rapid, nonlinear, or both.

Three of the top five most cited emerging risks are in the technology category (see Table 1), and concern regarding soft ransomware targets enters the tracker for the first time. Escalating political polarization, which first entered the tracker in 4Q23, held steady as the third most cited concern, while misaligned organizational talent profile moved up from the fifth to fourth most cited risk.

Table 1: Top Five Most Commonly Cited Emerging Risks in Q2 2024 [table image not reproduced]

Source: Gartner (July 2024)

Causes of Soft Ransomware Targets

Soft ransomware targets include the types of systems that may be especially vulnerable to ransomware due to underinvestment or technical debt, leading to longer disruptions in business operations when attacks occur. The ease of carrying out such attacks, via what’s known as ransomware-as-a-service (RaaS), allows cybercriminals with even minimal experience and technical skill to deploy attacks at low cost.

“Ransomware-as-a-service lowers the barrier to entry for inexperienced cybercriminals who know just enough about how to attack and disrupt business operations, creating worse impacts than usual when attacks occur,” said Takkar.

Potential Consequences to Mitigate

The potential impacts of soft ransomware targets range from operational disruptions and delayed services, to increased exposure to multi-extortion (e.g., a ransom demand accompanied by threats of selling, publishing or permanently deleting data), to an increased financial burden in the form of direct and indirect costs. Direct costs include ransoms, remediation, litigation, and public relations, while indirect costs, such as reputational damage and loss of intellectual property, also create a burden on the organization.

“While operational disruption and increased costs are dire consequences of soft ransomware targets, the exposure to extortion can impact not just the organization itself, but any and all associated third parties as well, further underscoring the importance of understanding and preventing such risk,” said Takkar.

AI-powered malicious attacks are now a top emerging risk, says study

Concern about artificial intelligence (AI)-enhanced malicious attacks ascended to the top of the Gartner emerging risk rankings in the first quarter of 2024.

“The prospect of malicious actions enabled by AI-assisted tools is concerning risk leaders worldwide,” said Gamika Takkar, director, research in the Gartner Risk & Audit Practice. “The relative ease of use and quality of AI-assisted tools, such as voice and image generation, increase the ability to carry out malicious attacks with wide-ranging consequences.”

During the first quarter of this year, Gartner surveyed 345 senior enterprise risk executives to capture and benchmark their top 20 emerging risks and provide leaders a view of their causes and potential consequences.

Risks related to AI captured the top two rankings in the 1Q24 survey (see Table 1) with AI-enhanced malicious attacks cited as the top emerging risk and AI-assisted misinformation also causing concern. Escalating political polarization, which entered the tracker for the first time in 4Q23, dropped from the second most cited concern to third place.

Table 1: Top Five Most Commonly Cited Emerging Risks in Q1 2024 [table image not reproduced]

Source: Gartner (May 2024)

One of the key drivers of AI-enabled attacks and misinformation is rapidly expanding access to AI capabilities. AI enhancement can help generate malicious code and facilitate phishing and social engineering, enabling better intrusion, increased credibility, and more damaging attacks.

“Its low cost and rapid growth also expose users to the technology who have little awareness on how to recognize when AI-enabled tools are providing valid vs. false or misrepresented information,” said Takkar.

The potential impacts of AI-enhanced attacks and misinformation are far-reaching and consequential to reputation, productivity and the ability of organizations to respond. Increased breaches and disclosure requirements can erode trust in an organization and brand among clients, consumers and partners.

“The speed and quality of AI-enhanced attacks and misinformation also hinder information security teams’ ability to respond and adapt to the new security landscape, further amplifying its vulnerabilities,” said Takkar. Gartner clients can read more in the 1Q24 Emerging Risk Report. Nonclients can read: 1Q24 Emerging Risk Trends.

More AI knowledge required in consumer goods brands to avoid security blunders

Artificial Intelligence (AI) refers to software-based systems that use data inputs to make decisions on their own or that help users make decisions. Generative AI refers to AI creating content in any shape or format, from writing original text to designing new structures and products. These technologies have developed rapidly over the last 18 months and generated serious hype. However, the benefits and costs of AI applications are poorly understood. Consumer goods companies need to fail – and fail fast – in their AI initiatives to gain this understanding, according to new analysis.

GlobalData argues that consumer goods companies need to understand the technical, financial, and organisational requirements of any AI application to reliably assess the level of risk that application represents. Consumer goods companies need to consider how an AI should be trained to enable it to function cost-effectively. They also need to consider which delivery model is the most suitable from a data security and infrastructure cost point of view.

Rory Gopsill, Senior Consumer Analyst at GlobalData, said: “Industry professionals remain bullish about AI’s potential to disrupt numerous industries (including consumer goods). According to GlobalData’s Q1 2024 Tech Sentiment Poll, over 58% of respondents believe AI will significantly disrupt their industry. However, consumer goods companies should remember that the technology has limitations and risks. Chatbot failures caused Air Canada and DPD financial and reputational damage, respectively, in the first quarter of 2024. DeepMind’s own CEO warned against AI overhype in April 2024.”

In reality, adopting AI can pose very real financial and security risks. Training an AI can prove very expensive, especially if the task being automated is complex and requires an advanced AI. Furthermore, if an AI application requires training data that is commercially sensitive or confidential, a company may choose to train the AI in a private cloud environment rather than a less secure public cloud. Purchasing and maintaining the necessary IT infrastructure for this would be very expensive and organisationally demanding.

Gopsill continued: “Consumer goods companies need to be aware of these (and other) risks when choosing to develop AI applications. If they are not, their AI initiatives could fail with serious consequences. For example, sensitive data could be exposed, development costs could outweigh the application’s benefits, the quality of the AI application could be diminished, or the project could simply never get finished.

“Understanding these risks will enable consumer goods companies to fail early and safely and to learn from that failure. This will equip them with the knowledge to implement AI in a way that is safe and profitable. Fostering a culture of transparency around the risks of AI will help drive industry application and protect consumer goods companies and customers from the potential pitfalls of this evolving technology.”

Photo by freestocks on Unsplash

AI now the key tool for amateurs committing financial crime

Nearly 70% of the 600 fraud-management, anti-money laundering, and risk and compliance officials surveyed in BioCatch’s first-ever AI-focused fraud and financial crime report say criminals are more adept at using artificial intelligence to commit financial crime than banks are at using the technology to stop it. Equally concerning, around half of those same fraud-fighters report an increase in financial crime activity in the last year, and/or expect to see financial crime activity increase in 2024.

The report depicts a troubling and burgeoning trend in which criminals with minimal technical expertise or financial crime skillset are using this new technology to improve the quality, reach, and success of their digital-banking scams and financial crime schemes.

“Artificial intelligence can supercharge every scam on the planet,” BioCatch Director of Global Fraud Intelligence Tom Peacock said, “flawlessly localizing the language, slang, and proper nouns used and personalizing for every individual victim the scam type, images, audio, and/or video involved. AI gives us scams without borders and will require financial institutions to adopt new strategies and technologies to protect their customers.”

A staggering 91% of respondents report their organization is now rethinking the use of voice verification for big customers due to AI’s voice-cloning abilities. More than 70% of those surveyed say their company identified the use of synthetic identities while onboarding new clients last year. The Federal Reserve believes traditional fraud models fail to flag as many as 95% of synthetic identities used to apply for new accounts. It regards synthetic identity fraud as the fastest-growing type of financial crime in the U.S., costing companies billions of dollars every year.

“We can no longer trust our eyes and ears to verify digital identities,” BioCatch CMO Jonathan Daley said. “The AI era requires new senses for authentication. Our customers have proven behavioural intent signals are those new senses, allowing financial institutions to sniff out deepfakes and voice-clones in real time to keep people’s hard-earned money safe.”

Other Key Survey Findings:

  • AI (Already) an Expensive Threat: More than half of the organizations represented in the survey say they lost between $5 million and $25 million to AI-powered attacks in 2023.
  • Financial Institutions Also Using AI: Nearly three-quarters of those surveyed say their employer used AI to detect fraud and/or financial crime, while 87% say AI has increased the speed with which their organization responds to potential threats.
  • We Need to Talk: More than 40% of respondents say their company handled fraud and financial crime in separate departments that did not collaborate. Nearly 90% of those surveyed say financial institutions and government authorities need to share more information to combat fraud and financial crime.
  • AI to Help with Intelligence-Sharing: Nearly every respondent says they anticipate leveraging AI in the next 12 months to promote information-sharing about high-risk individuals across different banks.

“Today’s fraudsters are organized and savvy,” BioCatch CEO Gadi Mazor said. “They collaborate and share information instantly. Fraud fighters – including technology-solution providers like us, along with banks, regulators, and law enforcement – must do the same if we expect to reverse the growing fraud numbers across the globe. We believe our recent partnership with The Knoble will advance this discussion and remove the perceived barriers to better, more meaningful collaboration and fraud-prevention.”

New guidelines for Secure AI System Development unveiled

The UK has published the first global guidelines to ensure the secure development of AI technology as part of an initiative encompassing agencies from 17 other countries that have confirmed they will endorse and co-seal the new guidelines.

The guidelines aim to raise the cyber security levels of artificial intelligence and help ensure that it is designed, developed, and deployed securely.

The Guidelines for Secure AI System Development have been developed by the UK’s National Cyber Security Centre (NCSC), a part of GCHQ, and the US’s Cybersecurity and Infrastructure Security Agency (CISA) in cooperation with industry experts and 21 other international agencies and ministries from across the world – including those from all members of the G7 group of nations and from the Global South.

The new UK-led guidelines are the first of their kind to be agreed globally. They will help developers of any systems that use AI make informed cyber security decisions at every stage of the development process – whether those systems have been created from scratch or built on top of tools and services provided by others.

The guidelines help developers ensure that cyber security is both an essential pre-condition of AI system safety and integral to the development process from the outset and throughout, known as a ‘secure by design’ approach.

How businesses can use AI to tackle financial crime

As technology continues to advance at a rapid rate, financial crime has taken on a new dimension, posing a multifaceted threat to financial institutions, writes Sonia Jain, Consultant Operations Manager at FDM Group… 

According to Kroll’s 2023 Fraud and Financial Crime Report, 68 per cent of respondents expect financial crime to increase over the next 12 months, with evolving technology posing one of the largest challenges. 

Not only does it jeopardise businesses’ reputation and client trust, but financial crime can also result in direct financial losses, operational costs, and the risk of insolvency.

Traditional methods of detecting and preventing fraud and illicit activities are no longer sufficient in the face of increasingly sophisticated criminals, but this is where artificial intelligence (AI) comes in. 

AI is a powerful tool that is revolutionising the finance industry’s approach to combating financial crime and keeping pace with new criminal tactics.

Financial crime involves illegal activities aimed at acquiring financial gain. It can have serious societal consequences, adversely affecting the global economy.

With the help of AI, organisations can not only combat crime after the fact but also monitor financial activity in real time to prevent it from occurring in the first place.

Here are five ways businesses can use AI to fight financial crime:

  1. Real-time monitoring

AI-powered systems play a pivotal role in the battle against financial crime by enabling real-time monitoring of financial transactions. This capability is instrumental in swiftly identifying and addressing potential threats. Suspicious activities, such as unusual transaction patterns, can be automatically flagged by AI algorithms, triggering an immediate investigation.

By detecting and responding to illicit activities promptly, financial institutions can mitigate risks before they escalate and prevent crime from occurring in the first instance. The real-time nature of AI-based monitoring not only enhances security but also serves as a deterrent to potential criminals, as they are more likely to be caught in the act, thus reducing the overall occurrence of financial crime.
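
As a simple illustration of the kind of rule that sits inside such a system, here is a minimal sketch of a rolling velocity check on a stream of card transactions; the window size and thresholds are illustrative assumptions, and real deployments combine many such signals with machine learning models.

```python
# A minimal real-time monitoring sketch: flag a card that exceeds a transaction
# count or total value within a short rolling window. Values are illustrative.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_TXNS_PER_WINDOW = 5
MAX_VALUE_PER_WINDOW = 2000.0

recent = defaultdict(deque)  # card_id -> deque of (timestamp, amount)

def monitor(card_id: str, timestamp: float, amount: float) -> bool:
    """Return True if this transaction should be flagged for investigation."""
    window = recent[card_id]
    window.append((timestamp, amount))
    # Drop transactions that have aged out of the rolling window.
    while window and timestamp - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    total = sum(amt for _, amt in window)
    return len(window) > MAX_TXNS_PER_WINDOW or total > MAX_VALUE_PER_WINDOW

# Six rapid purchases on one card within a minute trip the count rule.
for i in range(6):
    flagged = monitor("card-123", timestamp=float(i), amount=100.0)
print(flagged)  # True on the sixth transaction
```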

  2. Data analysis and pattern recognition

One of the primary strengths of AI is its ability to analyse vast amounts of data at lightning speed. Financial institutions deal with massive datasets daily, making it challenging to identify suspicious activities manually. AI algorithms excel at identifying patterns and anomalies within these data, helping to flag potentially fraudulent transactions or activities that might otherwise go unnoticed.

  3. Natural Language Processing (NLP)

Financial criminals frequently communicate through digital channels, leaving behind a wealth of text-based data that can be a treasure trove of evidence. Natural Language Processing (NLP) algorithms are instrumental in sifting through this textual data, scanning emails, chat logs, and other messages to identify suspicious or incriminating conversations.

These algorithms can detect keywords, phrases, or patterns associated with financial crimes, helping investigators uncover hidden connections, illegal activities, and nefarious intentions. NLP’s ability to parse and understand human language allows financial institutions and law enforcement agencies to stay ahead of criminals who attempt to mask their activities in written communication.
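
As a toy illustration of this kind of screening, the sketch below scans messages for phrases associated with common payment scams; a production system would rely on trained language models rather than hand-written rules, and the patterns and function shown here are purely illustrative.

```python
# A minimal NLP-screening sketch: match messages against phrases associated
# with common scams and surface hits for investigator triage. Illustrative only.
import re

SUSPICIOUS_PATTERNS = [
    r"\bwire (the )?funds? (immediately|urgently)\b",
    r"\bkeep this (between us|confidential)\b",
    r"\bgift cards?\b.*\bcodes?\b",
]

def screen_message(text: str) -> list[str]:
    """Return the patterns matched in a message, for investigator triage."""
    lowered = text.lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, lowered)]

hits = screen_message("Please wire the funds urgently and keep this between us.")
print(hits)  # two matched patterns -> route to an investigator
```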

  4. Machine learning for predictive analysis

AI’s capacity to learn from historical financial crime data is a strategic advantage in the fight against illicit activities. By training on past cases, AI can construct predictive models that identify emerging threats and evolving criminal tactics. These models continually evolve and adapt, staying one step ahead of wrongdoers who seek to exploit vulnerabilities in financial systems. As AI systems become more attuned to nuanced patterns and emerging trends, they offer a proactive defence mechanism, helping financial institutions anticipate and tackle financial crime.
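
As an illustration of the training loop this describes, here is a minimal sketch using scikit-learn to fit a gradient-boosted classifier on labelled historical transactions; the features, synthetic data and labelling rule are fabricated for the demo, and real models use far richer signals.

```python
# A minimal predictive-analysis sketch: train on labelled history, then score
# new transactions. Data, features and labels are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Features: [amount, is_new_payee, txns_last_24h]; label: 1 = confirmed fraud.
X = np.column_stack([
    rng.exponential(100, n),
    rng.integers(0, 2, n),
    rng.poisson(3, n),
])
y = ((X[:, 0] > 400) & (X[:, 1] == 1)).astype(int)  # toy fraud rule for demo data

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score an unseen transaction: a large payment to a new payee looks risky.
risk = model.predict_proba([[650.0, 1, 2]])[0, 1]
print(f"Fraud probability: {risk:.2f}")
```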

  5. Behavioural analysis

AI’s ability to construct detailed user profiles from transaction history and behaviour is a game-changer in financial crime detection. By establishing baseline behaviour for each customer, AI can promptly identify deviations from these norms. For instance, if a user typically conducts small, domestic transactions but suddenly initiates large withdrawals or transfers to high-risk countries, the system will trigger alerts for immediate scrutiny.

This proactive approach enables financial institutions to swiftly respond to potential threats and investigate suspicious activities, enhancing their capacity to prevent money laundering, fraud, and other illicit financial behaviours while safeguarding the integrity of their operations and the interests of their customers.
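
A minimal sketch of this baseline-and-deviation idea appears below; the z-score threshold and minimum history length are illustrative assumptions, and real systems profile many behavioural dimensions beyond transaction amounts.

```python
# A minimal behavioural-analysis sketch: build a per-customer baseline of
# amounts and flag strong deviations from it. Thresholds are illustrative.
import statistics

def flag_deviation(history: list[float], new_amount: float,
                   z_threshold: float = 3.0) -> bool:
    """Flag a transaction that deviates strongly from the customer's baseline."""
    if len(history) < 10:
        return False  # not enough history to establish a baseline
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1e-9  # guard against zero variance
    z = (new_amount - mean) / stdev
    return abs(z) > z_threshold

# A customer with small, steady domestic payments suddenly sends a large sum.
history = [22.0, 35.0, 18.0, 40.0, 25.0, 30.0, 27.0, 33.0, 21.0, 29.0]
print(flag_deviation(history, 2500.0))  # True -> trigger an alert for review
```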
