Posts Tagged: ai

AI-powered malicious attacks are now a top emerging risk, says study

Concern about artificial intelligence (AI)-enhanced malicious attacks ascended to the top of the Gartner emerging risk rankings in the first quarter of 2024.

“The prospect of malicious actions enabled by AI-assisted tools is concerning risk leaders worldwide,” said Gamika Takkar, director, research in the Gartner Risk & Audit Practice. “The relative ease of use and quality of AI-assisted tools, such as voice and image generation, increase the ability to carry out malicious attacks with wide-ranging consequences.”

During the first quarter of this year, Gartner surveyed 345 senior enterprise risk executives to capture and benchmark their top 20 emerging risks and to give leaders a view of their causes and potential consequences.

Risks related to AI captured the top two rankings in the 1Q24 survey (see Table 1) with AI-enhanced malicious attacks cited as the top emerging risk and AI-assisted misinformation also causing concern. Escalating political polarization, which entered the tracker for the first time in 4Q23, dropped from the second most cited concern to third place.

Table 1: Top Five Most Commonly Cited Emerging Risks in Q1 2024

Source: Gartner (May 2024)

One of the key drivers of AI-enabled attacks and misinformation is rapidly expanding access to AI capabilities. AI enhancement can generate malicious code and facilitate phishing and social engineering, enabling better intrusion, greater credibility and more damaging attacks.

“Its low cost and rapid growth also expose users to the technology who have little awareness on how to recognize when AI-enabled tools are providing valid vs. false or misrepresented information,” said Takkar.

The potential impacts of AI-enhanced attacks and misinformation are far-reaching and consequential to reputation, productivity and the ability of organizations to respond. Increased breaches and disclosure requirements can erode trust in an organization and brand among clients, consumers and partners.

“The speed and quality of AI-enhanced attacks and misinformation also hinder information security teams’ ability to respond and adapt to the new security landscape, further amplifying its vulnerabilities,” said Takkar.

Gartner clients can read more in the 1Q24 Emerging Risk Report. Nonclients can read: 1Q24 Emerging Risk Trends.

More AI knowledge required in consumer goods brands to avoid security blunders

Artificial Intelligence (AI) refers to software-based systems that use data inputs to make decisions on their own or that help users make decisions. Generative AI refers to AI creating content in any shape or format, from writing original text to designing new structures and products. These technologies have developed rapidly over the last 18 months and generated serious hype. However, the benefits and costs of AI applications are poorly understood. Consumer goods companies need to fail – and fail fast – in their AI initiatives to gain this understanding, according to new analysis.

GlobalData argues that consumer goods companies need to understand the technical, financial, and organisational requirements of any AI application to reliably assess the level of risk that application represents. Consumer goods companies need to consider how an AI should be trained to enable it to function cost-effectively. They also need to consider which delivery model is the most suitable from a data security and infrastructure cost point of view.

Rory Gopsill, Senior Consumer Analyst at GlobalData, said: “Industry professionals remain bullish about AI’s potential to disrupt numerous industries (including consumer goods). According to GlobalData’s Q1 2024 Tech Sentiment Poll, over 58% of respondents believe AI will significantly disrupt their industry. However, consumer goods companies should remember that the technology has limitations and risks. Chatbot failures caused Air Canada and DPD financial and reputational damage, respectively, in the first quarter of 2024. DeepMind’s own CEO warned against AI overhype in April 2024.”

In reality, adopting AI can pose very real financial and security risks. Training an AI can prove very expensive, especially if the task being automated is complex and requires an advanced AI. Furthermore, if an AI application requires training data that is commercially sensitive or confidential, a company may choose to train the AI in a private cloud environment rather than a less secure public cloud. Purchasing and maintaining the necessary IT infrastructure for this would be very expensive and organisationally demanding.

Gopsill continued: “Consumer goods companies need to be aware of these (and other) risks when choosing to develop AI applications. If they are not, their AI initiatives could fail with serious consequences. For example, sensitive data could be exposed, development costs could outweigh the application’s benefits, the quality of the AI application could be diminished, or the project could simply never get finished.

“Understanding these risks will enable consumer goods companies to fail early and safely and to learn from that failure. This will equip them with the knowledge to implement AI in a way that is safe and profitable. Fostering a culture of transparency around the risks of AI will help drive industry application and protect consumer goods companies and customers from the potential pitfalls of this evolving technology.”

Photo by freestocks on Unsplash

AI now the key tool for amateurs committing financial crime

Nearly 70% of the 600 fraud-management, anti-money laundering, and risk and compliance officials surveyed in BioCatch’s first-ever AI-focused fraud and financial crime report say criminals are more adept at using artificial intelligence to commit financial crime than banks are at using the technology to stop it. Equally concerning, around half of those same fraud-fighters report an increase in financial crime activity in the last year, and/or expect to see financial crime activity increase in 2024.

The report depicts a troubling and burgeoning trend in which criminals with minimal technical expertise or financial crime skillset are using this new technology to improve the quality, reach, and success of their digital-banking scams and financial crime schemes.

“Artificial intelligence can supercharge every scam on the planet,” BioCatch Director of Global Fraud Intelligence Tom Peacock said, “flawlessly localizing the language, slang, and proper nouns used and personalizing for every individual victim the scam type, images, audio, and/or video involved. AI gives us scams without borders and will require financial institutions to adopt new strategies and technologies to protect their customers.”

A staggering 91% of respondents report their organization is now rethinking the use of voice verification for big customers due to AI’s voice-cloning abilities. More than 70% of those surveyed say their company identified the use of synthetic identities while onboarding new clients last year. The Federal Reserve believes traditional fraud models fail to flag as many as 95% of synthetic identities used to apply for new accounts. It regards synthetic identity fraud as the fastest-growing type of financial crime in the U.S., costing companies billions of dollars every year.

“We can no longer trust our eyes and ears to verify digital identities,” BioCatch CMO Jonathan Daley said. “The AI era requires new senses for authentication. Our customers have proven behavioural intent signals are those new senses, allowing financial institutions to sniff out deepfakes and voice-clones in real time to keep people’s hard-earned money safe.”

Other Key Survey Findings:

  • AI (Already) an Expensive Threat: More than half of the organizations represented in the survey say they lost between $5 million and $25 million to AI-powered attacks in 2023.
  • Financial Institutions Also Using AI: Nearly three-quarters of those surveyed say their employer used AI to detect fraud and/or financial crime, while 87% say AI has increased the speed with which their organization responds to potential threats.
  • We Need to Talk: More than 40% of respondents say their company handled fraud and financial crime in separate departments that did not collaborate. Nearly 90% of those surveyed say financial institutions and government authorities need to share more information to combat fraud and financial crime.
  • AI to Help with Intelligence-Sharing: Nearly every respondent says they anticipate leveraging AI in the next 12 months to promote information-sharing about high-risk individuals across different banks.

“Today’s fraudsters are organized and savvy,” BioCatch CEO Gadi Mazor said. “They collaborate and share information instantly. Fraud fighters – including technology-solution providers like us, along with banks, regulators, and law enforcement – must do the same if we expect to reverse the growing fraud numbers across the globe. We believe our recent partnership with The Knoble will advance this discussion and remove the perceived barriers to better, more meaningful collaboration and fraud-prevention.”

New guidelines for Secure AI System Development unveiled

The UK has published the first global guidelines to ensure the secure development of AI technology, as part of an initiative encompassing agencies from 17 other countries that have confirmed they will endorse and co-seal the new guidelines.

The guidelines aim to raise the cyber security levels of artificial intelligence and help ensure that it is designed, developed, and deployed securely.

The Guidelines for Secure AI System Development have been developed by the UK’s National Cyber Security Centre (NCSC), a part of GCHQ, and the US’s Cybersecurity and Infrastructure Security Agency (CISA) in cooperation with industry experts and 21 other international agencies and ministries from across the world – including those from all members of the G7 group of nations and from the Global South.

The new UK-led guidelines are the first of their kind to be agreed globally. They will help developers of any systems that use AI make informed cyber security decisions at every stage of the development process – whether those systems have been created from scratch or built on top of tools and services provided by others.

The guidelines help developers ensure that cyber security is both an essential pre-condition of AI system safety and integral to the development process from the outset and throughout, known as a ‘secure by design’ approach.

How businesses can use AI to tackle financial crime

As technology continues to advance at a rapid rate, financial crime has taken on a new dimension, posing a multifaceted threat to financial institutions, writes Sonia Jain, Consultant Operations Manager at FDM Group… 

According to Kroll’s 2023 Fraud and Financial Crime Report, 68 per cent of respondents expect financial crime to increase over the next 12 months, with evolving technology posing one of the largest challenges. 

Not only does financial crime jeopardise businesses’ reputations and client trust, it can also result in direct financial losses, operational costs, and the risk of insolvency.

Traditional methods of detecting and preventing fraud and illicit activities are no longer sufficient in the face of increasingly sophisticated criminals, but this is where artificial intelligence (AI) comes in. 

AI is a powerful tool that is revolutionising the finance industry’s approach to combating financial crime and keeping pace with new criminal tactics.

Financial crime encompasses illegal activities aimed at financial gain, and it can have serious societal consequences that adversely affect the global economy. 

AI can be leveraged not just to combat financial crime after the fact, but also to monitor financial activity in real time and prevent it from occurring at all.

Here are five ways businesses can use AI to fight financial crime:

  1. Real-time monitoring

AI-powered systems play a pivotal role in the battle against financial crime by enabling real-time monitoring of financial transactions. This capability is instrumental in swiftly identifying and addressing potential threats. Suspicious activities, such as unusual transaction patterns, can be automatically flagged by AI algorithms, triggering an immediate investigation.

By detecting and responding to illicit activities promptly, financial institutions can mitigate risks before they escalate and prevent crime from occurring in the first instance. The real-time nature of AI-based monitoring not only enhances security but also serves as a deterrent to potential criminals, as they are more likely to be caught in the act, thus reducing the overall occurrence of financial crime.
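
As a concrete illustration, here is a minimal Python sketch of the kind of rule-based real-time check described above. Everything in it (the thresholds, field names and placeholder country codes) is an illustrative assumption, not any institution’s actual rule set:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative thresholds: real systems tune these per customer and product.
AMOUNT_LIMIT = 10_000.0              # flag single transactions above this value
HIGH_RISK_COUNTRIES = {"XX", "YY"}   # placeholder country codes

@dataclass
class Transaction:
    account_id: str
    amount: float
    country: str
    timestamp: datetime

def flag_transaction(tx: Transaction, recent_avg: float) -> list[str]:
    """Return the reasons (if any) a transaction should be routed for review."""
    reasons = []
    if tx.amount > AMOUNT_LIMIT:
        reasons.append("amount exceeds absolute limit")
    if recent_avg > 0 and tx.amount > 5 * recent_avg:
        reasons.append("amount far above the account's recent average")
    if tx.country in HIGH_RISK_COUNTRIES:
        reasons.append("destination in high-risk jurisdiction")
    return reasons

# Example: a payment many times the account's usual size, to a risky destination
tx = Transaction("acct-42", 12_500.0, "XX", datetime.now(timezone.utc))
print(flag_transaction(tx, recent_avg=800.0))
```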

  2. Data analysis and pattern recognition

One of the primary strengths of AI is its ability to analyse vast amounts of data at lightning speed. Financial institutions deal with massive datasets daily, making it challenging to identify suspicious activities manually. AI algorithms excel at identifying patterns and anomalies within these data, helping to flag potentially fraudulent transactions or activities that might otherwise go unnoticed.
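
One widely used technique for this kind of anomaly detection (offered here as an illustration, not as what any particular institution runs) is the isolation forest. The sketch below assumes scikit-learn and uses entirely synthetic transaction features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic transaction features: [amount, hour_of_day]
normal = np.column_stack([rng.normal(50, 15, 1000), rng.normal(14, 3, 1000)])
odd = np.array([[5000.0, 3.0], [4200.0, 2.5]])   # large, late-night payments
X = np.vstack([normal, odd])

# Isolation forests flag outliers because anomalous points need far fewer
# random splits to be separated from the rest of the data.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)    # -1 means anomaly, 1 means normal
print(X[labels == -1])       # the flagged transactions
```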

  3. Natural Language Processing (NLP)

Financial criminals frequently communicate through digital channels, leaving behind a wealth of text-based data that can be a treasure trove of evidence. Natural Language Processing (NLP) algorithms are instrumental in sifting through this textual data, scanning emails, chat logs, and other messages to identify suspicious or incriminating conversations.

These algorithms can detect keywords, phrases, or patterns associated with financial crimes, helping investigators uncover hidden connections, illegal activities, and nefarious intentions. NLP’s ability to parse and understand human language allows financial institutions and law enforcement agencies to stay ahead of criminals who attempt to mask their activities in written communication.
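
At its simplest, this kind of text screening can be expressed as pattern matching over message content. The sketch below is deliberately minimal; the phrases are invented for the example, and production systems learn their indicators from labelled investigation data rather than a hand-written list:

```python
import re

# Invented indicator phrases: production systems learn such indicators
# from labelled investigation data, not a hand-written list.
SUSPICIOUS_PATTERNS = [
    r"\boff the books\b",
    r"\bwire (it|the funds) (now|immediately)\b",
    r"\bdelete this (email|message)\b",
    r"\bshell compan(y|ies)\b",
]

def scan_message(text: str) -> list[str]:
    """Return the suspicious patterns found in a message, if any."""
    lowered = text.lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, lowered)]

msg = "Keep this off the books and delete this email once the transfer clears."
print(scan_message(msg))   # two of the four patterns match this message
```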

  4. Machine learning for predictive analysis

AI’s capacity to learn from historical financial crime data is a strategic advantage in the fight against illicit activities. By training on past cases, AI can construct predictive models that identify emerging threats and evolving criminal tactics. These models continually evolve and adapt, staying one step ahead of wrongdoers who seek to exploit vulnerabilities in financial systems. As AI systems become more attuned to nuanced patterns and emerging trends, they offer a proactive defence mechanism, helping financial institutions anticipate and tackle financial crime.
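
A minimal version of such predictive modelling is a supervised classifier trained on historically labelled cases. The sketch below assumes scikit-learn and uses synthetic data, with a toy labelling rule standing in for real investigator-confirmed outcomes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000

# Synthetic historical features: [amount, txns_last_24h, new_payee_flag]
X = np.column_stack([
    rng.exponential(100, n),    # transaction amount
    rng.poisson(3, n),          # recent transaction count
    rng.integers(0, 2, n),      # 1 if the payee has never been seen before
])
# Toy labelling rule standing in for investigator-confirmed outcomes
y = ((X[:, 0] > 300) & (X[:, 2] == 1)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Score an unseen case: a large payment to a brand-new payee
print(model.predict_proba([[450.0, 2, 1]])[0, 1])   # estimated fraud probability
```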

  5. Behavioural analysis

AI’s ability to construct detailed user profiles from transaction history and behaviour is a game-changer in financial crime detection. By establishing baseline behaviour for each customer, AI can promptly identify deviations from these norms. For instance, if a user typically conducts small, domestic transactions but suddenly initiates large withdrawals or transfers to high-risk countries, the system will trigger alerts for immediate scrutiny.

This proactive approach enables financial institutions to swiftly respond to potential threats and investigate suspicious activities, enhancing their capacity to prevent money laundering, fraud, and other illicit financial behaviours while safeguarding the integrity of their operations and the interests of their customers.
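
The baseline-and-deviation idea can be illustrated with a simple per-customer statistical check. This is a sketch only; real systems model many more behavioural dimensions than transaction amount:

```python
from statistics import mean, stdev

def deviates_from_baseline(history: list[float], amount: float,
                           threshold: float = 3.0) -> bool:
    """Flag an amount more than `threshold` standard deviations above
    the customer's historical mean: a simple per-user baseline check."""
    if len(history) < 2:
        return False               # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return (amount - mu) / sigma > threshold

# A customer who normally makes small domestic payments...
history = [25.0, 40.0, 18.0, 32.0, 27.0, 45.0]
print(deviates_from_baseline(history, 30.0))     # False: in line with habits
print(deviates_from_baseline(history, 2500.0))   # True: triggers an alert
```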

AI MONTH: AI and Fraud Prevention – A confluence of technology and security

Businesses and financial institutions face a constantly mutating landscape of fraudulent activities. Traditional systems, once hailed as robust, now frequently lag behind in detecting and preventing contemporary fraud schemes. Enter Artificial Intelligence (AI): a transformative force that’s reshaping fraud prevention by providing real-time, predictive, and adaptable solutions. Here we explore the growing influence of AI in combatting fraud and safeguarding assets, based on input from delegates and suppliers at the Merchant Fraud Summit…

  1. Real-time Transaction Analysis: AI can process vast amounts of data at lightning speeds. This allows it to assess each transaction in real-time, comparing it against patterns of normal behaviour. If a transaction looks suspicious (say, an unusually large purchase made in a foreign country late at night), the AI system can flag it instantly for review or even block it until it’s verified.
  2. Deep Learning for Pattern Recognition: Fraudsters are known for their adaptability, constantly changing tactics to evade detection. Deep learning, a subset of AI, empowers systems to ‘learn’ from vast datasets, recognising patterns and anomalies without being explicitly programmed. This means that even if fraudsters alter their tactics, AI systems trained using deep learning can detect these new patterns, keeping businesses one step ahead.
  3. Predictive Fraud Analysis: Beyond merely detecting known fraudulent tactics, AI leverages predictive analytics to forecast potential future threats. By analysing historical fraud data and blending it with current transaction trends, AI can offer predictions about where and when the next potential fraud might occur. This proactive approach allows businesses to bolster security in vulnerable areas before a breach happens.
  4. Enhanced Authentication Protocols: AI has amplified the capabilities of biometric authentication methods like facial recognition, voice analysis, and fingerprint scanning. By continuously learning and updating individual profiles, AI ensures that only the authentic user can access accounts, thereby drastically reducing identity theft or account takeovers.
  5. Natural Language Processing for Phishing Detection: Phishing emails are a common tool in a fraudster’s arsenal. AI, equipped with Natural Language Processing (NLP), can scan emails and detect subtle linguistic cues that might indicate a phishing attempt, protecting users from potential threats.
  6. Automated Reporting and Decision Making: Post-incident reports are crucial for understanding breaches and strengthening defences. AI can automate this process, collating data, suggesting remedial measures, and even implementing certain protective protocols without human intervention.
  7. Adaptable and Self-learning Systems: One of the greatest advantages of AI is its inherent adaptability. As it encounters new types of fraud or even near-miss events, it learns, refines its algorithms, and becomes even more effective in subsequent detections, as sketched below.
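
To illustrate the self-learning loop in the final point, here is a sketch using incremental (online) learning, where each new batch of confirmed cases refines the model without retraining from scratch. It assumes scikit-learn and uses synthetic data with a deliberately drifting fraud pattern:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])   # 0 = legitimate, 1 = fraudulent

def labelled_batch(fraud_shift: float, size: int = 200):
    """Synthetic batch of confirmed cases; `fraud_shift` moves the fraud
    pattern to mimic criminals changing tactics over time."""
    legit = rng.normal(0.0, 1.0, (size, 2))
    fraud = rng.normal(fraud_shift, 1.0, (size, 2))
    X = np.vstack([legit, fraud])
    y = np.array([0] * size + [1] * size)
    return X, y

# Each new batch refines the model in place, without retraining from
# scratch: the 'self-learning' loop described above.
for shift in (3.0, 2.5, 2.0):   # the fraud pattern drifts over time
    X, y = labelled_batch(shift)
    model.partial_fit(X, y, classes=classes)

print(model.predict([[2.0, 2.0]]))   # classify a new event after adaptation
```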

AI is not merely a tool but a dynamic shield adapting and evolving in the face of emerging threats. As businesses and transactions continue their inexorable shift online, AI stands as a sentinel, safeguarding assets and instilling trust in systems. The fusion of AI and fraud prevention is an exemplar of how technology can be harnessed to protect, predict, and prevail against malicious intent.

Are you looking for mobile anti-fraud solutions for your business? The Merchant Fraud Summit can help!

Photo by Possessed Photography on Unsplash

Harnessing artificial intelligence to combat merchant fraud in retail

The retail sector has been both blessed and cursed by the fast pace of e-commerce growth and the arrival of alternative payments. The boon of online shopping and digitisation has expanded horizons for retailers, but with it comes the bane of increased fraud. Fortunately, Artificial Intelligence (AI) emerges as a stalwart ally in detecting and thwarting merchant fraud. Here’s how AI is revolutionising the fight against fraudulent activities in retail…

  1. Real-time Fraud Detection:
    • Function: AI algorithms can continuously monitor transactions, identifying anomalies and suspicious patterns in real-time, often before a human could even notice them.
    • Benefit: Immediate detection ensures that potentially fraudulent transactions are flagged and investigated swiftly, minimising financial losses and ensuring consumer trust remains intact.
  2. Predictive Analysis:
    • Function: By examining vast sets of historical data, AI can predict potential future fraudulent activities based on past patterns and behaviours.
    • Benefit: Proactively identifying possible fraud before it even occurs puts retailers one step ahead of fraudsters, thereby acting as a deterrent.
  3. Multi-layered Verification:
    • Function: AI can integrate and analyse data from various sources – such as transactional data, customer behaviour, and device ID – to validate the authenticity of a transaction.
    • Benefit: A comprehensive, multi-faceted verification process reduces the likelihood of false positives, ensuring legitimate transactions are not inadvertently blocked (a simple fusion of such signals is sketched after this list).
  4. Natural Language Processing (NLP):
    • Function: AI-driven NLP tools can scan customer communication, feedback, and reviews to identify possible instances or allegations of fraud that might go unnoticed in vast datasets.
    • Benefit: By pinpointing these potential red flags, retailers can proactively investigate and address concerns, bolstering their reputation and trustworthiness.
  5. Deep Learning for Identity Verification:
    • Function: Deep learning, a subset of AI, can be utilised for facial recognition, voice recognition, and other biometric verifications to ensure that a transaction is being made by the legitimate cardholder.
    • Benefit: This level of identity verification significantly reduces instances of identity theft and card-not-present fraud.
  6. Behavioural Analytics:
    • Function: AI can track and analyse the behavioural patterns of users, including browsing habits, purchase history, and even mouse movements, to detect anomalies that might indicate fraud.
    • Benefit: Recognising deviations from a user’s typical behaviour allows for more nuanced fraud detection, reducing both false negatives and positives.
  7. Adaptive Systems:
    • Function: AI systems can learn and adapt. As they encounter new types of fraud or refine their understanding of existing schemes, they can evolve to detect and prevent these new threats more effectively.
    • Benefit: An adaptive system ensures that fraud detection strategies are always up-to-date and equipped to combat the latest tactics employed by fraudsters.
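
As a toy illustration of the multi-layered verification idea in point 3, the sketch below fuses several independent signals into a single bounded risk score. The signal names and weights are invented for the example; in practice such weights come out of model training rather than hand-tuning:

```python
# Invented signal names and weights: in practice these come out of
# model training, not hand-tuning.
WEIGHTS = {
    "device_known": -0.4,       # recognised device lowers risk
    "behaviour_match": -0.3,    # browsing/purchase pattern fits the account
    "amount_unusual": 0.5,      # transaction size out of character
    "geo_mismatch": 0.4,        # IP location far from the billing address
}

def risk_score(signals: dict[str, bool], base: float = 0.3) -> float:
    """Fuse independent verification signals into one bounded risk score."""
    score = base + sum(w for name, w in WEIGHTS.items() if signals.get(name))
    return min(max(score, 0.0), 1.0)

# A known device and familiar behaviour, but an unusual amount and location
signals = {"device_known": True, "behaviour_match": True,
           "amount_unusual": True, "geo_mismatch": True}
print(round(risk_score(signals), 2))   # 0.5, enough to trigger step-up checks
```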

The marriage of retail and AI in the realm of fraud detection and prevention offers a robust shield against malicious activities. While no system can guarantee complete immunity, the capabilities of AI certainly place retailers in a far stronger position to safeguard their assets, reputation, and most importantly, their customers.

You can learn more about the benefits of AI and the anti-fraud benefits it offers at the Merchant Fraud Summit.

Image by Pexels from Pixabay

Bribery and corruption concerns drive 650% increase for Regtech AI KYC checks in banking sector

A new study from Juniper Research has found that the total number of Know Your Customer (KYC) checks for banking conducted using AI will reach almost 175 million globally by 2028, up from just over 23 million in 2023.

The demand for regtech solutions is increasing across not only financial services, but also industries such as healthcare and cybersecurity, as continuous verification of identities becomes fundamental in preventing financial crime and non-compliance.

One example of this is the rise of virtual GPs and ePharmacies. Here, Juniper says it is vital for KYP (Know Your Patient) verification to be employed in order to prevent fraud such as identity theft and financial exploitation. By implementing these KYC verifications, businesses can avoid fines for failing to carry out customer assessments.

The report encourages cross-border businesses to adopt regtech solutions in order to reduce risk across different regulatory jurisdictions. As multinational companies expand into new regions, they face a fragmented regulatory framework comprising jurisdictional differences across varying markets. Failure to meet compliance demands can lead to penalties, resulting in serious economic and reputational consequences.

Recently emerged “Failure to Prevent” offences specifically target organisations, holding them accountable for failures in their compliance systems. Implementing regtech solutions enables organisations to defend themselves against such allegations.

The report found that innovative vendors are using AI and machine learning to decipher email and phone call data to identify bad actors across organisations. This is vital as lawmakers and regulatory bodies are cracking down on bribery and corruption offences, which severely undermine fair competition and contribute to slow economic growth.

Juniper Research recommends that, as businesses expand their operations and move into new regions, they deploy AI-powered regtech solutions to automate the monitoring of regulatory compliance, reducing the need for manual checks and lowering overall risk.

Image by Gerd Altmann from Pixabay