
Critical AI Takeaways from the U.S. Department of Treasury Report: "Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector"




The U.S. Department of the Treasury released the report Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, a detailed examination of AI-related cybersecurity and fraud risks in financial services mandated by Executive Order (EO) 14110. The report is based on 42 interviews with financial institutions, IT firms, and AML/anti-fraud professionals, and it highlights current threats, best practices, and regulatory considerations.


What is Executive Order (EO) 14110?


EO 14110 establishes a federal framework for the safe, responsible, and ethical development and use of AI in the U.S. The order aims to promote AI innovation while mitigating risks related to national security, fraud, discrimination, labor market disruptions, and misinformation.


One of EO 14110’s key strengths is its emphasis on AI safety and security. It mandates rigorous safety testing and the implementation of cybersecurity protections to mitigate risks such as deepfakes, AI-driven fraud, and automated cyberattacks. The order also reinforces consumer protection laws and anti-discrimination regulations, ensuring that AI does not exacerbate bias, fraud, or privacy violations.


Additionally, EO 14110 promotes AI research and development (R&D) through federal funding and public-private partnerships, ensuring that AI firms can innovate responsibly. It also strengthens the U.S.'s international position by collaborating with global allies, such as the G7 and NATO, to establish common AI governance standards.


Despite its broad framework, EO 14110 does not establish legally binding AI regulations. Instead, it relies on existing laws and voluntary compliance, which do not fully address the intricacies of AI technology, and the lack of enforcement mechanisms raises concerns about how AI risks will be managed in practice. At the same time, the order introduces significant regulatory expectations that create financial and technical burdens for smaller AI firms, limiting competition.


What is the "Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector" Report?


A key takeaway from the report is that AI enhances cybersecurity and fraud detection. Financial institutions have leveraged AI-powered threat detection and fraud prevention systems for years, significantly improving risk management and operational efficiency. AI-driven solutions enable real-time anomaly detection, behavioral analysis, and automated responses to cyber threats, strengthening defenses against evolving cybercriminal tactics.
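
To make the idea of real-time anomaly detection concrete, here is a minimal sketch (not taken from the report; the feature set, data, and thresholds are hypothetical) of how a transaction-scoring step might look using an unsupervised isolation forest:

```python
# Minimal, hypothetical sketch of real-time transaction anomaly scoring.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical transactions: [amount_usd, hour_of_day, merchant_risk_score]
history = np.array([
    [42.50, 14, 0.1],
    [18.99,  9, 0.2],
    [250.00, 20, 0.3],
    [12.00, 11, 0.1],
    # ...in practice, millions of rows drawn from an institution's proprietary data
])

# Fit an unsupervised outlier detector on "normal" historical behavior.
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

def is_anomalous(transaction: list[float]) -> bool:
    """Return True when the model marks the transaction as an outlier (-1)."""
    return model.predict(np.array([transaction]))[0] == -1

# Example: a large transfer at an unusual hour to a high-risk merchant
if is_anomalous([9500.00, 3, 0.9]):
    print("Flag for manual review / step-up authentication")
```

In production, a score like this would be one signal among many (behavioral analysis, device fingerprinting, rules-based checks), not a decision on its own.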


Many financial institutions integrate AI risk considerations into their existing IT security, compliance, and third-party risk management frameworks, aligning with NIST’s AI Risk Management Framework (RMF). The financial sector also benefits from cyber threat-sharing platforms, such as the Financial Services Information Sharing and Analysis Center (FS-ISAC), which facilitate cross-industry collaboration in mitigating AI-related security risks. Moreover, large financial institutions report substantial fraud reduction (up to 50%) due to AI-driven fraud detection models trained on proprietary data.


The report also identifies key weaknesses in AI risk management. Many financial institutions struggle to establish governance frameworks for Generative AI (not to be confused with artificial general intelligence), which presents unique challenges such as deepfake-based fraud, data poisoning, and model exploitation. There is also a significant gap in fraud data sharing across the financial sector: large institutions have access to vast fraud datasets, while smaller firms lack the resources to develop robust AI-driven fraud prevention tools, which makes data-sharing initiatives critical. This data divide disproportionately impacts smaller institutions, leaving them more vulnerable to fraudsters who adapt to AI-driven security measures.


Additionally, the increasing reliance on third-party AI vendors introduces risks related to data integrity, model transparency, and vendor security, making it difficult for institutions to fully understand and control their AI-driven decision-making processes.


Opportunities for Strengthening AI-Based Cybersecurity and Fraud Prevention


The report highlights several opportunities for financial institutions to strengthen their cybersecurity and fraud prevention strategies using AI:


  • Expanding AI-driven threat intelligence and automated security response systems to counter sophisticated cyberattacks and financial fraud schemes.

  • Enhancing fraud data-sharing initiatives led by organizations such as the American Bankers Association (ABA) and FinCEN, which could significantly improve fraud detection capabilities across the financial sector.

  • Leveraging AI to streamline regulatory compliance, enhance anti-money laundering (AML) monitoring, and detect synthetic identity fraud, making financial crime prevention more effective and cost-efficient (see the sketch following this list).
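
As a hedged illustration of the synthetic identity fraud detection mentioned in the last item (not drawn from the report; the signal names and thresholds below are invented for illustration), a screening step could combine a few explainable signals into a single risk score:

```python
# Hypothetical synthetic-identity screening; signals and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Applicant:
    credit_file_age_months: int   # how long a credit file has existed for this identity
    linked_applications: int      # other applications sharing the same phone/email/address
    identity_mismatches: int      # inconsistencies between stated and documented identity data

def synthetic_identity_risk(a: Applicant) -> float:
    """Combine simple, explainable signals into a 0-1 risk score."""
    score = 0.0
    if a.credit_file_age_months < 6:
        score += 0.4              # very thin or brand-new credit file
    if a.linked_applications >= 3:
        score += 0.4              # contact details reused across many applications
    score += min(a.identity_mismatches * 0.1, 0.2)
    return min(score, 1.0)

# Example: a two-month-old credit file sharing a phone number with four other applications
risk = synthetic_identity_risk(Applicant(credit_file_age_months=2,
                                         linked_applications=4,
                                         identity_mismatches=1))
if risk >= 0.7:
    print(f"Escalate to AML/fraud review (risk={risk:.1f})")
```

Real-world AML and synthetic identity controls would layer statistical models, consortium data, and analyst review on top of simple rules like these.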


Institutions that successfully integrate AI into their cybersecurity frameworks will gain a competitive advantage by reducing fraud-related losses, improving operational efficiency, and enhancing consumer trust.


The report warns that AI-powered cybercrime is evolving rapidly. Criminals are increasingly adopting AI to execute deepfake scams, pig butchering, synthetic identity fraud, and AI-enhanced phishing and malware attacks. These sophisticated fraud techniques are harder to detect and mitigate, posing a growing risk to financial institutions.


Additionally, the lack of clear regulatory guidance on AI governance in financial services creates uncertainty, potentially leading to inconsistent risk management practices across the industry. Smaller institutions may struggle to comply with future AI-related regulations, particularly if compliance costs increase. The over-reliance on third-party AI vendors also introduces risks, as financial firms may not have full visibility into how external AI models process and analyze their data.


Finally, state-sponsored cybercriminals and well-funded adversaries could exploit AI-driven cyberattacks at a scale that outpaces financial institutions' ability to respond, making AI both a powerful tool and a potential liability.


AI-Powered Deepfake Fraud: A Real-World Case


In early 2024, a Hong Kong-based financial firm fell victim to an AI-powered deepfake scam, resulting in a $25 million loss. Cybercriminals leveraged Generative AI to clone the voice and facial expressions of the company’s Chief Financial Officer (CFO) and used this deepfake technology to conduct a fraudulent video conference call. During the call, an employee was instructed to initiate multiple wire transfers to an external bank account, believing they were receiving orders from senior leadership. Due to the realistic nature of the deepfake video and voice, the employee complied without questioning the request. By the time the fraud was detected, the funds had already been laundered through multiple offshore accounts, making recovery nearly impossible.


This case illustrates several of the AI-related fraud risks highlighted in the U.S. Department of the Treasury’s report. One major concern is identity impersonation and synthetic identity fraud, as AI-generated deepfakes enable criminals to mimic executives with alarming accuracy. Additionally, the attack exploited AI-enhanced social engineering tactics, a growing cybersecurity risk as AI enables fraudsters to create highly convincing phishing emails, phone calls, and video messages.



Bibliography:


American Bankers Association. Artificial Intelligence and Financial Crime: Mitigating Risks in a Digital Economy. Washington, DC: ABA, 2023.


Financial Services Information Sharing and Analysis Center (FS-ISAC). 2023 Cyber Threat Intelligence Report. FS-ISAC, 2023.


National Institute of Standards and Technology (NIST). AI Risk Management Framework (RMF). U.S. Department of Commerce, 2023.


U.S. Department of the Treasury. Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector. Washington, DC: U.S. Department of the Treasury, 2024.

White House. Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Washington, DC: Office of the President, 2023.

 
 
 
