Protect the Models That Move Money
AI is embedded in every layer of financial services - fraud detection, credit scoring, transaction routing, customer service. Your AI attack surface grows with every model you deploy.
What We See in This Space
Financial services organizations have deployed AI faster than they have secured it. Fraud detection models, credit scoring algorithms, AML screening systems, and AI-powered customer service are all production-critical systems - and all carry attack surfaces that traditional penetration testing methodologies cannot assess.
Adversarial Attacks on Fraud Detection Models
AI fraud detection operates by learning patterns from historical transaction data and flagging anomalous activity in real time. The security question that almost no institution has answered is: can an adversary manipulate those decisions?
Two distinct attack classes apply:
Training data poisoning targets the model before it ever reaches production. An adversary who can influence the training data - through compromised data pipelines, insider access, or manipulation of third-party data feeds - can embed systematic blind spots into the model. Specific transaction patterns, account behaviors, or merchant categories can be made invisible to the fraud system. The model continues to function normally by every operational metric while silently passing fraudulent transactions that fall within its engineered blind spot.
Adversarial input manipulation operates against a deployed model without modifying it. By probing the model’s decision boundary through controlled queries, an adversary can identify the feature combinations that reliably produce a “pass” result. Transactions can then be crafted to match those features - exploiting the model’s own learned behavior to evade detection. This attack requires no privileged access: it requires only the ability to submit transactions and observe outcomes.
infosec.qa’s AI Attack Surface Assessment and LLM Red Teaming services both include adversarial integrity testing for AI decision systems - a capability that no traditional financial crime consultancy offers.
Model Extraction: Protecting Proprietary Trading Logic
Quantitative finance has always treated models as core intellectual property. The proprietary trading logic embedded in AI systems - pattern recognition, signal weighting, execution optimization - represents significant competitive advantage.
Model extraction attacks allow adversaries to reconstruct a proprietary model through query access alone. By submitting carefully constructed inputs and observing outputs, an attacker can train a surrogate model that approximates the target model’s behavior with high accuracy. This attack is practical against any externally accessible AI system - including AI-powered trading signals provided to counterparties, AI-assisted advisory tools, and AI underwriting models accessed through API integrations.
The regulatory dimension is emerging: OCC SR 11-7 (Model Risk Management) and equivalent guidance from MAS, DFSA, and the EBA now expect institutions to assess model integrity risks - and model extraction represents a material integrity risk that most institutions have not evaluated.
infosec.qa’s AI Supply Chain Security service includes model integrity assessments that address extraction risk - mapping query access patterns, recommending query limiting and output perturbation strategies, and providing evidence documentation for model risk management frameworks.
Regulatory Requirements: OCC, MAS, and DFSA on AI Risk
The principal global regulators for AI in financial services have all moved toward explicit AI risk management requirements:
OCC SR 11-7 (Model Risk Management) requires banks to implement model risk management processes including model validation. The OCC has confirmed that AI/ML models fall within the scope of SR 11-7 - and that validation must be appropriate to the model’s risk level and usage. For credit decisions and fraud detection, that means adversarial testing is a reasonable expectation.
MAS Technology Risk Management Guidelines require financial institutions in Singapore to manage technology risks including those arising from AI systems, with explicit attention to model integrity, explainability, and security testing in production-grade AI deployments.
DFSA Technology Governance Rules apply to regulated firms in the Dubai International Financial Centre - and the DFSA has signaled in supervisory guidance that AI governance, including security controls for AI systems, is within scope of existing technology governance obligations.
infosec.qa’s AI Governance Risk Framework service maps your AI security posture to the specific regulatory framework applicable to your institution - producing documentation structured for regulatory review, model risk committee reporting, and internal audit.
PCI DSS v4.0 and AI-Assisted Payment Flows
PCI DSS v4.0 extended security requirements to all components of the cardholder data environment - and AI-assisted payment flows fall within that scope. If your organization uses AI for fraud scoring, transaction routing, chargeback prediction, or customer authentication in the payment context, those AI components are in scope for PCI DSS.
The challenge is that the standard penetration testing methodology required by PCI DSS was designed for web applications and network infrastructure - not for AI systems. Prompt injection, tool poisoning, insecure output handling, and model integrity attacks are not covered by traditional payment application penetration testing, even when conducted by a QSA-approved security firm.
infosec.qa’s LLM Red Teaming service is structured to address PCI DSS v4.0 requirements for AI components within the cardholder data environment. Findings are documented with PCI DSS control mapping suitable for QSA review and Board reporting.
AML Compliance Systems: The Integrity Risk
Anti-money laundering screening represents one of the highest-stakes AI applications in financial services. A compromised or manipulated AML system creates regulatory exposure - and potentially criminal liability for compliance officers who relied on it.
The integrity risks to AI-powered AML systems are not theoretical:
- Data poisoning of training sets sourced from transaction monitoring systems, sanctions databases, or correspondent bank data
- Adversarial inputs crafted to fall below detection thresholds for specific transaction patterns
- Model drift induced through systematic manipulation of reference data over time
- Supply chain attacks targeting third-party AML model vendors or data providers
infosec.qa’s AI Supply Chain Security and AI Threat Intelligence services address the full lifecycle of AML model integrity - from data pipeline security to production model monitoring to threat intelligence on emerging adversarial techniques targeting financial crime compliance systems.
Frameworks We Cover
How We Help
LLM Red Teaming
AI Attack Surface Assessment
AI Governance Risk Framework
AI Supply Chain Security
AI Threat Intelligence
Know Your AI Attack Surface
Request a free AI Security Scorecard assessment and discover your AI exposure in 5 minutes.
Get Your Free Scorecard