Digital Banking Fraud: Understanding Telemetry as a Control Input

Updated: Feb 8

Digital banking fraud and cyber-enabled financial crime usually manifest as one operational problem: unauthorized automation interacting with customer accounts at scale. Banks that manage this issue effectively do one thing consistently. They treat customer account telemetry as a first-class control input, not merely a forensic artifact.


The core idea is simple. If you cannot attribute automation, bound it, and revoke it, you cannot reliably distinguish authorized automation from attacks.


Telemetry as the Control Plane for Good Bot vs Bad Bot in Digital Banking

The Operational Problem: One Event Stream, Multiple Owners


The same automation event can trigger four different obligations inside a bank:


  1. Cyber Concerns: Intrusion attempts and unauthorized access.

  2. Fraud Concerns: Account takeover, payment initiation, mule activity, and downstream losses.

  3. Consumer Compliance Obligations: Error resolution, notices, investigation steps, and provisional credit workflows.

  4. BSA AML Decisions: Investigation queues, SAR narratives, and escalation decisions.


When telemetry is treated as forensics only, each team builds partial views and inconsistent controls. The outcome is predictable: controls are strong at login but weak in recovery, payment controls do not incorporate upstream access anomalies, and disputes are treated as losses rather than as labeled outcomes that can improve controls.


What Supervisory Materials Imply About Telemetry


Interagency information security standards aligned to GLBA expectations include monitoring to detect actual and attempted attacks, along with response programs for unauthorized access. FFIEC authentication and access guidance elevates monitoring, logging, and reporting to identify and track unauthorized access. It also emphasizes managing risk from customer-permissioned entities that access bank systems through credential-based and token- or API-based methods.


Operational Translation


Telemetry must support both real-time risk decisions and post-event reconstruction for investigations, consumer compliance files, and SAR narratives.


Good Bot vs. Bad Bot: A Control Plane Decision


Good bot vs. bad bot should not be treated as a branding distinction. It is a control plane distinction. Banks should define bot legitimacy in terms of evidence and enforceable constraints.


A Regulator-Safe Definition Has Two Sides


Authorized Automation: Automation acting under customer or bank authorization where the bank can prove three things:


  1. Attribution: The bank can identify who or what is acting and under what authority.

  2. Bounded Behavior: Scope, rate, and permissions are enforced and measurable.

  3. Auditability and Revocation: Actions are logged, and access can be revoked quickly without breaking the customer’s legitimate access.


Malicious Automation: Automation attempting to obtain or use access without authorization, often at scale. This includes automated credential testing, scripted recovery abuse, and automated payment attempts.


This framing is practitioner inference, but it aligns with risk-based monitoring and access control expectations reflected in interagency and FFIEC guidance.


Build a Telemetry Control Plane, Not Disconnected Logs


A common failure mode is local optimization. Banks harden login, then leave gaps at the seams: recovery workflows, call center tooling, device enrollment, profile changes, and payment initiation. Those seams are where account takeover completes and monetization starts.


A Defensible Operating Principle


Risk continuity is essential. Risk should not reset at channel boundaries.


Examples of Boundaries That Frequently Break Controls


  • Mobile to web

  • Digital to call center

  • Authentication to recovery

  • Recovery to account changes

  • Account changes to payment initiation

  • Payments to disputes


Risk continuity requires event linkage. The same customer, session, device, and channel context must be available to fraud operations, cyber responders, consumer compliance, and, where appropriate, AML investigators, subject to policy and legal constraints.
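Event linkage can start as something very simple: group the raw event stream on a shared identifier and order by time, so every downstream team sees the same sequence. A minimal sketch, assuming a flat event shape with hypothetical `customer_id` and `ts` fields:

```python
from collections import defaultdict

def link_events(events: list[dict]) -> dict[str, list[dict]]:
    """Group raw events into per-customer timelines so risk context
    carries across channel boundaries (event shape is hypothetical)."""
    timelines = defaultdict(list)
    for e in events:
        timelines[e["customer_id"]].append(e)
    for timeline in timelines.values():
        timeline.sort(key=lambda e: e["ts"])
    return dict(timelines)

events = [
    {"customer_id": "C1", "ts": 2, "channel": "web", "type": "payee_add"},
    {"customer_id": "C1", "ts": 1, "channel": "call_center", "type": "password_reset"},
]
# Linked, the call-center reset precedes the web payee add in one timeline,
# a classic ATO-to-monetization sequence that per-channel views would miss.
```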


Minimum Viable Telemetry for Bot Discrimination and ATO-to-Fraud Cascades


You do not need a perfect schema to begin. You need consistent coverage across the customer journey so the bank can connect events into cases and drive consistent decisions.


Access Telemetry


  1. Authentication events and outcomes

  2. Session creation and concurrency signals

  3. Failed-attempt patterns, monitored without exposing attacker-relevant thresholds

  4. Device and platform attributes

  5. Network indicators such as IP and ASN level patterns where permitted

  6. Step-up outcomes and authenticator changes


Recovery Telemetry


  1. Password resets and recovery workflow pathing

  2. Identity verification steps used in digital and call center flows

  3. Contact point changes tied to recovery

  4. Exceptions and overrides in assisted channels


Account Change Telemetry


  1. Profile and contact changes

  2. New device enrollment

  3. Payee and beneficiary adds

  4. Limits changes

  5. Alert suppression and preference changes


Payment Telemetry


  1. Initiation channel and session context where possible

  2. Payee novelty

  3. Velocity and amount anomalies based on bank policy

  4. Funding source changes

  5. Confirmation steps and outcomes

  6. Holds and releases


Dispute Telemetry


  1. Dispute types and timing

  2. Resolution outcomes and reason codes

  3. Linkage to access, recovery, and payment event timelines


Why this matters: logs are only monitoring if teams can connect events into cases and drive consistent decisions. Otherwise, monitoring becomes performative.
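One way to keep that coverage consistent is to validate every event against a shared data dictionary before it enters the case pipeline. A minimal sketch; the required fields and event-type vocabulary below are illustrative, not a recommended schema:

```python
# Shared identifiers that make events linkable into cases (hypothetical).
REQUIRED_FIELDS = {"event_type", "ts", "customer_id", "session_id", "device_id", "channel"}

# A few event types drawn from the telemetry categories above (illustrative).
EVENT_TYPES = {
    "auth_success", "auth_failure", "password_reset", "contact_change",
    "device_enroll", "payee_add", "payment_initiated", "dispute_opened",
}

def validate_event(event: dict) -> list[str]:
    """Return schema violations; an empty list means the event is linkable."""
    problems = [f"missing:{f}" for f in REQUIRED_FIELDS - event.keys()]
    if event.get("event_type") not in EVENT_TYPES:
        problems.append("unknown_event_type")
    return problems
```

Events that fail validation are exactly the ones that later make an investigation timeline impossible to reconstruct.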


Disputes and Chargebacks: Treat Them as Regulated Outcomes and Feedback Signals


Disputes and chargebacks are not only loss events. They are regulated outcomes that create consumer compliance obligations, including error resolution workflows under Regulation E and billing error resolution workflows under Regulation Z where applicable.


Operationally, Disputes Should Be Treated as Late Signals


By the time a dispute arrives, a preceding control failure has likely already occurred. Dispute outcomes can improve bot discrimination and fraud controls, but they are noisy: friendly fraud, merchant disputes, and service issues introduce label noise, and bank process variance introduces bias. If teams train models directly on dispute outcomes without governance, they risk circular decision-making in which the model learns the bank's workflows rather than fraud truth.


A Defensible Approach


Separate two workflows that must not be conflated:


  1. Fraud Prevention Decisioning: Real-time controls such as step-up, holds, and blocks.

  2. Consumer Error Resolution: Regulated timelines, notices, investigation steps, and provisional credit decisions where required.


Telemetry should support both workflows with a shared evidence trail, but fraud automation should not override consumer compliance obligations.
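The separation can be made explicit in code: dispute intake always opens the regulated error-resolution track, while the fraud score only influences forward-looking controls. A hypothetical sketch; the dispute shape, type names, and threshold are assumptions:

```python
def handle_dispute(dispute: dict, fraud_score: float) -> dict:
    """Route one dispute through two independent workflows (sketch).

    The fraud score may add friction to future activity, but it never
    short-circuits the regulated error-resolution track."""
    return {
        # Consumer error resolution: always opened, on regulated timelines.
        "error_resolution_case": True,
        "provisional_credit_review": dispute["type"] == "unauthorized_eft",
        # Fraud prevention decisioning: informs controls, not the case outcome.
        "step_up_future_sessions": fraud_score > 0.8,
    }
```

Note that no branch allows a high fraud score to suppress the error-resolution case; that is the invariant worth testing.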


AI and ML: If It Affects Customers, Treat It Like a Model


When AI or ML is used to classify bots, detect account takeover, score transactions, route disputes, or generate SAR investigation leads, it becomes a control mechanism. It must be governed.


A Defensible Baseline


A defensible baseline aligns to model risk management principles such as effective challenge, model inventories, documentation, performance monitoring, and internal audit. If a model output can deny access, add friction, hold payments, influence dispute routing, or trigger AML escalation, treat it as high impact and govern accordingly.
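A model inventory entry can encode that high-impact test directly, so the tier is derived from what the model can do to a customer rather than asserted by its owner. An illustrative sketch with hypothetical fields:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in a model inventory (hypothetical schema)."""
    name: str
    purpose: str
    # Actions the model can trigger, e.g. "deny_access", "hold_payment".
    customer_impacting_actions: list[str] = field(default_factory=list)

    @property
    def impact_tier(self) -> str:
        # Any ability to deny, delay, or escalate makes the model high impact.
        return "high" if self.customer_impacting_actions else "low"
```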


If generative AI is used to draft narratives, summaries, or investigation artifacts, add controls to ensure reproducibility and prevent fabricated rationales. Keep narrative generation constrained to observed facts and traceable supporting evidence.


Third-Party Relationships Determine Whether Bot Discrimination Is Even Possible


Third parties shape both the attack surface and the authorized automation surface. Aggregators and fintech partners can create legitimate automated access patterns that resemble attack traffic. Without explicit governance, banks either over-block and harm customers or under-control and expand their risk.


Interagency Third Party Guidance


Interagency third-party guidance emphasizes inventory, risk tiering that identifies critical activities, access to the third-party data needed for oversight, and independent reviews. Those expectations map directly to good bot governance.


If you allow credential-based access by customer-permissioned entities, manage it explicitly and document compensating controls. If you offer token or API-based pathways, require measurable controls such as scope enforcement, access logs, and revocation workflows that integrate into the same telemetry plane.
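Revocation that does not break legitimate access follows naturally when third-party tokens are kept distinct from the customer's own credentials. A minimal sketch, assuming a flat token record with a hypothetical `client_id` field:

```python
def revoke_client(tokens: list[dict], client_id: str) -> int:
    """Deactivate every token issued to one third party while leaving
    the customer's direct credentials untouched (hypothetical token shape).
    Returns the number of tokens revoked, for the audit trail."""
    revoked = 0
    for t in tokens:
        if t["client_id"] == client_id and t["active"]:
            t["active"] = False
            revoked += 1
    return revoked
```

Because the customer's direct-access token carries a different `client_id`, cutting off a misbehaving aggregator never locks the customer out.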


BSA AML: Telemetry Is Financially Relevant Context, Not Just Cyber Context


Cyber-enabled activity often becomes suspicious financial activity. FinCEN encourages including cyber-related indicators such as IP addresses with timestamps and device identifiers in SAR narratives and encourages coordination between cybersecurity, fraud, and BSA AML teams.


Operationally, Bot and Device Signals Become AML Relevant


Bot and device signals become AML relevant when they connect to suspicious flows such as rapid funds movement, mule behaviors, or repeat patterns across channels. This must be handled under privacy controls and SAR confidentiality constraints, but the direction is clear: a unified telemetry strategy supports better escalation quality and more consistent reporting.
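Because a linked timeline already carries IPs with timestamps and device identifiers, extracting SAR-ready cyber indicators becomes a simple projection. An illustrative sketch over a hypothetical event shape:

```python
def cyber_indicators(timeline: list[dict]) -> list[dict]:
    """Project a linked event timeline down to the cyber-related indicators
    (IP address with timestamp, device identifier) that FinCEN encourages
    including in SAR narratives. Event field names are hypothetical."""
    return [
        {"ts": e["ts"], "ip": e["ip"], "device_id": e.get("device_id")}
        for e in timeline
        if "ip" in e
    ]
```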


Implementation Blueprint: What to Build and How to Govern It


A practical implementation can be executed in five steps:


  1. Set Governance First: Create a cross-functional forum that includes fraud, cyber, AML, consumer compliance, and third-party risk. Define what decisions telemetry drives. Establish data access rules, retention, privacy constraints, and audit expectations.

  2. Standardize the Event Model: Define event types and required fields. Create shared identifiers for customer, session, device, channel, and case. Enforce a data dictionary so investigations are repeatable.

  3. Build an Authorized Automation Program: Define approved integration paths. Maintain third-party identifiers. Enforce scope and bounds. Implement revocation workflows that do not break legitimate customer access.

  4. Treat Disputes as Both Compliance Work and Control Feedback: Protect Regulation E and Z workflows from auto-deny logic. Build label governance so dispute outcomes can be used as feedback signals without circularity.

  5. Align AI and ML to Model Governance: Inventory decision systems. Define validation and monitoring plans. Implement effective challenge. Ensure auditability and evidence traceability for customer-impacting outcomes.


Common Pitfalls That Break Programs


  • Telemetry gaps in recovery and call center flows

  • Risk reset across channels

  • No explicit treatment of customer-permissioned access

  • Dispute outcomes treated as ground truth without governance

  • AI deployed as a product feature instead of a controlled decision system


Closing: What Good Looks Like


A mature bank posture looks like this. The institution can distinguish authorized automation from attacks because it can prove attribution, enforce bounds, and revoke access. It carries risk context across the customer journey. It treats disputes as regulated outcomes and governed feedback signals. It aligns AI decisioning to model governance. It manages third-party access as a controlled surface. It can produce a defensible evidence trail for fraud response, consumer compliance, and SAR narratives.


Orbis Intelligence LLC helps financial institutions design account telemetry-driven fraud and financial crime control planes that connect cyber, fraud, disputes, and BSA AML outcomes. Orbisintelligence.com

 
 
 
