
Hard Fraud in Insurance: Bots, Ghost Brokers, Fraud Rings, and the Rise of Agentic AI

There’s a line in the insurance fraud conversation that most carriers draw instinctively but rarely define precisely: the line between someone who wants a policy but bends the truth to get a better price, and someone who has no interest in a legitimate policy at all.

The first category — soft fraud, premium misrepresentation, underwriting evasion — is a volume problem. It’s pervasive, it moves loss ratios, and it’s what most fraud detection investments are aimed at.

The second category is different in kind. Hard fraud isn’t about gaming a premium. It’s about exploiting the carrier’s systems entirely — harvesting pre-fill data, creating fake identities, binding policies with no intention of paying beyond the coverage window needed, or running automated attacks at a scale no human review queue was designed to catch.

The Coalition Against Insurance Fraud estimates total industry losses from fraud at $308 billion annually. Hard fraud — organized rings, ghost brokers, bots, and synthetic identities — represents a smaller slice of that number than soft fraud does. But it’s the fastest-growing category, it’s disproportionately concentrated in digital channels, and it requires a detection approach that nothing in a traditional fraud stack was built to address.

Why Digital Channels Created a Hard Fraud Problem

The mechanics are simple, and they’re worth naming directly: every barrier removed for the customer is also a barrier removed for the fraudster.

Digital-first application flows were designed to reduce friction. Fewer questions, faster quotes, compressed underwriting, prefill data that auto-populates forms. These design choices improved conversion rates and customer satisfaction scores. They also created infrastructure that organized hard fraud operations now exploit at scale.

A phone-based or in-person application required a human to participate in the process. A bot doesn’t need a human. A fraud ring operating across dozens of submitted identities doesn’t need dozens of applicants at a desk. The digital channel opened a surface area that the traditional fraud toolkit — which assumes a human applicant with a prior record — was never designed to cover.

The result: hard fraud in insurance is increasingly an application-stage problem, not a claims-stage problem. And catching it requires watching what’s happening inside the session, not just checking submitted data against external records.

Bot Attacks: Non-Human Traffic in Your Quoting Funnel

Bots operate in insurance application flows for several purposes: harvesting pre-fill data, testing rate environments across thousands of parameterized submissions, probing for coverage arbitrage opportunities, and generating fraudulent applications at volume.

The behavioral signature of a bot session is unmistakable to a system designed to look for it. No human applicant fills out a 40-question form with zero hesitation, perfectly consistent inter-keystroke timing, and field completion in milliseconds. The interaction pattern that produces those results isn’t a very fast typist — it’s a programmatic input sequence.

What behavioral detection catches that other tools miss: the keystroke cadence. Bots exhibit zero inter-keystroke variance — identical timing between every keypress, no pauses, no natural rhythm. Human typists, even fast ones, show measurable variation by field type, question complexity, and moment-to-moment attention. A system measuring keystroke latency at millisecond resolution catches a bot submission regardless of what identity data it’s carrying.
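
To make the signal concrete, here is a minimal TypeScript sketch of an inter-keystroke variance check, assuming keystroke timestamps are captured client-side in milliseconds. The function names and the 5 ms threshold are illustrative assumptions, not ForMotiv's production model.

```typescript
// Minimal sketch: flag sessions whose inter-keystroke timing is too
// uniform to be human. Timestamps are assumed captured client-side in
// milliseconds; the 5 ms threshold is illustrative, not a real model.
function interKeystrokeIntervals(keystrokeTimestamps: number[]): number[] {
  const intervals: number[] = [];
  for (let i = 1; i < keystrokeTimestamps.length; i++) {
    intervals.push(keystrokeTimestamps[i] - keystrokeTimestamps[i - 1]);
  }
  return intervals;
}

function looksLikeBot(keystrokeTimestamps: number[]): boolean {
  const intervals = interKeystrokeIntervals(keystrokeTimestamps);
  if (intervals.length < 10) return false; // too little data to judge

  const mean = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  const variance =
    intervals.reduce((sum, x) => sum + (x - mean) ** 2, 0) / intervals.length;
  const stdDev = Math.sqrt(variance);

  // Humans show measurable rhythm variation; near-zero spread at
  // millisecond resolution is characteristic of programmatic input.
  return stdDev < 5;
}
```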

Secondary signals reinforce the picture: geo-velocity mismatches (an IP registered in one city routing through a data center in another country), residential proxy detection, data center ISP flags, and device fingerprints inconsistent with the claimed identity profile. Bot detection in a layered model is one of the higher-confidence fraud signals available — the behavioral pattern is distinctive and consistent.
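
A layered model might combine those signals along the following lines. The signal names and weights below are hypothetical, meant only to show how a primary behavioral signal and secondary network signals could roll up into a single routing score.

```typescript
// Illustrative layered scoring: each signal contributes a weight, and
// the combined score drives routing. Names and weights are hypothetical.
interface SessionSignals {
  zeroKeystrokeVariance: boolean; // primary behavioral signal
  geoVelocityMismatch: boolean;   // IP city vs. actual routing location
  dataCenterISP: boolean;         // traffic from hosting infrastructure
  residentialProxy: boolean;
  deviceProfileMismatch: boolean; // device vs. claimed identity profile
}

function botRiskScore(s: SessionSignals): number {
  let score = 0;
  if (s.zeroKeystrokeVariance) score += 0.5;
  if (s.geoVelocityMismatch) score += 0.15;
  if (s.dataCenterISP) score += 0.15;
  if (s.residentialProxy) score += 0.1;
  if (s.deviceProfileMismatch) score += 0.1;
  return score; // e.g. route to review above 0.6, block above 0.8
}
```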

→  How Behavioral Analytics Detects Insurance Fraud: The Carrier’s Playbook

→  Real-Time Fraud Detection: Why the Detection Window Is the Application

Ghost Brokers: The Fraud Nobody Sees Coming

Ghost brokers are among the most consequential and least-understood hard fraud categories in the insurance industry.

The mechanics: a ghost broker poses as a licensed insurance agent, takes payment from a customer for what appears to be a legitimate policy, then either never binds the coverage or binds a policy under false information and allows it to lapse. The customer believes they’re covered. The carrier may have a policy on the books that’s based on fraudulent information. When a claim happens, everyone loses — the policyholder has no real coverage, the carrier has a policy that shouldn’t have been issued, and the ghost broker is gone.

Ghost brokers tend to operate across carriers simultaneously. A single individual running a ghost brokering operation typically submits multiple applications from similar sessions — often the same device, using the same typing cadence, but entering variations of personal information to create what appear to be distinct applicants. The behavioral forensics reveal the pattern: same device fingerprint, same inter-keystroke rhythm, different names and PII.

ForMotiv’s detection approach triangulates device data and behavioral signals. A device that has bound three other policies in the last 24 hours, combined with a behavioral profile showing patterned responses and paste-heavy input, for example, generates a ghost broker alert that routes the application for mandatory human review before any policy is issued.
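
As a sketch of that triangulation logic, the rule below combines device-level bind velocity with paste-heavy input. The session fields, thresholds, and routing decision are assumptions for illustration, not ForMotiv's actual rules.

```typescript
// Illustrative ghost broker rule: device reuse plus paste-heavy input.
// Field names and thresholds are assumptions, not production logic.
interface ApplicationSession {
  deviceFingerprint: string;
  policiesBoundLast24h: number; // prior binds seen from this device
  pasteEventCount: number;
  totalFieldCount: number;
}

function ghostBrokerAlert(session: ApplicationSession): boolean {
  const deviceReuse = session.policiesBoundLast24h >= 3;
  const pasteHeavy =
    session.pasteEventCount / session.totalFieldCount > 0.5;
  // Either signal alone is weak; together they route the application
  // for mandatory human review before any policy is issued.
  return deviceReuse && pasteHeavy;
}
```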

The ghost broker problem is growing as digital-first distribution removes the agent-relationship layer that used to provide natural friction. When the “agent” is a website, the mechanisms that used to make ghost brokering logistically difficult disappear.

→  How To Catch Insurance Agent Fraud & Prevent Gaming

→  Insurance Fraud Detection: The Complete Guide for Carriers

Fraud Rings: The Cross-Application Signal Problem

Organized fraud rings operate differently from individual fraudsters. They submit multiple applications across a carrier’s book, often using variations of real identities or shared personal data — a common pool of PII assembled from prior data breaches or purchased on identity markets — and coordinate submissions to stay below the velocity thresholds that trigger individual-application red flags.

The challenge for traditional fraud detection: each application looks clean in isolation. There’s no single submission that obviously fails a rule-based check. The fraud exists at the pattern level, across submissions, not within any one of them.

Behavioral analytics catches fraud rings at the session level. A fraud ring operating from a common infrastructure shares typing signatures, navigation behavior, and interaction patterns across its submissions — even when the submitted PII is different. A system measuring behavioral biometrics at the session level can identify that applications appearing to come from different applicants were actually generated from the same source.
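
One way to picture that session-level linking: represent each session as a small behavioral feature vector and flag pairs whose vectors are nearly identical despite different submitted PII. The features and distance threshold below are hypothetical, and real features would be normalized before comparison.

```typescript
// Illustrative cross-application linking: sessions with nearly identical
// behavioral vectors get linked even when the PII differs. Features and
// the threshold are hypothetical; values assumed pre-normalized.
interface SessionFeatures {
  sessionId: string;
  meanKeystrokeInterval: number;
  keystrokeIntervalStdDev: number;
  meanFieldDwell: number;
  backtrackRate: number; // fraction of fields revisited
}

function behavioralDistance(a: SessionFeatures, b: SessionFeatures): number {
  // Euclidean distance over pre-normalized behavioral features.
  return Math.sqrt(
    (a.meanKeystrokeInterval - b.meanKeystrokeInterval) ** 2 +
    (a.keystrokeIntervalStdDev - b.keystrokeIntervalStdDev) ** 2 +
    (a.meanFieldDwell - b.meanFieldDwell) ** 2 +
    (a.backtrackRate - b.backtrackRate) ** 2
  );
}

// Pairs below the threshold become candidate ring clusters for review.
function linkedPairs(
  sessions: SessionFeatures[],
  threshold = 1.0
): [string, string][] {
  const pairs: [string, string][] = [];
  for (let i = 0; i < sessions.length; i++) {
    for (let j = i + 1; j < sessions.length; j++) {
      if (behavioralDistance(sessions[i], sessions[j]) < threshold) {
        pairs.push([sessions[i].sessionId, sessions[j].sessionId]);
      }
    }
  }
  return pairs;
}
```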

The FBI identifies organized fraud rings as among the most financially damaging categories of insurance fraud, with sophisticated operations often running for months or years before detection. The cross-carrier nature of ring activity means individual carriers rarely see the full picture; behavioral data flagging anomalous patterns within a single carrier’s book is often just the visible edge of a much larger operation.

The evidentiary value of this data extends beyond the detection moment. SIU teams investigating a fraud ring after the fact can use behavioral forensics to reconstruct the submission pattern, identify the common origin of applications, and build a case linking what appeared to be unrelated policies into a coordinated scheme.

→  Insurance Fraud Detection: The Complete Guide for Carriers

Agentic AI: The Newest Hard Fraud Vector

This one is worth spending time on because most carriers haven’t fully processed the implications yet.

AI agents — large language model-based systems capable of perceiving a digital environment and taking actions within it — can now complete insurance application forms autonomously on behalf of a user. The technology exists, it’s improving rapidly, and it is already appearing in carrier quoting flows in small but detectable volumes.

The challenge isn’t that AI agents complete forms. Some carriers are interested in building APIs specifically to accommodate agentic shopping — it reduces friction and may be part of a legitimate digital experience strategy. The challenge is the fraud vector it creates: an AI agent can be instructed to optimize an application to minimize premiums by systematically testing answer combinations, can be given parameters designed to evade detection, and can complete applications faster and more precisely than any human.

What distinguishes agentic AI-completed applications from human-completed ones is the behavioral signature. A human applicant shows hesitation before complex questions, variable typing speed across field types, non-linear navigation — scrolling back to review, changing answers after reconsidering — and natural variation in inter-keystroke timing. An AI agent completing a form in sequence shows none of this: Field 1 → Field 2 → Field 3, no deviations, zero hesitation, superhuman consistency.
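
A simplified heuristic for that divergence might look like the sketch below, which checks for strictly sequential field order and uniformly short dwell times. The event shape and thresholds are illustrative assumptions, not the production detection.

```typescript
// Illustrative agentic-session heuristic: strictly sequential field
// completion plus near-zero hesitation. Event shape and the 300 ms
// threshold are assumptions for this sketch.
interface FieldEvent {
  fieldIndex: number;  // position of the field in the form
  dwellTimeMs: number; // time spent in the field before moving on
}

function looksAgentic(events: FieldEvent[]): boolean {
  if (events.length < 10) return false; // too little data to judge

  // Humans scroll back and revise; a strictly increasing field order
  // across a long form is itself anomalous.
  const strictlySequential = events.every(
    (e, i) => i === 0 || e.fieldIndex === events[i - 1].fieldIndex + 1
  );

  // Humans hesitate on complex questions; uniformly short dwell times
  // across every field type indicate machine completion.
  const noHesitation = events.every((e) => e.dwellTimeMs < 300);

  return strictlySequential && noHesitation;
}
```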

ForMotiv data from 2.5 million sessions at one of our customers identified approximately 160 agentic sessions over a recent six-month period — 0.0065% of traffic. That number sounds inconsequential. But the trajectory matters more than the current volume. Carrier decisions made now about how to handle agentic traffic — block all of it, route it for human attestation, build APIs to accommodate it — will define the fraud surface they face in two to three years.

ForMotiv is currently in production detecting and flagging agentic sessions, giving carriers the ability to force human attestation on applications where the behavioral profile is inconsistent with human completion. The detection is based on behavioral divergence from human baseline — not any specific technology signature — which means it remains effective as agentic tools evolve.

→  Real-Time Fraud Detection: Why the Detection Window Is the Application

→  7 Ways to Detect Insurance Fraud in Real-Time

SIU and Forensics: Detection That Works Backward

The discussion so far has been about preventing hard fraud before a policy binds. But behavioral analytics also has distinct value at the back end of the fraud lifecycle — in the work Special Investigation Units do after a suspicious claim surfaces.

Traditional SIU investigation starts with a claim and works backward to assess whether the policy that produced it was legitimate. What SIU investigators have historically lacked is visibility into the application session itself — the digital behavior that preceded the submitted answers.

Behavioral data captured during the application creates an evidentiary record that persists after bind. When a claim comes in on a policy that behavioral analytics scored as high-risk at origination, that session data is available for retrospective analysis: what was entered and deleted, what the hesitation patterns looked like on specific questions, whether the session profile matches known fraud ring signatures, what device and network data accompanied the submission.
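
Conceptually, the archived record an investigator pulls might look something like the shape below. The field names are hypothetical, not ForMotiv's schema; note that it stores edit counts and timing rather than entered values, consistent with capturing no PII.

```typescript
// Hypothetical shape of a persisted session record available to SIU
// after bind. Behavioral and device data only; no entered values.
interface ArchivedSessionRecord {
  policyId: string;
  riskScoreAtBind: number;                   // behavioral score at origination
  fieldEditCounts: Record<string, number>;   // entries deleted/retyped, per field
  hesitationByQuestionMs: Record<string, number>; // dwell time per question
  deviceFingerprint: string;
  networkFlags: string[];                    // e.g. proxy, data-center ISP
  matchedRingSignatures: string[];           // known fraud ring profile matches
}
```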

This is a tool SIU teams haven’t previously had access to. A claim on a policy completed by a bot session is an entirely different investigation than a claim on a policy completed by a human applicant with unusual behavioral patterns. The ability to distinguish those cases — before an investigation even begins — means SIU resources go where they’re most needed.

Carriers with robust behavioral data capture are also better positioned to collaborate with law enforcement on organized fraud cases, because they can produce session-level evidence connecting submissions across what appeared to be distinct policies.

→  Life Insurance Fraud Detection: The Nondisclosure Problem

→  Insurance Fraud Detection: The Complete Guide for Carriers

The Common Thread

Each of the hard fraud categories above — bots, ghost brokers, fraud rings, agentic AI — shares a characteristic that makes it difficult for traditional detection tools to catch: the fraud doesn’t exist in the submitted data.

A bot session with a clean synthetic identity passes third-party data checks. A ghost broker with a real customer’s information passes identity verification. A fraud ring using slight variations of real PII doesn’t trigger velocity alerts at the individual application level. An AI agent completing a form perfectly produces no data anomalies at all.

What all of these share is that the fraud signal exists in the behavioral layer — in how the application was completed, not what it contains. That’s the gap behavioral analytics was built to close.

Hard fraud is not going away. The digital channels that enable it are expanding, the tools available to fraudsters are improving, and the detection infrastructure at most carriers was designed for a different threat environment. The carriers closing that gap now are doing it at the session level, before submission, in real time.

 

Want to see how ForMotiv’s behavioral layer catches hard fraud before it reaches underwriting? Let’s talk.
