Circadify
Insurance Technology · 9 min read

What Is Applicant Friction? Measuring Drop-Off in Insurance Screening

How to identify and measure applicant friction in digital insurance screening, where drop-off happens, and what the data says about fixing it.

gethealthscan.com Research Team

Applicant friction in insurance screening refers to anything that slows down, confuses, or causes an applicant to abandon a health assessment during the underwriting process. The term gets thrown around a lot in insurtech circles, but few carriers actually measure it well. Most know their not-taken rate is too high. Fewer can tell you exactly where in the screening flow people leave, or why.

That gap between knowing friction exists and actually quantifying it is where a lot of money disappears. According to LIMRA's 2025 Insurance Barometer Study, 40% of American adults believe they need more life insurance coverage than they currently have, yet application volumes don't reflect that demand. Something breaks between intent and completion, and measuring applicant drop-off during insurance screening is the discipline of figuring out what.

RGA's 2024 research on digital disclosure found that two-thirds of insurance applicants underestimated their BMIs during digital self-reporting. The study argued that some friction in the process might actually improve data quality, coining the term "positive friction" to describe deliberate slowdown points that lead to more honest answers.

What counts as friction and what doesn't

Not all friction is bad. That's the first thing carriers get wrong when they try to streamline their digital screening processes. RGA's research team, led by Daniel Schömer and Christoph Krammer, published findings in 2024 showing that removing too much friction from digital applications actually degraded underwriting data quality. Applicants who breezed through self-reported health questions gave less accurate answers than those who encountered verification steps.

The distinction matters. There's "bad friction," unnecessary complexity that drives people away, and there's "positive friction," intentional checkpoints that improve the quality of collected data.

| Friction type | Example | Effect on drop-off | Effect on data quality | Should you remove it? |
| --- | --- | --- | --- | --- |
| Bad friction | Requiring account creation before any screening | Increases drop-off 15-25% | No improvement | Yes |
| Bad friction | Multi-page medical history questionnaires | Increases drop-off 10-18% | Marginal improvement | Redesign it |
| Positive friction | Camera permission explanation screen | Brief pause, minimal drop-off | Better scan completion | Keep it, but make it clear |
| Positive friction | BMI self-report with photo verification | Slight increase in time-on-task | Significantly better accuracy | Keep it |
| Bad friction | Email verification loops mid-assessment | 8-12% abandon at this step | None | Yes |
| Neutral | Progress bar showing steps remaining | No measurable drop-off impact | No direct impact | Keep it, helps with perceived control |

Where applicants actually leave

Cake & Arrow, an insurance-focused design consultancy, published a report in 2025 examining friction patterns across B2C, B2B, and B2B2C insurance experiences. Their analysis identified what they called "mission friction," which occurs when internal teams aren't aligned on what the digital product is supposed to accomplish. A screening tool built by the IT team to collect maximum data will look very different from one designed by the product team to maximize completion rates.

The drop-off points cluster predictably. Data from Quantum Metric's healthcare analytics and separate LIMRA research on digital insurance journeys both point to the same zones:

The first 30 seconds. If the applicant doesn't understand what's about to happen, they leave. This is the highest-leverage moment in any digital screening flow. A confusing landing page or unexpected permission request here kills more assessments than anything downstream.

The medical history section. Long questionnaires using clinical terminology cause a second wave of abandonment. Applicants worry that disclosing conditions will disqualify them, so they close the tab instead of answering. The irony is that many of these questions could be replaced by biometric data that doesn't require self-reporting at all.

Device permission prompts. When a screening involves camera access for contactless vitals measurement, the permission request is a friction point. But it's a recoverable one. Carriers that explain why the camera is needed before the prompt appears see 60-70% higher permission grant rates than those that let the browser's generic dialog box do the talking.

The results wait. If the applicant completes the screening and then has to wait days for underwriting results, the emotional momentum is gone. MIB Group reported in early 2025 that life insurance application activity hit record growth, but conversion from completed application to issued policy still lags behind the volume increase. The gap between completion and issuance is its own friction category.

How to measure it

Most carriers track their not-taken rate, which is the percentage of applicants who receive an offer but never activate the policy. LIMRA has published not-taken rate benchmarks for decades. But that metric only captures the end of the funnel. It says nothing about people who dropped out during screening itself.

A more complete measurement framework looks at four layers:

Funnel conversion by step. Tag each screen or step in the digital assessment with an analytics event. Calculate the percentage of applicants who advance from each step to the next. Any step with more than 5% drop-off deserves investigation.

Time-on-step analysis. Applicants who spend an unusually long time on one screen are often confused. Those who spend almost no time may be clicking through without reading. Both patterns signal friction, but different kinds.

Return rate. How many applicants start the screening, leave, and come back later? High return rates suggest the process is too long for a single sitting. Insurity's 2025 consumer survey found that 1 in 5 consumers avoid filing insurance claims entirely due to frustrating digital processes. If that's the claims side, the application side is likely worse.

Device and browser segmentation. Friction isn't uniform across platforms. Mobile applicants on older Android devices may encounter camera permission issues that iPhone users never see. Breaking completion rates down by device type often reveals fixable technical problems hiding inside an average completion number.
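The first layer of this framework, funnel conversion by step, can be sketched in a few lines of analysis code. This is a minimal illustration, not a reference implementation: the step names, the `(applicant_id, step)` event shape, and the 5% investigation threshold are assumptions chosen to mirror the discussion above, and a real pipeline would read from your analytics warehouse.

```python
# Minimal sketch of step-level funnel analysis for a screening flow.
# Assumes each analytics event is an (applicant_id, step_name) pair;
# step names below are illustrative, not from any specific product.

from collections import Counter

STEPS = ["landing", "consent", "medical_history", "camera_permission", "results"]

def funnel_conversion(events, steps=STEPS):
    """Return step-to-step drop-off rates, flagging steps above 5% drop-off."""
    seen = {}  # applicant_id -> set of steps that applicant reached
    for applicant_id, step in events:
        seen.setdefault(applicant_id, set()).add(step)

    reached = Counter()  # step -> number of distinct applicants who reached it
    for steps_reached in seen.values():
        for step in steps_reached:
            reached[step] += 1

    counts = [reached[s] for s in steps]
    report = []
    for prev, cur, prev_n, cur_n in zip(steps, steps[1:], counts, counts[1:]):
        drop = (prev_n - cur_n) / prev_n if prev_n else 0.0
        report.append({
            "from": prev,
            "to": cur,
            "drop_off": round(drop, 3),
            "flag": drop > 0.05,  # the 5% investigation threshold discussed above
        })
    return report
```

Segmenting the same report by device type (the fourth layer) is a matter of filtering `events` per segment before calling `funnel_conversion`, then comparing the flagged steps across segments.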

What "good" looks like

The industry doesn't have universally agreed-upon benchmarks for screening completion rates, partly because the definition of "screening" varies so much. A 10-question health questionnaire and a full biometric assessment with camera-based vitals capture are both called "digital health screening," but they have very different baseline completion rates.

That said, some reference points exist. Carriers using simplified or accelerated underwriting methods, which LIMRA's 2025 consumer research showed applicants prefer over traditional methods, generally see completion rates 15-20 percentage points higher than those requiring extensive medical disclosure. The difference isn't the technology. It's how much the carrier asks the applicant to do.

The positive friction argument

The RGA study deserves more attention than it's gotten. Daniel Schömer's team tested whether making digital applications easier actually helped insurers. Their finding that two-thirds of applicants underestimated their BMIs in frictionless digital flows is a problem. An underwriter working with inaccurate self-reported data makes worse decisions than one working with accurate data collected through a slightly slower process.

This creates a real tension. The product team wants a 90-second screening that everyone completes. The actuarial team wants data they can trust. The answer isn't to pick one side. It's to identify which friction points improve data quality (keep those) and which just annoy people (remove those).

Contactless vitals measurement changes this equation. When a camera-based scan captures heart rate, respiratory rate, and blood oxygen data directly, the applicant doesn't have to self-report those numbers. The friction of answering health questions gets replaced by the minor friction of granting camera access, which takes a few seconds. The data quality goes up because it's measured, not self-reported.

What the not-taken rate actually tells you

LIMRA has tracked policy lapse and not-taken rates for years. Their 2019 data showed annual lapse rates of 4.3% for traditional and indexed universal life, 6.0% for variable universal life, and 2.9% for whole life. But these numbers describe policies that were already issued. The not-taken rate for policies offered but never activated sits much higher, and it's the piece most directly connected to screening friction.

When an applicant completes screening, receives an offer, and then doesn't follow through, the screening process itself may be partly responsible. If the assessment took 20 minutes and felt invasive, the applicant may have developed negative associations with the carrier before they even saw the offer. The screening experience colors everything that follows.

MIB Group's 2025 data showed male applicants at 47.7% of total applications and female applicants at 45.4%, with application volumes hitting records. But volume alone doesn't solve the friction problem. More people starting applications means the absolute number of drop-offs also increases unless the per-step conversion rates improve.

Frequently asked questions

What is a good completion rate for digital health screening in insurance?

It depends on the complexity of the assessment. Simple questionnaire-based screenings typically see 75-85% completion. Biometric assessments involving device permissions and camera access range from 55-70%, though carriers with well-designed permission flows report rates above 75%. The comparison isn't apples to apples because biometric screenings collect higher-quality data.

How do you calculate applicant friction?

Measure the drop-off rate between each step of the screening process. If 1,000 applicants start and 700 finish, overall friction is 30%. But the useful analysis is step-by-step: if 200 of those 300 drop-offs happen at account creation, that's where to focus. Time-on-step data and device segmentation add further detail.
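That arithmetic can be written as a two-function sketch, using the same hypothetical numbers as the example above:

```python
def overall_friction(started, completed):
    """Share of applicants who started the screening but never finished it."""
    return (started - completed) / started

def step_share(step_dropoffs, total_dropoffs):
    """Fraction of all abandonment attributable to a single step."""
    return step_dropoffs / total_dropoffs

# 1,000 starts and 700 completions -> 30% overall friction;
# 200 of the 300 drop-offs at account creation -> two-thirds of the problem.
friction = overall_friction(1000, 700)
account_creation_share = step_share(200, 300)
```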

Does reducing friction hurt data quality?

It can. RGA's 2024 research showed that frictionless digital applications led to less accurate self-reported health data. The solution isn't to add friction everywhere, but to distinguish between friction that improves accuracy (positive friction) and friction that just creates obstacles (bad friction). Camera-based biometric capture sidesteps much of this tradeoff by measuring data directly instead of asking for it.

What is the insurance not-taken rate?

The not-taken rate is the percentage of applicants who receive an insurance offer but don't activate the policy. LIMRA tracks this metric across the industry. A high not-taken rate often signals problems earlier in the funnel, including screening friction, that reduce the applicant's motivation to follow through. Reducing screening friction tends to improve not-taken rates even though the two metrics measure different parts of the process.

Carriers looking to reduce applicant friction in their digital screening workflows are increasingly turning to contactless vitals technology, which replaces lengthy health questionnaires with a quick camera-based scan. Companies like Circadify are building this capability specifically for insurance underwriting workflows, capturing biometric data in seconds without requiring the applicant to self-report anything. For a deeper look at how screening integrates with underwriting engines, see our post on health screening and underwriting engine integration. You can also read about optimizing digital health assessment completion rates for practical implementation details.
