Phone Camera vs Wearable Device: Which Captures Better Vitals for Underwriting
Comparing phone camera rPPG and wearable devices for insurance underwriting vitals capture, including accuracy, cost, compliance, and applicant experience.

Insurance carriers evaluating digital health screening tools face a real decision: should they capture applicant vitals through a phone camera scan or require a wearable device? Both approaches eliminate the paramedical nurse visit. Both produce biometric data underwriters can use. But they differ in ways that matter for underwriting operations — and the differences run deeper than sensor accuracy. Applicant behavior, compliance burden, and total program cost all shift depending on which path a carrier picks.
A 2023 medRxiv preprint evaluating smartphone-based rPPG vital monitoring (the WellFie study) found heart rate prediction accuracy of 97.34% and respiratory rate accuracy of 84.44% when compared against clinical reference standards — numbers that fall within acceptable ranges for screening-level underwriting decisions.
How phone camera vs wearable vitals underwriting actually works
Phone camera-based screening uses remote photoplethysmography (rPPG). The applicant opens a link on their smartphone, holds the camera to their face for 30 to 60 seconds, and the software extracts cardiovascular signals from subtle color changes in their skin caused by blood flow. No app download required in most implementations. No hardware beyond the phone they already own.
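The core signal extraction described above can be illustrated with a minimal sketch of the classic green-channel approach: spatially average the green channel of each video frame, then find the dominant frequency in the plausible heart-rate band. This is a toy illustration under stated assumptions, not any vendor's actual pipeline — production rPPG systems add face tracking, skin segmentation, motion compensation, and more sophisticated signal separation.

```python
import numpy as np

def estimate_heart_rate_bpm(frames, fps):
    """Toy rPPG: estimate heart rate from facial video frames.

    frames: array of shape (n_frames, height, width, 3), RGB.
    fps: video frame rate in frames per second.
    """
    # Spatially average the green channel per frame -> 1-D pulse signal
    # (blood volume changes modulate green light absorption most strongly)
    signal = frames[:, :, :, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()  # remove the DC offset

    # Find the dominant frequency within a plausible heart-rate band
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.0)  # 42-180 bpm
    peak_freq = freqs[band][np.argmax(power[band])]
    return peak_freq * 60.0

# Synthetic demo: 30 s of 30 fps "video" carrying a 1.2 Hz (72 bpm) pulse
fps, seconds = 30, 30
t = np.arange(fps * seconds) / fps
green = 128 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
frames = np.zeros((len(t), 4, 4, 3))
frames[:, :, :, 1] = green[:, None, None]
print(round(estimate_heart_rate_bpm(frames, fps)))  # prints 72
```

The 30-to-60-second scan window exists precisely because frequency analysis like this needs enough cardiac cycles to resolve the pulse rate reliably.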
Wearable-based screening requires the applicant to wear a device — typically a smartwatch or fitness tracker — for a period ranging from a few hours to several weeks, depending on the carrier's program design. The device continuously records heart rate, heart rate variability, activity levels, and sleep patterns. Some programs ship devices to applicants; others accept data from devices the applicant already owns.
These are fundamentally different data collection philosophies. One captures a point-in-time snapshot. The other captures a longitudinal profile. Neither is inherently better. The question is which one serves the underwriting decision you actually need to make.
What each method measures
The overlap between phone camera and wearable vitals is narrower than it first appears. Both capture heart rate and heart rate variability, but the depth and context of that data differ significantly.
| Metric | Phone camera (rPPG) | Wearable device |
|---|---|---|
| Resting heart rate | Yes, single measurement | Yes, continuous over days/weeks |
| Heart rate variability | Yes, short-term HRV window | Yes, longitudinal HRV trends |
| Respiratory rate | Yes | Some devices, varies by model |
| Blood oxygen (SpO2) | Yes | Yes, on devices with SpO2 sensor |
| Blood pressure estimate | Yes, via pulse transit time analysis | Rare, most wearables lack this |
| Physical activity data | No | Yes, steps, exercise minutes, calories |
| Sleep architecture | No | Yes, sleep stages and duration |
| Stress indicators | Inferred from HRV snapshot | Inferred from continuous HRV patterns |
| Exercise heart rate | Limited (requires activity during scan) | Yes, captures exercise response |
| Data collection time | 30-60 seconds | Days to weeks |
For underwriting purposes, the phone camera captures what matters most in a single sitting: resting cardiovascular state. Wearables capture behavioral patterns over time. The underwriting question is whether that behavioral data changes the risk classification enough to justify the operational complexity of collecting it.
The applicant completion problem
Programs succeed or fail here, and it has nothing to do with sensor accuracy.
John Hancock's Vitality program, one of the longest-running wearable-based insurance programs in North America, has demonstrated that wearable engagement works for retention and wellness incentives. But Vitality is a post-issue engagement program, not a pre-issue screening tool. The incentive structure is different. Policyholders who already bought coverage have reasons to participate. Applicants who haven't committed yet do not.
LIMRA data consistently shows that friction in the application process drives abandonment. Their 2024 Insurance Barometer Study found that roughly 25% of life insurance applicants abandon applications before completion. Adding a requirement to wear a device for days or weeks before a decision can be made introduces a new dropout window that doesn't exist with phone-based screening.
Phone camera scans complete in under a minute. Carriers using rPPG-based screening report applicant completion rates between 85% and 95%. The applicant clicks a link, scans their face, and the data is available to the underwriting engine immediately.
Wearable programs face a logistics chain: device selection (accept applicant-owned devices or ship one?), onboarding instructions, data sync verification, minimum wear-time enforcement, and data retrieval. Each step is a potential failure point.
Accuracy for underwriting decisions
Both technologies produce data that's accurate enough for screening-level underwriting. The question is whether the additional precision from continuous wearable monitoring changes underwriting outcomes in a meaningful way.
A medRxiv study (2022) evaluating camera-based monitoring against regulated medical devices found acceptable agreement for heart rate and respiratory rate across a diverse subject pool varying in age, height, weight, and skin tone. Blood pressure estimates showed wider variance, which is consistent across both rPPG and consumer wearable blood pressure attempts — neither has solved cuff-free blood pressure measurement at clinical grade.
De Ridder et al. (2025), reviewing 96 studies on rPPG health assessment published in Frontiers in Physiology, confirmed that the core technology for extracting heart rate from facial video is well-established, with research focus shifting toward real-world conditions like varied lighting and movement.
Wearable accuracy varies by device tier. Medical-grade wearables (like those used in clinical trials) offer ECG-quality heart rate data. Consumer-grade fitness trackers, the devices most applicants actually own, have wider accuracy ranges. A carrier accepting data from applicant-owned devices has to account for the accuracy spread across dozens of device models and firmware versions.
| Accuracy factor | Phone camera (rPPG) | Wearable device |
|---|---|---|
| Heart rate accuracy | ~97% vs clinical reference (WellFie study) | 95-99% for medical grade; 90-97% for consumer grade |
| Consistency across users | Validated across skin tones and ages | Varies by device fit, skin tone, and wrist anatomy |
| Environmental sensitivity | Affected by extreme lighting conditions | Affected by motion artifact during activity |
| Standardization | Same algorithm across all phones | Data quality varies by device manufacturer |
| Blood pressure | Estimate only, wider variance | Estimate only on rare devices, most don't measure |
| Regulatory status | Screening tool classification | Consumer wellness device (most); FDA-cleared (few) |
For accelerated underwriting triage — determining which applicants need further evidence and which qualify for instant or expedited decisions — both produce sufficient signal. The practical difference is that phone camera data is standardized (same algorithm, same measurement conditions) while wearable data requires normalization across device types.
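The normalization burden can be made concrete with a sketch: each wearable platform returns vitals in its own payload shape, and the carrier's ingestion layer must map them to one canonical record before the underwriting engine sees them. The vendor names and field names below are hypothetical assumptions for illustration — real platform APIs each define their own schemas, units, and consent scopes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NormalizedVitals:
    """Canonical record the underwriting engine consumes."""
    resting_hr_bpm: float
    hrv_rmssd_ms: Optional[float]  # not every device reports HRV
    source_device: str

def normalize_wearable_payload(payload: dict, vendor: str) -> NormalizedVitals:
    # Hypothetical payload shapes -- each real platform (Apple Health,
    # Google Fit, Garmin Connect) uses its own field names and units.
    if vendor == "vendor_a":
        return NormalizedVitals(
            resting_hr_bpm=float(payload["restingHeartRate"]),
            hrv_rmssd_ms=payload.get("heartRateVariability"),
            source_device=vendor,
        )
    if vendor == "vendor_b":
        return NormalizedVitals(
            resting_hr_bpm=float(payload["rest_hr_bpm"]),
            hrv_rmssd_ms=payload.get("rmssd_ms"),
            source_device=vendor,
        )
    raise ValueError(f"no normalizer registered for vendor: {vendor}")

record = normalize_wearable_payload(
    {"restingHeartRate": 62, "heartRateVariability": 48}, "vendor_a"
)
```

Phone camera screening avoids this mapping layer entirely because a single algorithm produces a single schema for every applicant.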
Compliance and data governance
Wearable programs get complicated fast on the compliance side.
WTW announced a collaboration with Klarity Health in August 2025 to build wearable-based risk scoring tools for insurers. Even in their announcement, they acknowledged that data privacy and security remain primary concerns for wearable health data in underwriting.
Phone camera scans collect data in a single, discrete event. The applicant performs one scan, the data is processed, and the raw video is typically discarded — only the extracted vital sign measurements are retained. The data lifecycle is short and well-defined.
Wearable programs involve continuous data collection over extended periods, raising questions that insurers and their compliance teams need to answer:
- How much historical data can you request from an applicant's personal device?
- Does accessing sleep and activity data cross into lifestyle surveillance territory?
- How do you handle data from devices the applicant uses for personal health tracking?
- What happens to the data if the application is declined or withdrawn?
- How do state-level biometric privacy laws (BIPA in Illinois, CCPA in California) apply to continuous wearable data vs. a single-event scan?
HIPAA considerations differ too. Phone camera data captured specifically for the insurance application fits neatly into existing underwriting data governance frameworks. Continuous wearable data that includes sleep patterns, exercise habits, and location-derived activity raises questions about what's "necessary and appropriate" for the underwriting decision.
The consent problem
Wearable data often exists on third-party platforms — Apple Health, Google Fit, Garmin Connect, Fitbit. Accessing that data requires API integrations with those platforms and explicit applicant consent to share data from their personal health accounts. Each platform has its own data sharing policies and consent frameworks that the carrier must navigate.
Phone camera screening sidesteps this entirely. The data is generated specifically for the insurance application, on a carrier-controlled platform, with a single consent event.
Cost comparison for carrier programs
The economics favor phone camera screening at scale, though wearable programs have their own cost structure worth examining.
| Cost factor | Phone camera (rPPG) | Wearable device |
|---|---|---|
| Per-assessment cost | $5-15 per completed scan | $0 if applicant-owned device; $50-300 if carrier-shipped |
| Integration cost | API integration with underwriting engine | Multiple API integrations (device platforms + underwriting) |
| Applicant support | Minimal (self-service scan) | Higher (device setup, sync issues, wear-time questions) |
| Data processing | Instant results | Batch processing after wear period ends |
| Abandoned assessment cost | Near zero (scan takes seconds) | High if device was shipped and not returned |
| Program overhead | Low — software-only | Higher — device inventory, shipping, returns, support |
| Time to underwriting decision | Same day | Days to weeks after application |
For carriers processing high volumes of applications, phone camera screening eliminates the logistics burden entirely. There's no hardware to manage, no device compatibility matrix to maintain, and no waiting period before the underwriting process can begin.
Where wearables have a genuine advantage
Wearables capture something phone cameras cannot: behavior over time.
A two-week wearable monitoring period can reveal patterns that a 60-second scan will miss. Consistently elevated resting heart rate during sleep might indicate unmanaged sleep apnea. Low physical activity levels correlate with higher mortality risk. Irregular heart rate patterns picked up over days could flag undiagnosed arrhythmias.
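One of those longitudinal checks, consistently elevated resting heart rate during sleep, reduces to a simple rule over per-night averages. The 75 bpm threshold and 70% consistency fraction below are placeholder assumptions for illustration, not clinical or underwriting standards.

```python
def flag_elevated_sleep_hr(nightly_hr_bpm, threshold_bpm=75.0, min_fraction=0.7):
    """Flag consistently elevated resting heart rate during sleep.

    nightly_hr_bpm: per-night average sleeping heart rates (bpm).
    Returns True if the applicant exceeded the threshold on at least
    min_fraction of nights. Thresholds here are illustrative only.
    """
    elevated_nights = sum(hr > threshold_bpm for hr in nightly_hr_bpm)
    return elevated_nights / len(nightly_hr_bpm) >= min_fraction

# Two weeks of synthetic nightly averages: elevated on 13 of 14 nights
two_weeks = [78, 80, 76, 79, 81, 77, 74, 80, 82, 78, 77, 79, 76, 80]
print(flag_elevated_sleep_hr(two_weeks))  # prints True
```

A 60-second daytime scan has no way to observe this pattern, which is the entire case for wearables in borderline risk categories.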
For high-face-amount policies where the underwriting investment justifies deeper evidence gathering, wearable data adds a dimension that a point-in-time scan doesn't provide. Some carriers are exploring hybrid approaches: phone camera scan for initial triage, wearable monitoring for applicants in borderline risk categories where additional data might change the classification.
RGA's research on health technologies in insurance has highlighted that rPPG-measured heart rate variability is a meaningful indicator of autonomic nervous system health, which correlates with cardiovascular risk. But they've also noted that the real power of these tools is combining multiple data streams — not choosing one over the other.
Which approach fits which underwriting program
The right choice depends on what the carrier is trying to accomplish:
Phone camera (rPPG) works best for:
- Accelerated underwriting triage on term life and whole life under $500K
- High-volume, low-face-amount products where speed matters most
- Direct-to-consumer distribution where applicant experience drives conversion
- Markets where applicants may not own wearable devices
- Programs that need same-day underwriting decisions
Wearable monitoring works best for:
- High-net-worth or jumbo policies where extended evidence gathering is expected
- Post-issue wellness and engagement programs (like Vitality)
- Group benefits programs where employer-provided devices standardize the hardware
- Programs that need longitudinal behavioral data for ongoing risk assessment
Most carriers entering digital health screening in 2026 are starting with phone camera-based solutions because they map more closely to the existing underwriting workflow: applicant provides evidence, underwriter makes decision, policy issues. The wearable approach requires carriers to rethink the timing of their entire decision process.
Current research and evidence
The evidence base for both approaches continues to grow, with research groups publishing on accuracy, bias, and real-world performance.
The WellFie study team (medRxiv, 2023) validated smartphone rPPG against hospital-grade monitors, finding systolic blood pressure prediction accuracy of 93.94%, diastolic blood pressure accuracy of 92.95%, and heart rate accuracy of 97.34%. These were measured in a clinical setting with controlled conditions.
A separate medRxiv study (2023) specifically evaluated rPPG performance across different skin tones, an area where both phone cameras and wearable optical sensors have faced scrutiny. The study found that modern rPPG algorithms have narrowed the accuracy gap across skin tones, though performance still varies under extreme lighting.
Gen Re's 2021 report on remote risk assessment technology examined several rPPG vendors including Vastmindz, noting that the technology uses AI to transform consumer devices into non-invasive diagnostic tools. They framed rPPG as particularly relevant for markets where traditional paramedical infrastructure is limited or expensive.
WTW and Klarity Health's 2025 collaboration represents the insurance industry's growing interest in wearable-derived risk scoring. Their tool aims to convert raw wearable data into underwriting-ready risk scores, addressing one of the biggest challenges in wearable programs: turning continuous data streams into discrete underwriting decisions.
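The underlying challenge, collapsing weeks of continuous readings into a handful of discrete underwriting features, can be sketched as a simple aggregation step. The feature set and the 5,000-step cutoff below are assumptions chosen for illustration; they are not the WTW/Klarity methodology, which is proprietary.

```python
from statistics import mean, pstdev

def wearable_risk_features(daily_resting_hr, daily_steps, low_steps=5000):
    """Collapse a wear period into discrete underwriting-ready features.

    daily_resting_hr: per-day resting heart rate averages (bpm).
    daily_steps: per-day step counts.
    Feature choices and the low_steps cutoff are illustrative only.
    """
    return {
        "mean_resting_hr_bpm": mean(daily_resting_hr),
        "resting_hr_day_to_day_sd": pstdev(daily_resting_hr),
        "mean_daily_steps": mean(daily_steps),
        "low_activity_days_pct": sum(s < low_steps for s in daily_steps)
                                 / len(daily_steps),
    }

features = wearable_risk_features(
    daily_resting_hr=[64, 66, 63, 65, 67, 64, 62],
    daily_steps=[8200, 4100, 9500, 3900, 7600, 10200, 6800],
)
```

A downstream scoring model would then map features like these to a rate class, which is exactly the translation step that point-in-time phone scans never need.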
Where vitals capture in underwriting is heading
The trajectory points toward convergence, not competition. Phone camera scans will likely become the default first touch in digital underwriting — fast, frictionless, and sufficient for the majority of applications. Wearable data will supplement rather than replace camera-based screening, reserved for cases where longitudinal monitoring adds underwriting value.
The carriers who move first will build the data sets that train tomorrow's risk models. Whether they start with cameras, wearables, or both, the real advantage is having digital health data flowing into underwriting decisions at all. The paramedical exam isn't going away tomorrow, but its role is shrinking with every carrier that proves digital evidence can support sound underwriting decisions.
Platforms like Circadify are building the infrastructure for camera-based vitals capture in insurance workflows — offering carriers a way to start collecting digital health data without the device logistics that slow wearable programs down.
Frequently asked questions
Is phone camera rPPG accurate enough for life insurance underwriting?
For screening-level underwriting decisions, yes. Studies have shown heart rate accuracy above 97% against clinical monitors. The data is sufficient for accelerated underwriting triage, where the goal is determining which applicants need further evidence rather than making final rating decisions from vitals alone.
Do applicants need to download an app for phone camera screening?
Most current implementations are web-based. The applicant receives a link, opens it in their phone browser, and performs the scan without installing anything. This reduces friction compared to wearable programs that may require pairing apps and device setup.
Can wearable data from consumer devices be used for underwriting?
It can, but carriers need to account for accuracy variation across device models, establish data sharing agreements with platform providers (Apple Health, Google Fit, Garmin), and navigate consent and privacy frameworks. Medical-grade wearables offer more consistent data but add cost and logistics.
How do carriers handle applicants who don't own a wearable device?
This is one of the main limitations of wearable-based programs. Carriers either ship devices (adding cost and delay) or limit the program to applicants who already own compatible devices (creating selection bias). Phone camera screening avoids this problem entirely since nearly all applicants have a smartphone.
