How Actuarial Pricing Models Adjust for Biometric-Only Underwriting
Actuarial pricing models for biometric-only underwriting require new credibility frameworks, mortality assumptions, and data validation. Here's how the math is changing.

Actuarial pricing models for biometric-only underwriting force a question that most pricing actuaries have been able to avoid until now: what happens to your mortality assumptions when you throw out the labs?
The traditional underwriting pricing framework is built on decades of mortality experience tied to specific evidence sources: blood panels, urine tests, physical measurements taken by trained examiners, and medical records pulled from attending physicians. Each data point has a known relationship to mortality outcomes, and actuaries have spent years calibrating loading factors and credits against that evidence base.
Biometric-only programs replace most of that with physiological measurements captured from a smartphone camera or similar device in under a minute. Heart rate, heart rate variability, blood pressure estimates, respiratory rate, and stress indicators. The data is real, it correlates with cardiovascular health, and it arrives instantly. But it's not the same data that existing pricing models were built on, and that gap creates real actuarial work.
Munich Re's accelerated underwriting monitoring program, drawing on 11 years of random audit data from carrier programs through year-end 2023, found that mortality slippage in accelerated cohorts has generally trended downward over time as programs mature and data sources improve. The key variable is how well the data waterfall catches risk that traditional methods would have flagged.
Why existing pricing models don't just work with biometric data
The actuarial pricing models used by most life carriers today are variants of a debit-credit system. An applicant starts at a base mortality rate for their age, sex, and smoking status. Then the underwriting evidence adjusts that rate up or down. A clean lipid panel earns a credit. An elevated A1C earns a debit. The sum of those adjustments produces a risk class, and the risk class maps to a price.
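The debit-credit mechanics can be sketched in a few lines. This is a minimal illustration, not any carrier's manual: the point values and class cutoffs below are invented for the example.

```python
# Minimal sketch of a debit-credit risk classification system.
# Debit/credit values and class cutoffs are illustrative only.

def classify(evidence: dict) -> str:
    """Sum debits and credits over underwriting evidence, then map the total to a risk class."""
    points = 0
    if evidence.get("total_cholesterol", 0) > 240:
        points += 50          # debit: elevated cholesterol
    elif evidence.get("total_cholesterol", 999) < 200:
        points -= 25          # credit: clean lipid panel
    if evidence.get("a1c", 0) > 6.4:
        points += 75          # debit: elevated A1C
    # ... additional evidence sources adjust the running total ...
    if points <= -25:
        return "preferred_plus"
    if points <= 0:
        return "preferred"
    if points <= 50:
        return "standard"
    return "substandard"

print(classify({"total_cholesterol": 185, "a1c": 5.4}))  # → preferred_plus
```

The risk class string would then map to a rate table. The whole framework presumes that each debit has a documented mortality relationship, which is exactly what biometric inputs lack.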
The problem with biometric-only programs isn't that the data is bad. It's that the data doesn't have the same actuarial track record. When an underwriter sees a total cholesterol reading of 240, there's 40 years of mortality studies linking that number to specific outcomes. When the system sees a heart rate variability measurement from a phone camera scan, the actuarial literature connecting that reading to all-cause mortality is much thinner, even though the clinical literature on HRV as a health marker is substantial.
This creates a credibility problem in the formal actuarial sense. How much weight do you give a new data source when pricing a book of business? The answer depends on volume, consistency, and observed-to-expected mortality ratios over time, none of which exist in large quantities for biometric-only cohorts yet.
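The formal version of this question is classical limited-fluctuation credibility: the weight Z given to observed experience grows with the square root of claim volume until a full-credibility threshold is reached. The sketch below uses the standard threshold of 1,082 claims (90% confidence that observed mortality is within 5% of the true rate); the cohort size and prior assumption are invented for illustration.

```python
import math

def credibility_weight(observed_claims: int, full_credibility_claims: int = 1082) -> float:
    """Limited-fluctuation credibility: Z = min(1, sqrt(n / n_full)).
    1,082 claims is the classical full-credibility standard for 90% confidence
    that observed mortality lies within 5% of the true rate."""
    return min(1.0, math.sqrt(observed_claims / full_credibility_claims))

# A young biometric-only cohort with only 100 observed claims:
z = credibility_weight(100)
observed_ae, prior_ae = 1.10, 1.00       # observed A/E vs. the prior assumption
blended = z * observed_ae + (1 - z) * prior_ae
print(round(z, 3))  # → 0.304
```

With 100 claims, the cohort's own experience earns only about 30% of the weight; the rest stays on the prior assumption. This is the arithmetic behind "none of which exist in large quantities yet."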
The credibility gap in practice
Here's how the data maturity looks across evidence sources used in underwriting today:
| Evidence source | Years of mortality data | Credibility in pricing models | Availability speed | Cost per applicant |
|---|---|---|---|---|
| Paramedical exam (labs + physical) | 40+ years | Very high | 5-14 days | $80-$150 |
| Prescription history (Rx) | 15-20 years | High | Instant | $2-$5 |
| MIB records | 30+ years | High | Instant | $3-$8 |
| Motor vehicle records | 20+ years | Moderate | Instant | $2-$4 |
| Electronic health records | 8-12 years | Moderate-growing | 1-7 days | $15-$40 |
| Credit-based mortality scores | 10-15 years | Moderate | Instant | $1-$3 |
| Biometric/rPPG measurements | 3-5 years | Low-emerging | Instant | $1-$3 |
The gap between what biometric data can tell you clinically and what actuaries can prove statistically is where the pricing challenge lives. Dr. Zohair Abbas and colleagues at the Mzuzu University Department of Mathematical Sciences published work in 2024 through the Scientific Research Publishing journal examining AI-augmented actuarial models, noting that integrating real-time biometric data inputs requires new validation frameworks beyond what traditional actuarial methods assume.
How actuaries are adjusting pricing for biometric-only cohorts
The carriers running biometric-only programs aren't waiting for 20 years of mortality experience before pricing their books. They're using several approaches to bridge the credibility gap, each with trade-offs.
Blended credibility weighting
The most common approach treats biometric data as one input in a multi-source model rather than the sole basis for pricing. The biometric reading gets partial credibility weight, combined with traditional proxy sources like prescription history and MIB checks.
In practice, this means a carrier might weight biometric data at 15-25% of the total risk assessment in year one, increasing that weight as observed mortality experience accumulates. The Society of Actuaries' ongoing research into accelerated underwriting outcomes supports this graduated approach, with their product development section publishing analysis in August 2024 showing that carriers using layered data waterfalls generally see mortality results stabilize faster than those relying on any single alternative source.
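A blended multi-source score might look like the sketch below. The source list, scores, and the 20% biometric weight are illustrative assumptions; a production model would calibrate each source's contribution against observed mortality.

```python
# Sketch of multi-source risk scoring with partial credibility on biometric data.
# Source weights and scores are illustrative, not calibrated values.

def blended_risk_score(scores: dict, biometric_weight: float = 0.20) -> float:
    """Combine a biometric score with traditional proxy sources, giving the
    biometric input only partial weight (15-25% in early program years)."""
    traditional = ["rx_history", "mib", "mvr", "credit_mortality"]
    trad_avg = sum(scores[s] for s in traditional) / len(traditional)
    return biometric_weight * scores["biometric"] + (1 - biometric_weight) * trad_avg

scores = {
    "biometric": 0.90,         # strong camera-scan result
    "rx_history": 0.70,
    "mib": 0.80,
    "mvr": 0.75,
    "credit_mortality": 0.65,
}
print(round(blended_risk_score(scores), 2))  # → 0.76
```

Raising `biometric_weight` over time, as observed-to-expected ratios accumulate, is the "graduated approach" the SOA analysis describes.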
Mortality loading for uncertainty
Some pricing actuaries add an explicit loading factor to biometric-only cohorts to account for the credibility gap. This is conceptually similar to how carriers price new products in markets with limited experience data. You assume the worst within a reasonable band and let actual experience prove you wrong.
The loading typically ranges from 5-15% above what the same cohort would be priced at with full traditional evidence, declining as the program matures. The tricky part is setting a loading that's high enough to protect against adverse selection without being so high that it makes the biometric-only product uncompetitive.
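A simple declining loading schedule can be expressed as follows. The 15% starting load and the eight-year linear runoff are illustrative assumptions, not market figures.

```python
def uncertainty_loading(program_year: int, initial_load: float = 0.15,
                        decay_years: int = 8) -> float:
    """Explicit mortality loading for the credibility gap, declining linearly
    from the initial load to zero as experience accumulates.
    The 15% start and 8-year runoff are illustrative assumptions."""
    remaining = max(0, decay_years - (program_year - 1))
    return initial_load * remaining / decay_years

base_mortality = 1.00  # expected mortality as a fraction of the 2015 VBT
for year in (1, 4, 8):
    loaded = base_mortality * (1 + uncertainty_loading(year))
    print(year, round(loaded, 3))  # year 1 prints 1.15
```

The competitiveness tension is visible here: the loaded year-one rate is 15% above the fully evidenced price, so the operational savings from skipping the paramedical exam have to cover that gap.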
Parallel shadow studies
Willis Towers Watson and Klarity Health announced a collaboration in 2025 focused on using wearable and biometric health data to provide more personalized risk evaluations, moving beyond traditional metrics to incorporate real-time health insights. This kind of partnership reflects a broader industry pattern where carriers and reinsurers run biometric programs alongside traditional underwriting for a period, comparing outcomes before adjusting pricing.
In a shadow study, applicants go through both the biometric assessment and traditional underwriting. The policy gets priced on traditional evidence, but the biometric data is captured and stored. Over time, the actuarial team compares what the biometric model predicted against what the traditional model predicted, and both against actual claims experience. This dual-track approach builds the credibility base without putting pricing at risk.
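The core shadow-study metric is the actual-to-expected (A/E) mortality ratio, computed against each model's predicted deaths. The claim counts below are invented for illustration.

```python
# Shadow-study comparison: A/E ratios for the biometric model vs. the
# traditional model on the same cohort. Claim counts are illustrative.

def actual_to_expected(actual_claims: float, expected_claims: float) -> float:
    """A/E ratio; values near 1.0 mean the model's mortality assumption held."""
    return actual_claims / expected_claims

actual = 52.0                  # observed deaths in the cohort
expected_traditional = 50.0    # deaths predicted by the traditional classification
expected_biometric = 48.0      # deaths predicted by the biometric model

print(round(actual_to_expected(actual, expected_traditional), 3))  # → 1.04
print(round(actual_to_expected(actual, expected_biometric), 3))    # → 1.083
```

Tracked over years, the model whose A/E ratio stays closer to 1.0 is stratifying risk more accurately, and that evidence is what eventually justifies shifting credibility weight toward the biometric inputs.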
What the mortality tables actually look like
Translating biometric readings into mortality assumptions requires mapping physiological measurements to risk classifications. This is where actuarial pricing models for biometric-only underwriting get into territory that most pricing manuals don't cover.
A traditional risk classification might use five classes: preferred plus, preferred, standard plus, standard, and substandard with table ratings. Each class has an expected mortality ratio relative to the base table. Preferred plus might be 50-60% of the 2015 VBT (Valuation Basic Table), while standard sits at 100%.
For biometric-only programs, the challenge is building classification rules that produce similar risk stratification from different inputs. Instead of sorting applicants by cholesterol, blood pressure from a cuff, and BMI from a physical exam, the model sorts by resting heart rate, heart rate variability patterns, estimated blood pressure from rPPG (remote photoplethysmography) signals, and respiratory rate characteristics.
| Risk class | Traditional evidence criteria (simplified) | Biometric-only criteria (emerging) | Expected mortality ratio (2015 VBT) |
|---|---|---|---|
| Preferred Plus | BP <130/80, cholesterol <200, no Rx flags, clean labs | HRV in top 20th percentile for age, resting HR <65, BP estimate <125/78, no Rx flags | 50-60% |
| Preferred | BP <140/85, cholesterol <240, minimal Rx, clean labs | HRV in top 40th percentile, resting HR <72, BP estimate <135/82, minimal Rx | 65-80% |
| Standard Plus | BP <145/90, cholesterol <260, some Rx, borderline labs | HRV in top 60th percentile, resting HR <78, BP estimate <140/88, some Rx flags | 85-95% |
| Standard | Within normal limits, managed conditions | HRV average for age, resting HR <85, BP estimate within normal, managed Rx | 100% |
| Substandard | Elevated markers, significant medical history | Below-average HRV, elevated resting HR, unfavorable BP patterns, significant Rx | 125-200%+ |
The mapping isn't one-to-one, and nobody pretends it is. But early carrier programs are showing that biometric measurements can stratify risk meaningfully, even if the specific boundaries between classes need refinement as experience data grows.
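As a sketch, the emerging biometric criteria from the table above can be written as a cascade of threshold rules. The percentile cutoffs and thresholds here mirror the illustrative table, not any carrier's actual rules ("top 20th percentile" is expressed as a percentile rank of 80 or above).

```python
# Classification sketch using the emerging biometric criteria from the table
# above. Thresholds are illustrative, not production underwriting rules.

def classify_biometric(hrv_percentile: float, resting_hr: float,
                       bp_sys: float, bp_dia: float, rx_flags: int) -> str:
    """Map camera-scan physiology to a risk class via cascading thresholds."""
    if (hrv_percentile >= 80 and resting_hr < 65
            and bp_sys < 125 and bp_dia < 78 and rx_flags == 0):
        return "preferred_plus"
    if (hrv_percentile >= 60 and resting_hr < 72
            and bp_sys < 135 and bp_dia < 82 and rx_flags <= 1):
        return "preferred"
    if hrv_percentile >= 40 and resting_hr < 78 and bp_sys < 140 and bp_dia < 88:
        return "standard_plus"
    if resting_hr < 85:
        return "standard"
    return "substandard"

print(classify_biometric(85, 62, 118, 72, 0))  # → preferred_plus
```

Each class would then carry its expected mortality ratio against the 2015 VBT (50-60% for preferred plus, and so on), so refining the program means moving these boundaries as experience data accumulates, not rebuilding the pricing machinery.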
Where reinsurers stand on this
Reinsurers are the actuarial backstop for biometric-only underwriting programs, and their position matters enormously for pricing. If a reinsurer won't treaty a biometric-only cohort at standard rates, the ceding carrier either absorbs the extra cost or passes it to the consumer, which undermines the economics of the whole approach.
Munich Re has been the most publicly active reinsurer in this space, with their life US division running ongoing monitoring of accelerated underwriting programs. Their data through 2023, published via the SOA's Product Development Section newsletter, suggests that well-designed accelerated programs, including those with biometric components, can achieve mortality outcomes within acceptable ranges of traditional underwriting, though they emphasize that "acceptable" depends heavily on the program's data waterfall design and triage rules.
Swiss Re has taken a more measured public stance, generally supporting the use of alternative data in underwriting while emphasizing the need for rigorous validation periods before adjusting reinsurance pricing assumptions.
The practical impact is that most reinsurance treaties for biometric-only programs include experience refund provisions and mortality corridors. If the biometric cohort's actual mortality deviates from expected by more than an agreed-upon margin, the pricing adjusts retroactively. This shared-risk approach lets carriers launch programs without waiting for full actuarial credibility while giving reinsurers protection against model risk.
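A mortality corridor reduces to a simple piecewise rule on the cohort's A/E ratio. The corridor width and the full pass-through of the excess below are illustrative treaty terms, not any reinsurer's actual provisions.

```python
def treaty_adjustment(ae_ratio: float, corridor: tuple = (0.90, 1.10)) -> float:
    """Mortality-corridor sketch: no repricing while the cohort's A/E ratio
    stays inside the corridor; outside it, the deviation beyond the corridor
    edge flows back through a retroactive adjustment.
    Corridor bounds and full pass-through are illustrative assumptions."""
    lower, upper = corridor
    if ae_ratio > upper:
        return ae_ratio - upper   # excess mortality passed back to the cedent
    if ae_ratio < lower:
        return ae_ratio - lower   # favorable experience refunded (negative)
    return 0.0

print(round(treaty_adjustment(1.18), 2))  # → 0.08 of expected claims repriced
print(treaty_adjustment(1.05))            # → 0.0, inside the corridor
```

The symmetric lower branch is the experience-refund side: the cedent shares in favorable deviation just as the reinsurer is protected against adverse deviation.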
Adverse selection and anti-selection modeling
One pricing concern that keeps coming up in biometric-only programs is adverse selection. The theory goes like this: applicants who know they have health issues might prefer a biometric-only process because it's less thorough than a full paramedical exam. An applicant with undiagnosed hypertension might get caught by a blood test but slip through a 30-second camera scan.
The counterargument, supported by some early program data, is that biometric measurements actually catch certain cardiovascular risks that traditional questionnaire-based triage misses. An applicant who truthfully reports no known medical conditions and takes no prescription medications will pass every traditional screen. But if their heart rate variability is in the bottom 10th percentile for their age, the biometric model flags something that the traditional model never would have seen.
Pricing for anti-selection risk in biometric programs typically uses one of two approaches:
- Conservative initial classification: The program uses tighter cutoffs for preferred classes in the first few years, gradually relaxing them as experience data accumulates. This means fewer applicants get the best rates initially, but the mortality experience of the cohort stays cleaner.
- Dynamic repricing triggers: The pricing model includes automatic adjustment mechanisms tied to observed-to-expected ratios. If the biometric cohort's claims experience exceeds expected by more than a predefined threshold (often 10-15%) in any rolling 12-month period, the classification cutoffs tighten automatically for new business.
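The dynamic-trigger logic is mechanically simple: scan rolling 12-month windows of claims experience and fire when any window's A/E ratio breaches the threshold. The 12% threshold and monthly claim counts below are illustrative.

```python
# Sketch of a dynamic repricing trigger on rolling 12-month A/E ratios.
# The 12% threshold and the claim counts are illustrative assumptions.

def check_trigger(monthly_actual: list, monthly_expected: list,
                  threshold: float = 1.12) -> bool:
    """True if any rolling 12-month actual/expected ratio breaches the threshold."""
    for start in range(len(monthly_actual) - 11):
        actual = sum(monthly_actual[start:start + 12])
        expected = sum(monthly_expected[start:start + 12])
        if actual / expected > threshold:
            return True
    return False

actual = [5, 6, 4, 7, 5, 6, 8, 7, 6, 9, 8, 7]      # observed claims per month
expected = [5, 5, 5, 6, 5, 5, 6, 6, 6, 6, 6, 6]    # expected claims per month
print(check_trigger(actual, expected))  # → True (78/67 ≈ 1.16 > 1.12)
```

When the trigger fires, new-business classification cutoffs tighten automatically; in-force policies are untouched, which keeps the mechanism on the pricing side rather than the policyholder side.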
Current research and evidence
The actuarial profession is actively developing frameworks for incorporating biometric data into pricing models, though much of the work remains in early stages.
The SOA's Predictive Analytics and Futurism Section has been exploring how machine learning models trained on biometric data can supplement traditional actuarial pricing. Their research agenda includes examining whether continuous physiological signals provide incremental predictive power over snapshot measurements, and how to validate that power against traditional mortality studies.
The NAIC's Accelerated Underwriting Working Group published draft regulatory guidance in June 2024 addressing the use of non-traditional data in underwriting, including biometric measurements. Their guidance emphasizes transparency in how alternative data sources influence pricing and classification decisions, which has direct implications for how actuaries document and justify biometric-based pricing adjustments.
The clinical literature on rPPG technology continues to grow, with researchers at institutions including those studying remote photoplethysmography publishing on the reliability and validity of camera-based physiological measurements. The actuarial question isn't whether rPPG works clinically, but whether the measurement precision and population-level predictive value meet the standards required for insurance pricing.
The future of actuarial pricing for biometric underwriting
The long-term trajectory is toward biometric data carrying more weight in pricing models, not less. Every year that a biometric-only cohort matures without adverse mortality results adds to the credibility base. Every carrier that runs a parallel study builds evidence that actuaries can use to justify tighter credibility weights.
The transition period is where it gets interesting. Carriers running biometric programs today are essentially building the mortality databases that will make biometric-only pricing actuarially defensible five or ten years from now. The early movers take on more pricing uncertainty, but they also get first access to that proprietary mortality experience.
What we're likely to see over the next several years is convergence. The mortality loading for biometric-only cohorts will shrink as experience data accumulates. The credibility weight assigned to biometric inputs will increase. And at some point, probably within the next decade, biometric measurements will have enough actuarial track record that they're treated with the same confidence as a standard blood panel.
For carriers considering biometric-only programs now, the pricing question is less about whether the actuarial math can work and more about how much uncertainty you're willing to carry during the credibility-building period. The reinsurance market has shown willingness to share that risk, and the operational cost savings from eliminating paramedical exams provide a meaningful buffer against modest mortality slippage.
Companies like Circadify are working on the technology side of this equation, developing rPPG-based measurement systems that capture the physiological data carriers need for biometric underwriting programs. For carriers and actuarial teams exploring how biometric data fits into their pricing models, Circadify's insurance solutions provide a starting point for understanding what's technically possible today.
Frequently Asked Questions
How long does it take for biometric underwriting data to become actuarially credible?
Most actuaries estimate that biometric-only cohorts need 7-10 years of mortality experience at sufficient volume before the data reaches full credibility for pricing purposes. In the interim, carriers use blended credibility approaches, combining biometric inputs with traditional proxy data sources like prescription history. The timeline can be shorter for specific age and face amount segments where claim frequency is higher and experience emerges faster.
Do biometric-only programs result in worse mortality outcomes?
Not necessarily. Munich Re's monitoring data through 2023 shows that well-designed accelerated underwriting programs, including those using biometric inputs, have generally produced mortality outcomes within acceptable ranges. The results depend heavily on program design, particularly how the data waterfall is structured and where biometric data sits in the triage sequence. Poorly designed programs with insufficient knockout criteria can produce adverse results regardless of data quality.
How do reinsurers price treaties for biometric-only cohorts?
Most reinsurers use experience-rated treaties with mortality corridors for biometric programs. The initial pricing assumes a loading of 5-15% above standard accelerated underwriting rates, with provisions for adjustment based on actual observed mortality. Some treaties include profit-sharing arrangements where the ceding carrier benefits if mortality comes in better than the loaded assumption, creating an incentive alignment between the carrier and reinsurer.
Can actuarial models validate biometric measurements without lab comparisons?
Not entirely, at least not yet. Shadow studies comparing biometric predictions against traditional lab-based classifications remain the primary validation method. The actuarial standard of practice requires that pricing assumptions be supportable by credible evidence, and for biometric data, that evidence currently comes from demonstrating concordance with traditional measurements. As biometric-specific mortality databases grow, direct validation against claims experience will become increasingly viable.
