Digital Health Underwriting Compliance: A Step-by-Step Guide
A practical guide to navigating digital health underwriting compliance, covering NAIC AI guidelines, biometric privacy laws, and state-level regulatory frameworks for insurers.

The digital health underwriting compliance guide you actually need probably doesn't exist yet. Carriers are adopting smartphone-based vitals capture, AI-driven risk scoring, and fluidless application workflows faster than regulators can write rules about them. The result is a patchwork. Twenty-four states have adopted some version of the NAIC's Model Bulletin on AI use by insurers as of early 2026, each with its own interpretation and enforcement posture. Meanwhile, biometric privacy statutes like Illinois' BIPA carry per-violation penalties that can run into the hundreds of millions.
None of this means digital underwriting is unworkable from a compliance standpoint. It means carriers need to think about compliance architecture the same way they think about underwriting models — with specificity, documentation, and a willingness to get into the weeds.
The NAIC's December 2023 Model Bulletin on the Use of Artificial Intelligence by Insurance Companies established the first national framework for responsible AI governance in insurance, though adoption and enforcement remain state-by-state. — National Association of Insurance Commissioners
Where Digital Health Underwriting Runs Into Regulation
Traditional underwriting compliance was relatively contained. You had state insurance codes governing unfair discrimination, HIPAA for protected health information, and Fair Credit Reporting Act requirements for consumer reports. The boundaries were clear because the data sources were familiar: lab results, medical records, prescription histories, motor vehicle reports.
Digital health data — particularly camera-based vitals captured through rPPG — cuts across multiple regulatory domains simultaneously. A single 30-second smartphone scan can generate heart rate, respiratory rate, blood oxygen estimates, and heart rate variability data. That data touches:
- State insurance law (unfair discrimination, rate adequacy)
- Federal health privacy (HIPAA, depending on data flow)
- Biometric and consumer privacy statutes (Illinois' BIPA, California's CCPA/CPRA, and similar laws in Texas, Washington, and others)
- AI governance frameworks (NAIC Model Bulletin, Colorado SB 21-169)
- Consumer protection (state unfair trade practices acts)
The compliance challenge isn't any single regulation. It's the overlap.
The NAIC Model Bulletin: What It Actually Requires
The NAIC issued its Model Bulletin on the Use of Artificial Intelligence by Insurance Companies in December 2023. By February 2026, twenty-four states had adopted it in some form, according to tracking by Quarles & Brady — including Alaska, Connecticut, Illinois, Kentucky, Maryland, Nevada, New Hampshire, Pennsylvania, Rhode Island, Vermont, Virginia, and Wisconsin, among others.
The bulletin doesn't ban anything. What it does is establish governance expectations. Carriers using AI in underwriting decisions need to demonstrate:
- An AI governance framework with documented policies and procedures
- Risk management processes proportional to the risk created by the AI system
- Internal controls including audit trails and testing protocols
- Third-party oversight for vendor-supplied AI tools and models
- Ongoing monitoring and validation of AI system outputs
Wilson Elser's 2025 analysis noted that the bulletin emphasizes outcomes over prescriptive technical standards. Regulators aren't telling carriers which algorithms to use. They're asking carriers to prove they know what their algorithms are doing and can show the work.
For digital health underwriting specifically, this means documenting how vitals data feeds into risk scoring models, what weight it carries relative to traditional data sources, and how you've tested for disparate impact.
| Compliance Area | NAIC Requirement | What It Means for Digital Health |
|---|---|---|
| Governance structure | Documented AI policies with cross-functional oversight | Actuarial, legal, compliance, and IT must jointly own digital health data governance |
| Risk management | Risk assessment proportional to AI system impact | Higher-impact underwriting models need more rigorous testing and documentation |
| Audit trail | Records of AI system decisions and data inputs | Every vitals scan feeding an underwriting decision must be logged and traceable |
| Third-party management | Oversight of vendor AI tools | Carriers using third-party rPPG SDKs must audit the vendor's model documentation |
| Fairness testing | Demonstrate no unfair discrimination | Test vitals-based scoring across demographic groups for disparate impact |
| Consumer transparency | Notice to consumers about AI use | Disclose that digital health data is used in underwriting decisions |
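The audit-trail row above can be sketched as a minimal decision record. This is an illustrative shape, not a prescribed schema — the bulletin asks that inputs, model version, and outcome be reconstructable, and every field name here is an assumption:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class UnderwritingAuditRecord:
    """One traceable record per vitals scan feeding an underwriting decision.

    Field names are illustrative assumptions; regulators expect traceability,
    not any particular schema.
    """
    scan_id: str            # identifier for the vitals capture event
    model_version: str      # version of the risk-scoring model applied
    derived_metrics: dict   # e.g. {"heart_rate": 62, "respiratory_rate": 14}
    decision: str           # e.g. "standard", "refer_to_underwriter"
    consent_reference: str  # pointer to the stored consent artifact
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Tamper-evident SHA-256 hash of the serialized record."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

record = UnderwritingAuditRecord(
    scan_id="scan-0001",
    model_version="risk-model-2.3.1",
    derived_metrics={"heart_rate": 62, "respiratory_rate": 14},
    decision="standard",
    consent_reference="consent-0001",
)
print(record.fingerprint())
```

Hashing each record gives the audit layer a cheap integrity check: if a logged decision is later altered, its fingerprint no longer matches.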
Biometric Privacy Laws: The Part That Keeps General Counsels Up at Night
Illinois' Biometric Information Privacy Act has generated over $1 billion in settlements since its enactment in 2008. The law requires informed written consent before collecting biometric identifiers, provides a private right of action (meaning individuals can sue directly), and allows statutory damages of $1,000 per negligent violation and $5,000 per intentional violation.
For carriers deploying camera-based health screening in the application process, the question is whether rPPG data constitutes a "biometric identifier" under BIPA. The law covers "a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry." Facial geometry scans used in rPPG processing could fall within that definition, depending on how the technology works and what intermediate data is generated.
California's CCPA, as amended by the CPRA, defines biometric data broadly as "physiological, biological, or behavioral characteristics" that can be used to establish individual identity, including "imagery of the iris, retina, fingerprint, face, hand, palm, vein patterns." California doesn't provide a private right of action for biometric violations specifically, but the regulatory enforcement risk is real.
Here's the current landscape across states with biometric privacy statutes:
| State | Law | Private Right of Action | Consent Required | Penalties |
|---|---|---|---|---|
| Illinois | BIPA (2008) | Yes | Written, informed | $1,000–$5,000 per violation |
| Texas | CUBI (2009) | No (AG enforcement) | Before collection | Up to $25,000 per violation |
| Washington | HB 1493 (2017) | No | For commercial purposes | AG enforcement |
| California | CCPA/CPRA | Limited | Opt-out rights | $2,500–$7,500 per violation |
| Colorado | CPA (2023) | No | Consent for sensitive data | AG enforcement |
| Virginia | VCDPA (2023) | No | Consent for sensitive data | AG enforcement |
The practical upshot: carriers deploying digital health screening need consent frameworks that satisfy the strictest applicable statute, which in most cases means BIPA-level informed written consent before any camera-based data capture.
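A strictest-statute consent baseline can be expressed as a simple pre-capture check. This is a sketch, not legal advice: the fields mirror BIPA's headline requirements (a written release, specific purpose disclosure, a stated retention schedule), and all names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class BiometricConsent:
    """Minimal consent artifact aimed at the strictest applicable statute.

    Fields are illustrative assumptions modeled on BIPA's requirements.
    """
    applicant_id: str
    written_release_signed: bool   # BIPA requires a written release
    purpose_disclosed: str         # specific purpose of collection
    retention_policy_days: int     # disclosed retention/destruction schedule
    state: str

def satisfies_bipa_baseline(c: BiometricConsent) -> bool:
    """Gate camera-based capture on BIPA-level consent minimums."""
    return bool(
        c.written_release_signed
        and c.purpose_disclosed.strip()
        and c.retention_policy_days > 0
    )

consent = BiometricConsent(
    applicant_id="app-123",
    written_release_signed=True,
    purpose_disclosed="Estimate vital signs for life insurance underwriting",
    retention_policy_days=90,
    state="IL",
)
print(satisfies_bipa_baseline(consent))  # True
```

Because the check encodes the strictest standard, the same gate works in every state; state-specific overlays only add disclosure language, not a weaker path around the gate.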
Colorado SB 21-169: The AI-Specific Insurance Law
Colorado stands out because it passed legislation specifically targeting AI in insurance. SB 21-169, effective November 2023, requires insurers to test their AI systems for unfair discrimination and to submit governance reports to the Colorado Division of Insurance.
This goes beyond the NAIC bulletin. Colorado doesn't just ask carriers to have governance frameworks — it requires them to actively test whether their AI models produce discriminatory outcomes and to report the results. For digital health underwriting, that means running vitals-based scoring models against demographic breakdowns and demonstrating that the technology doesn't disproportionately affect protected classes.
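One common first-pass screen for this kind of testing is the "four-fifths rule": compare each group's favorable-outcome rate to the highest-rate group and flag ratios below 0.8. Colorado's rules do not mandate this specific metric — it is shown here purely as an illustrative test, with made-up numbers:

```python
def adverse_impact_ratio(
    favorable_by_group: dict[str, tuple[int, int]]
) -> dict[str, float]:
    """Each group's favorable-outcome rate relative to the best-off group.

    favorable_by_group maps group -> (favorable_count, total_count).
    A ratio below 0.8 is the conventional four-fifths-rule warning flag.
    """
    rates = {g: fav / total for g, (fav, total) in favorable_by_group.items()}
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

# Hypothetical outcomes of a vitals-based scoring model by group
outcomes = {
    "group_a": (240, 300),  # 80% offered standard rates
    "group_b": (180, 300),  # 60% offered standard rates
}
ratios = adverse_impact_ratio(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)
```

A flag from a screen like this doesn't prove unfair discrimination; it tells the governance committee which model outputs need the deeper statistical analysis and documentation Colorado expects to see reported.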
The Colorado law also introduced requirements around consumer disclosure. Applicants must be informed when AI is used in decisions that affect their coverage or pricing. That notification must be clear and specific, not buried in page 47 of a terms-of-service document.
HIPAA and the Protected Health Information Question
Whether digital health screening data falls under HIPAA depends on who collects it and how it flows. If a covered entity (a health plan, healthcare clearinghouse, or healthcare provider) is involved in the data chain, HIPAA applies. If the carrier collects vitals data directly from the applicant through its own app, HIPAA may not apply — but state health privacy laws might.
The wrinkle is that many carriers partner with third-party health screening vendors. If those vendors qualify as business associates under HIPAA, the entire data flow needs a Business Associate Agreement, encryption in transit and at rest, minimum necessary standards for data sharing, and breach notification protocols.
Even outside HIPAA, carriers handling health-related data face requirements under state insurance privacy regulations and the Gramm-Leach-Bliley Act, which governs how financial institutions handle consumer information.
Building a Compliant Data Architecture
The compliance-friendly approach separates concerns:
- Data collection layer: Capture vitals with informed consent, minimal data retention, and clear disclosure of purpose
- Processing layer: Generate health metrics without retaining raw biometric data (facial images, video frames)
- Underwriting layer: Receive only derived health metrics, not raw biometric inputs
- Audit layer: Maintain decision logs that can reconstruct the underwriting rationale without re-exposing raw data
This architecture limits biometric privacy exposure because the underwriting system never touches the raw biometric data. It only sees derived vital signs — heart rate numbers, respiratory rate values, variability metrics.
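The separation of layers can be sketched as two functions with a deliberately narrow interface between them. The rPPG math itself is out of scope, so fixed values stand in for real signal processing; all names here are illustrative:

```python
def processing_layer(raw_frames: list[bytes]) -> dict:
    """Processing layer: derive vitals from raw frames, then discard them.

    Placeholder values stand in for actual rPPG signal processing.
    """
    derived = {"heart_rate": 62, "respiratory_rate": 14, "hrv_sdnn_ms": 48}
    raw_frames.clear()  # raw biometric data never leaves this layer
    return derived

def underwriting_layer(metrics: dict) -> str:
    """Underwriting layer: sees only derived numeric metrics, never frames."""
    assert all(isinstance(v, (int, float)) for v in metrics.values())
    return "refer" if metrics["heart_rate"] > 100 else "standard"

frames = [b"frame-1", b"frame-2"]  # stand-ins for captured video frames
metrics = processing_layer(frames)
decision = underwriting_layer(metrics)
print(decision, len(frames))  # frames list is empty before underwriting runs
```

The design point is the interface: because the underwriting layer's contract only admits derived numeric metrics, a biometric-privacy audit can verify the boundary by inspecting one function signature rather than the whole system.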
State-by-State Variations That Actually Matter
Beyond the big frameworks, individual states have regulatory quirks that catch carriers off guard:
New York's Circular Letter No. 1 (2019) — Requires insurers using external data and algorithms to demonstrate that their models don't discriminate. This applies even when the carrier can't fully explain why the model produces a given result, which makes black-box AI particularly risky in New York.
Connecticut's AI disclosure requirements (adopted 2024) — One of the early NAIC bulletin adopters, Connecticut added specific disclosure requirements around automated decision-making in insurance.
Maryland's algorithmic accountability expectations — Maryland's adoption of the NAIC bulletin included guidance around algorithmic accountability that goes beyond the model text.
The practical approach is building compliance programs around the strictest applicable standard, then adjusting disclosures and consent mechanisms state by state. A carrier operating in all 50 states can't maintain 50 different compliance frameworks. But it can maintain one framework built to the highest standard with state-specific overlays.
Current Research and Evidence
Research into regulatory approaches for digital health in insurance is still developing, but several notable contributions have shaped the conversation.
Kennedys Law published a 2025 analysis of the NAIC Model AI Bulletin that stressed the importance of cross-functional governance structures. Their assessment found that carriers treating AI governance as a purely legal or purely IT function were more likely to have gaps in their compliance programs. The recommendation was to build governance committees that include actuarial, underwriting, legal, compliance, IT, and data science representation.
Vertafore's 2026 compliance trends report tracked 757 regulatory changes across U.S. insurance markets in 2025 alone. Their analysis projected the pace would not slow in 2026, with additional updates expected around AI transparency, data privacy, and consumer notification.
Munich Re's white paper on privacy regulation impacts analyzed how overlapping privacy frameworks — GDPR, CCPA, BIPA, and state-specific laws — create compliance complexity for insurers handling biometric and health data across jurisdictions.
The Joint Commission partnered with the Coalition for Health AI (CHAI) in September 2025 to release guidance for responsible AI adoption across health systems. While aimed at clinical settings rather than insurance, the framework's emphasis on validation, monitoring, and documentation mirrors what insurance regulators are asking for.
The Future of Digital Health Underwriting Compliance
Expect three developments over the next 18–24 months.
First, more states will adopt the NAIC Model Bulletin. The current count of twenty-four will likely reach thirty or more by the end of 2027. Each adoption creates another jurisdiction where carriers need to demonstrate AI governance.
Second, federal legislation may consolidate some of the patchwork. Several bills addressing AI regulation in financial services have been introduced in Congress. Whether any pass is uncertain, but the direction of travel is toward more oversight, not less.
Third, compliance technology will mature. Today, most carriers manage AI governance through spreadsheets, internal memos, and manual audit processes. Dedicated AI governance platforms for insurance are emerging, and carriers that invest early will have an advantage when regulators start asking more pointed questions.
The carriers that get ahead of this will treat compliance not as a cost center but as a competitive advantage. Being able to demonstrate a rigorous governance framework becomes a selling point with reinsurers, distribution partners, and increasingly, with applicants themselves.
Frequently Asked Questions
Does rPPG facial scanning count as biometric data under BIPA?
It depends on the implementation. If the technology captures or analyzes facial geometry — even temporarily — it could fall under BIPA's definition of biometric identifiers. The safest approach is to treat it as biometric data and obtain BIPA-compliant written consent before any camera-based capture. Some implementations process video frames entirely on-device and only transmit derived metrics, which may reduce exposure, but legal counsel should evaluate the specific data flow.
Do we need separate consent for each state where we operate?
Not necessarily separate consent forms, but your consent mechanism needs to satisfy the strictest applicable standard. Build a baseline consent process that meets BIPA requirements (written, informed consent with specific purpose disclosure and data retention policies), then add state-specific language where needed. A single well-drafted consent form can work across jurisdictions if it addresses all relevant requirements.
What happens if a state regulator audits our AI underwriting system?
Under the NAIC Model Bulletin framework, regulators expect to see documented governance policies, risk assessments, testing results (including fairness testing), audit trails, and third-party vendor oversight records. Colorado goes further and requires proactive reporting. The key is having everything documented before the audit — reconstructing governance documentation after the fact is both difficult and unconvincing to regulators.
Can we use digital health data for underwriting without triggering HIPAA?
If the carrier collects vitals data directly from the applicant without involving a covered entity, HIPAA may not apply. But state health privacy laws, insurance privacy regulations, and GLB Act requirements still govern how that data is handled. And if a third-party vendor involved in data processing qualifies as a healthcare provider or business associate, HIPAA could apply to part of the data flow. Map the entire data chain before concluding HIPAA doesn't apply.
The compliance landscape for digital health underwriting is complex, but it's navigable. Carriers already managing traditional underwriting compliance have the institutional muscle for this — the frameworks just need updating. Solutions like Circadify are building digital health screening with compliance architecture in mind, designing data flows that separate biometric capture from underwriting decisions and support the governance documentation regulators want to see.
For more on how digital health data integrates with existing underwriting systems, see our analysis of how digital health data integrates with Rx and MIB checks.
