From the GDPR requirement to U.S. state risk-assessment mandates, DPIAs and PIAs are essential governance tools for the modern data economy.

By Axel Spies and Lisa Zolidis

What Is a Data Privacy Impact Assessment?

A Data Privacy Impact Assessment (DPIA)—often referred to with a broader remit in the United States as a Privacy Impact Assessment (PIA)—is a structured, documented, forward-looking process designed to identify, assess, and mitigate privacy risks before a new data processing activity begins. The concept is most closely associated with Article 35 of the EU General Data Protection Regulation (GDPR), which requires organizations to carry out a Data Protection Impact Assessment when processing is “likely to result in a high risk to the rights and freedoms of natural persons.” Essentially, under GDPR Article 35, a DPIA must describe the contemplated processing activity, assess its necessity and proportionality, evaluate risks to individuals, and identify measures to address those risks. Importantly, the GDPR positions the DPIA not as a defensive document prepared after a problem arises, but as a preventive compliance mechanism embedded early in product design and operational planning.

While DPIAs are often viewed as a distinctly European instrument, the concept has become increasingly influential in the United States. In fact, privacy impact assessments represent one of the clearest examples of U.S. state legislatures borrowing a GDPR-inspired governance model and adapting it to U.S. legal traditions and regulatory structures. While many U.S. companies have implemented PIAs in one form or another since at least the advent of the GDPR, many U.S. state laws now follow suit by making PIAs a legal obligation.

California Leads the U.S. with a Risk-Based Model

California’s privacy regime—the California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA)—contains the most developed U.S. regulatory analogue to the GDPR DPIA requirement. California has essentially codified in its regulations what has long been the dominant practice of well-advised data privacy programs across government and corporate entities. The California privacy regulator, CalPrivacy (formerly known as the CPPA), is tasked with enforcing the rules.

California law does not use the term “Privacy Impact Assessment” explicitly. However, Article 10 of the updated CCPA Regulations that took effect in January 2026 requires a risk assessment when a processing activity presents “significant risk” to the consumer’s privacy. The Regulations further require businesses to conduct and update risk assessments as soon as feasibly possible, and no later than 45 calendar days, after any material change to the processing activity occurs. Businesses must also review and update their risk assessments at least once every three years, independent of any material changes in between. 11 Cal. Code Regs. §§ 7150–7156.
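
For illustration only, the interplay of these two clocks can be sketched in a few lines of Python. The function name and rule interpretation below are our own assumptions, not drawn from the Regulations:

from datetime import date, timedelta

def next_assessment_deadlines(last_review: date, material_change: date | None = None):
    """Illustrative deadline math only: an update is due no later than 45
    calendar days after a material change, and a routine review falls due
    three years after the last one. Not legal advice."""
    routine_review = last_review.replace(year=last_review.year + 3)  # naive 3-year clock
    change_update = material_change + timedelta(days=45) if material_change else None
    return change_update, routine_review

# A material change on Sept. 10, 2026 must be reflected by Oct. 25, 2026;
# the routine review of a March 1, 2026 assessment falls due March 1, 2029.
print(next_assessment_deadlines(date(2026, 3, 1), date(2026, 9, 10)))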

What “Significant Risk” Means Under the CCPA

The Regulations explicitly define which categories of processing present significant risk. Under the Regulations, processing that presents significant risk includes (a) selling or sharing personal information; (b) processing “sensitive personal information” (with stated exceptions); (c) using automated decisionmaking technology (ADMT) for a “significant decision” about a consumer; (d) using ADMT to infer or extrapolate any of a number of conclusions about a consumer from systematic observation of the consumer or from the consumer’s location; or (e) processing personal information with the intention to train an ADMT for a significant decision concerning a consumer or “to train a facial-recognition, emotion-recognition, or other technology that verifies a consumer’s identity, or conducts physical or biological identification or profiling of a consumer.” 11 Cal. Code Regs. § 7150(b).

  • “Sensitive personal information” is defined as personal information that includes Social Security, driver’s license, state identification card, or passport numbers; account log-in, financial account, debit card, or credit card numbers (in combination with any required security code); precise geolocation; racial or ethnic origin; the contents of an individual’s mail, e-mail, or text messages (unless the business is the intended recipient of the communication); biometric information; information about an individual’s health and sex life; and information about an individual that the business has actual knowledge is less than 16 years of age. 11 Cal. Code Regs. § 7001(bbb).
  • “Significant decision” is defined as a decision that determines access to (or denial of) financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice (e.g., posting of bail bonds), employment or independent contracting opportunities or compensation, healthcare services, or essential goods and services (e.g., groceries, medicine, hygiene products, or fuel). 11 Cal. Code Regs. § 7001(ddd).
  • ADMT is defined in the Regulations as “any technology that processes personal information and uses computation to replace human decision making or substantially replace human decision making.” 11 Cal. Code Regs. § 7001(e). The inclusion of various forms of ADMT processing among those that pose significant risk reflects the regulator’s focus on algorithmic systems that materially affect rights and opportunities without meaningful human oversight.

Together, these enumerated triggers provide a clearer regulatory framework than many prior U.S. privacy laws for determining when a formal risk assessment is required. Rather than leaving “significant risk” undefined, 11 Cal. Code Regs. § 7150(b) gives companies concrete processing categories that automatically require evaluation before they proceed with the activity. The Regulations also provide illustrative examples.
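
Purely as an illustration of how these enumerated triggers can be operationalized in intake tooling, the following Python sketch encodes the five § 7150(b) categories as a simple screen. The field names and structure are our own hypothetical choices, and the exceptions stated in the Regulations are not modeled:

from dataclasses import dataclass

@dataclass
class ProposedProcessing:
    """Hypothetical intake record for a proposed processing activity."""
    sells_or_shares_pi: bool         # (a) selling or sharing personal information
    uses_sensitive_pi: bool          # (b) sensitive personal information (exceptions omitted)
    admt_significant_decision: bool  # (c) ADMT used for a "significant decision"
    admt_systematic_inference: bool  # (d) ADMT inference from systematic observation or location
    trains_covered_admt: bool        # (e) training ADMT or identity/profiling technology

def risk_assessment_required(p: ProposedProcessing) -> bool:
    """Any one enumerated trigger means a formal risk assessment is needed
    before the activity proceeds; counsel should confirm edge cases."""
    return any([
        p.sells_or_shares_pi,
        p.uses_sensitive_pi,
        p.admt_significant_decision,
        p.admt_systematic_inference,
        p.trains_covered_admt,
    ])

# Example: a product that sells personal information triggers an assessment.
print(risk_assessment_required(ProposedProcessing(True, False, False, False, False)))  # True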

How California’s Model Compares to GDPR Article 35

California’s approach clearly echoes GDPR Article 35, but it also diverges in important ways. Both regimes embrace a risk-based approach and require organizations to evaluate privacy impacts before launching high-risk processing. However, the GDPR relies on a broad, flexible “high risk” standard tied to fundamental rights and freedoms, whereas California opts for more regulatory specificity, identifying defined triggers that automatically require an assessment. Oversight mechanisms also differ. Under the GDPR, a DPIA may require prior consultation with a supervisory authority if residual high risk remains. Under California law, risk assessments must be submitted to CalPrivacy on a scheduled basis and produced upon request, transforming them into a tool of regulatory accountability rather than a purely internal compliance document.

California’s risk-assessment framework takes on particular significance in the context of artificial intelligence and automated decision-making technologies. CalPrivacy’s final regulations addressing ADMT were approved and filed with the Office of Administrative Law on September 23, 2025. These rules require businesses using ADMT to make significant decisions to assess risks related not only to data privacy but also to fairness, bias, transparency, and consumer autonomy. In effect, unlike the GDPR, California has already embedded AI governance into its privacy risk-assessment framework, positioning the CCPA as one of the most advanced U.S. laws governing automated decision making.

Other U.S. States Also Require PIAs

California is not alone. Several other comprehensive state privacy laws require some form of data protection assessment for high-risk processing activities such as targeted advertising, profiling with significant effects, or processing sensitive data. Many of these states have also passed, or are contemplating, new legislation seeking to limit bias and other potentially harmful effects of ADMT driven by artificial intelligence tools.

As of the end of 2025, 18 states have passed laws that require a privacy impact assessment, referred to in some states as a data protection assessment or risk assessment. These 18 states are: California, Colorado, Connecticut, Delaware, Florida (in a limited law applicable only to certain large technology companies), Indiana, Kentucky, Maryland, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, Oregon, Rhode Island, Tennessee, Texas, and Virginia.

While the individual rules vary, all 18 states require a PIA when a covered entity (a data controller) processes personal information in a way that presents a heightened risk of harm to a consumer. Processing that presents a heightened risk of harm is typically defined as:

  • Processing personal information for targeted advertising purposes
  • “Selling” personal information (as defined in the relevant laws and generally interpreted fairly broadly)
  • Processing personal information for profiling purposes where profiling presents a reasonably foreseeable risk of:
    • Unfair or deceptive treatment of, or unlawful disparate impact on, consumers
    • Financial, physical, or reputational injury to consumers
    • A physical or other intrusion upon the solitude or privacy of consumers that would be offensive to a reasonable person
    • Other substantial injury to consumers
  • Processing sensitive personal information

See, e.g., Conn. Gen. Stat. § 42-522.

In another difference from the GDPR approach, U.S. state privacy laws tend to spell out the analysis that an acceptable PIA must include. For example, in Delaware, a PIA must weigh the benefits that may flow, directly and indirectly, from the personal data processing to the controller, the consumer, other stakeholders, and the public against the potential risks to the rights of the consumer associated with the processing, as mitigated by safeguards the controller can employ to reduce those risks. 6 Del. C. § 12D-108.

Variations among state laws include the threshold that triggers the requirement to conduct a PIA, the timing of when a PIA must be conducted, and how a state’s attorney general may request or demand a PIA. Only California currently has a stand-alone privacy regulator (CalPrivacy) separate from the state’s attorney general. While these laws vary in terminology and detail, they reflect a growing consensus that impact assessments are a necessary mechanism for managing privacy risk in complex data ecosystems.

From Theory to Practice: Templates and Practical Guidance

A frequent question for organizations and businesses subject to these requirements is how to operationalize them. U.S. privacy laws generally do not mandate the use of a specific DPIA or PIA template, but several authoritative resources are available. At the federal level, agencies such as the U.S. Department of Justice publish publicly available Privacy Impact Assessment templates that walk through data collection, use, sharing, retention, and risk mitigation. For companies with global operations, European resources are available from various regulators. For example, DPIA templates issued by the UK Information Commissioner’s Office and guidance from the European Data Protection Board outline clear, step-by-step methodologies.

Many companies prefer to create their own PIA questionnaires and standards of review that align both with the fundamental information privacy principles underlying modern data privacy law and with the company’s own operational or functional design. For example, some companies use different PIA questionnaires for marketing teams than for product development or research and development teams. Several privacy technology providers offer customizable tools and templates based on global or individual regulatory standards, which companies may choose based on their industries and needs.
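
As a minimal sketch of what a team-specific questionnaire might look like in structured form (every field and question below is a hypothetical example, not drawn from any statute or vendor template), responses can be modeled as data so they are easy to route, version, and audit:

from dataclasses import dataclass, field

@dataclass
class PIAQuestion:
    prompt: str        # question put to the business team
    answer: str = ""   # completed during intake review

@dataclass
class PIAQuestionnaire:
    team: str          # e.g., "marketing" vs. "product development"
    activity: str      # the processing activity under review
    questions: list[PIAQuestion] = field(default_factory=list)

# Hypothetical marketing-facing questionnaire; a product-development team
# would receive different questions tuned to its data flows.
marketing_pia = PIAQuestionnaire(
    team="marketing",
    activity="new email retargeting campaign",
    questions=[
        PIAQuestion("What categories of personal information are collected?"),
        PIAQuestion("Is any personal information sold or shared for targeted advertising?"),
        PIAQuestion("What retention period applies, and why is it necessary?"),
        PIAQuestion("What safeguards mitigate the risks identified above?"),
    ],
)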

As a general rule, U.S. private-sector PIAs and DPIAs under the GDPR need not be made public. However, California’s new regulations require private businesses to submit risk assessments (or summaries and attestations) to CalPrivacy, with the first submission due by April 1, 2028, covering assessments completed for calendar years 2026 and 2027. 11 Cal. Code Regs. § 7157. These submissions are not automatically public, but they must be sufficiently detailed to withstand regulatory scrutiny.

Enhance PIA Programs Now and Build for Scale

The trajectory is clear. Privacy impact assessments have long been good business practice, not just a European requirement. They are essential to building privacy by design, identifying and managing privacy risks, and honoring a range of regulatory requirements. Consistently including PIAs in standard procedural checklists, and documenting both the analyses and the actions taken to mitigate the risks they identify, serves regulatory compliance and prevents potential harms from going undetected.

The trend to include PIAs as part of necessary compliance continues to expand. Led by California’s significant-risk framework and reinforced by numerous other state laws, the U.S. legal environment has steadily embraced PIA obligations as a core element of privacy compliance. The tool is now particularly relevant for protecting sensitive data and setting protective guardrails around AI-driven decision making.

