
Privacy & Security Test

Evaluates whether user data is protected and the system has safeguards against breaches or misuse.

For communities already over-surveilled or underserved, the stakes are higher. A leak or misuse can compromise safety. AI systems often collect sensitive data at scale and use third-party tools, creating new attack surfaces. Federal regulators are tightening requirements (FTC's COPPA updates, HHS rules for health data), making strong privacy and security practices both an ethical imperative and a compliance necessity.

What Good Looks Like

Clear, plain-language opt-in consent with easy-to-access opt-out mechanisms
Data minimization: collecting only what's genuinely needed for the stated purpose
Encryption of data both in transit and at rest
Role-based access controls limiting who can see sensitive data
Audit logs tracking who accessed what and when (see the access-control sketch after this list)
Third-party security audit completed within the last 12 months
Clear data retention and deletion policies with specific timelines
Separate consent for AI use (not bundled with general service agreement)
Proper agreements in place: DPA at minimum, BAA for HIPAA contexts
Documented incident response plan with user notification procedures
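
To make two of these items concrete, here is a minimal sketch of how role-based access control and audit logging can work together. The roles, field names, and in-memory log are hypothetical placeholders, not a prescribed design.

import datetime

# Minimal sketch: role-based access control paired with an audit log.
# Roles, fields, and the in-memory log are hypothetical placeholders.
ROLE_PERMISSIONS = {
    "clinician": {"name", "health_notes"},
    "billing": {"name", "invoice_total"},
    "analyst": set(),  # analysts work only with de-identified aggregates
}

audit_log = []  # production systems need an append-only, tamper-evident store

def read_field(user_id, role, record, field):
    allowed = field in ROLE_PERMISSIONS.get(role, set())
    # Log every attempt, permitted or denied: who, what, when, outcome.
    audit_log.append({
        "who": user_id,
        "role": role,
        "field": field,
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"role '{role}' may not read '{field}'")
    return record[field]

record = {"name": "J. Doe", "health_notes": "...", "invoice_total": 120.0}
print(read_field("u42", "billing", record, "invoice_total"))  # permitted
print(len(audit_log), "access attempt(s) logged")

The pairing matters: access checks without logs cannot answer "who accessed what and when," and logs without checks do not limit exposure.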

What to Watch Out For

Vague promises about data protection ("we take security seriously") without specifics
No mention of encryption, access controls, or security audits
Collecting more data than necessary for the stated purpose
Sharing data with third parties without clear, separate consent
No incident response plan for when breaches occur
Missing data retention/deletion policies or unclear timelines
Consent bundled with service access (users can't use service without agreeing to AI)
Using third-party AI tools without Data Processing Agreements

Tests To Apply

□ Is user data encrypted both in transit and at rest? (see the automated-check sketch after this list)
□ Do users have rights to access, correct, and delete their data?
□ Is there a clear data retention and deletion policy with specific timelines?
□ Has the system undergone a third-party security audit in the last 12 months?
□ Are there documented breach notification procedures with user notification timelines?
□ Is consent obtained separately for AI use (not bundled with service access)?
□ Is there a Data Processing Agreement (DPA) or Business Associate Agreement (BAA) if applicable?
□ Are there role-based access controls limiting who can see sensitive data?
□ Are audit logs maintained showing who accessed what data and when?
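
Several of these tests can be partially automated. Below is a minimal sketch assuming a hypothetical deployment config exposed as a dictionary; the keys are illustrative, and real checks would query actual infrastructure and documentation.

import datetime

# Sketch: automated pass/fail checks against a hypothetical deployment config.
# Every key below is an illustrative assumption, not a standard schema.
config = {
    "tls_enabled": True,                  # encryption in transit
    "storage_encrypted_at_rest": True,    # encryption at rest
    "retention_days": 365,                # documented deletion timeline
    "last_external_audit": "2025-06-01",  # third-party audit date
    "breach_notification_hours": 72,      # user notification window
}

def check(name, passed):
    print(f"[{'PASS' if passed else 'FAIL'}] {name}")
    return passed

audit_age_days = (datetime.date.today()
                  - datetime.date.fromisoformat(config["last_external_audit"])).days

results = [
    check("data encrypted in transit", config["tls_enabled"]),
    check("data encrypted at rest", config["storage_encrypted_at_rest"]),
    check("retention timeline defined", config.get("retention_days") is not None),
    check("third-party audit within 12 months", audit_age_days <= 365),
    check("breach notification window defined",
          config.get("breach_notification_hours") is not None),
]
print("all checks passed" if all(results) else "follow up on the failures above")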

Key Questions to Ask

  • What personal data are you collecting and why is each piece necessary?

  • Who has access to user data and what controls prevent unauthorized access?

  • How are you getting informed consent from users, especially minors or non-English speakers?

  • If there's a data breach, what's your response plan and timeline for user notification?

  • Have you completed a third-party security audit, and can we see the results?

Apply the Cross-Cutting Lenses

After evaluating the core criteria above, apply these two additional lenses to assess equity outcomes and evidence quality.

Equity & Safety Check

When evaluating Privacy & Security through the equity and safety lens, assess whether protections are equally strong across all user groups and whether breaches could disproportionately harm vulnerable communities.

Gate Assessment:

🟢 CONTINUE: Subgroup parity demonstrated, incidents rare and quickly resolved, protections proven in pilot

🟡 ADJUST: Monitoring exists and gaps detected, mitigation plan in progress with timeline

🔴 STOP: Privacy gaps by subgroup with no mitigation plan, or vulnerable groups systematically excluded from protections

Check for:

□ Are privacy outcomes tracked separately by relevant subgroups (language, disability status, device type, geography)? (a minimal parity sketch follows this list)

□ Could a breach disproportionately harm certain users (e.g., undocumented immigrants, people in domestic violence situations, minors)?

□ Are consent processes accessible to users with limited literacy, non-English speakers, or those using assistive technologies?

□ Is there a named owner responsible for monitoring equity gaps in privacy outcomes?

□ Are there rollback triggers if certain groups experience higher rates of privacy violations?

□ Do incident response plans account for varying levels of harm to different communities?
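
One way to operationalize subgroup tracking and rollback triggers is to compare each group's privacy-incident rate against the overall rate. The sketch below uses invented subgroups, counts, and a 1.5x threshold purely for illustration.

# Sketch: flag subgroups whose privacy-incident rate exceeds a rollback threshold.
# Subgroups, counts, and the 1.5x margin are illustrative assumptions.
incidents = {  # subgroup -> (privacy incidents, active users)
    "english_ui": (4, 10_000),
    "spanish_ui": (9, 3_000),
    "screen_reader": (2, 500),
}

total_incidents = sum(hits for hits, _ in incidents.values())
total_users = sum(users for _, users in incidents.values())
overall_rate = total_incidents / total_users

THRESHOLD = 1.5  # rollback trigger: any subgroup above 1.5x the overall rate

for group, (hits, users) in incidents.items():
    rate = hits / users
    status = "ROLLBACK TRIGGER" if rate > THRESHOLD * overall_rate else "ok"
    print(f"{group:>13}: {rate:.3%} vs overall {overall_rate:.3%} -> {status}")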

Evidence & Uncertainty Check

When evaluating Privacy & Security through the evidence and uncertainty lens, assess whether security claims rest on verified testing and whether limitations are transparently acknowledged.

Quality Grade:

🅰️ A (Strong): Third-party security audit completed, causal comparison to baseline, documented incident response with track record

🅱️ B (Moderate): Internal security testing, reasonable evidence of protections, plan to conduct formal audit in next phase

🅲 C (Weak): No independent testing, vague security claims, major unknowns; requires deeper investigation

Check for:

□ Has the security infrastructure been independently audited by a third party?

□ Is there baseline data on current privacy risks before AI deployment?

□ Are breach/incident rates tracked with frequency and severity, not just "zero incidents" claims? (see the severity sketch after this list)

□ Are known vulnerabilities documented with severity ratings and mitigation timelines?

□ Is there a plan for causal comparison showing the AI maintains or improves privacy relative to the status quo?

□ Are they transparent about what they CAN'T protect (e.g., "encryption protects data at rest but not data in use")?

□ Do they acknowledge uncertainty in emerging threats (e.g., prompt injection risks)?
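
For the frequency-and-severity check, the sketch below shows the kind of summary that is more informative than a bare "zero incidents" claim. The incident records and the 1 (low) to 4 (critical) severity scale are invented for illustration.

from collections import Counter

# Sketch: summarize incidents by month and severity instead of a bare count.
# The records and the 1 (low) to 4 (critical) scale are invented examples.
incidents = [
    {"month": "2025-01", "severity": 1, "resolved_hours": 6},
    {"month": "2025-01", "severity": 3, "resolved_hours": 40},
    {"month": "2025-03", "severity": 2, "resolved_hours": 12},
]

by_month = Counter(i["month"] for i in incidents)
by_severity = Counter(i["severity"] for i in incidents)
worst = max(incidents, key=lambda i: i["severity"])

print("monthly frequency:", dict(by_month))
print("severity mix:", dict(sorted(by_severity.items())))
print(f"worst case: severity {worst['severity']}, resolved in {worst['resolved_hours']}h")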
