Implementability
Evaluates whether the project has the people, data, and systems needed to actually launch and sustain the tool.
Building has never been more accessible, but it is far easier to sketch a great AI idea than to run one. Most AI projects don't fail because of the AI's functionality; they fail because the people, data, or systems needed to launch and sustain the tool are missing. If an MVP doesn't fit the languages, accessibility needs, or workflows of intended users, benefits flow to the easiest-to-reach groups while others wait, often widening divides. Feasibility for equity-centered AI means proving the solution works in real contexts, not just in demos.
What Good Looks Like
✓ Named owners for product/operations, data privacy/security, and frontline users
✓ Data readiness documented: available fields, quality snapshots, access rights, consent status, update frequency (see the sketch after this list)
✓ Operations built in from the start: monitoring for quality/safety/equity/cost, an incident response plan, and rollback mechanisms
✓ Practical scope: prioritizes low-risk tasks for initial deployment, proves accuracy before high-stakes automation
✓ Simple, contextually aware UX with little to no barrier to entry
✓ Works on users' actual devices (not just developer laptops) and fits the contexts where they'll actually use it
✓ Supports relevant languages and assistive technologies from day one
✓ User feedback mechanisms and clear path to reach a human
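A data readiness entry doesn't need to be elaborate; one structured record per data source is enough to answer most of the questions above. Below is a minimal sketch in Python of what such a record could capture; the field names and the example source are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class DataSourceReadiness:
    """One entry per data source the tool depends on (illustrative fields)."""
    source: str                  # where the data comes from
    fields_available: list[str]  # columns actually present today
    quality_notes: str           # completeness, known errors, last audit
    access_rights: str           # who may use it, under what agreement
    consent_status: str          # what users consented to, and when
    update_frequency: str        # how often the data refreshes

# Hypothetical example entry, for illustration only
benefits_enrollment = DataSourceReadiness(
    source="benefits_enrollment_export",
    fields_available=["household_size", "income_band", "preferred_language"],
    quality_notes="~12% of language fields blank; last audited two months ago",
    access_rights="Data-sharing agreement with the county; internal use only",
    consent_status="Consent covers eligibility screening, not model training",
    update_frequency="Nightly batch",
)
```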
What to Watch Out For
✗ No named owner for product, operations, or data privacy
✗ Vague statements about data availability without specifics on quality, access rights, or completeness
✗ No plan for staff training or change management
✗ Accessibility is an afterthought (e.g., "we'll add Spanish later")
✗ Starting with high-stakes automation before proving AI works in low-stakes contexts
✗ No documented incident response or rollback plan
✗ Only tested with staff or developers, not actual end users in real environments
Tests To Apply
□ Are there named owners for product/operations, data privacy/security, and frontline users?
□ Have they documented data readiness: what fields exist, quality level, access rights, consent status, update frequency?
□ Is there a monitoring plan for quality, safety, equity, and cost with alert thresholds (a minimal sketch follows this list)?
□ Does it work on users' actual devices (not just developers' laptops)?
□ Does it support relevant languages and assistive technologies from day one?
□ Is scope limited to low-risk tasks initially, with high-stakes automation only after safety is proven?
□ Is there an incident response plan and rollback mechanism?
□ Are there user feedback mechanisms embedded in the interface?
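To make the monitoring test concrete, the sketch below shows one way alert thresholds for quality, safety, equity, and cost could be written down and checked; the metric names and numbers are illustrative assumptions, not recommended targets.

```python
# Illustrative monitoring thresholds; all metric names and values are
# assumptions for the sake of example, not recommendations.
ALERT_THRESHOLDS = {
    "quality": {"task_success_rate_min": 0.90},    # below this, alert the product owner
    "safety":  {"flagged_output_rate_max": 0.01},  # above this, pause and review
    "equity":  {"subgroup_success_gap_max": 0.05}, # max allowed gap vs. the overall rate
    "cost":    {"monthly_spend_usd_max": 5000},    # budget ceiling before escalation
}

def breached(metrics: dict[str, float]) -> list[str]:
    """Return which thresholds are breached for a batch of observed metrics."""
    alerts = []
    if metrics["task_success_rate"] < ALERT_THRESHOLDS["quality"]["task_success_rate_min"]:
        alerts.append("quality: task success rate below minimum")
    if metrics["flagged_output_rate"] > ALERT_THRESHOLDS["safety"]["flagged_output_rate_max"]:
        alerts.append("safety: flagged output rate above maximum")
    if metrics["subgroup_success_gap"] > ALERT_THRESHOLDS["equity"]["subgroup_success_gap_max"]:
        alerts.append("equity: subgroup performance gap above maximum")
    if metrics["monthly_spend_usd"] > ALERT_THRESHOLDS["cost"]["monthly_spend_usd_max"]:
        alerts.append("cost: monthly spend above ceiling")
    return alerts
```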
Key Questions to Ask
- Who specifically owns this system end-to-end, and who decides when to pause or roll back?
- What data do you actually have access to right now, and what's its quality?
- How will you train staff to use this, and what if they don't adopt it?
- Does this work for users with disabilities, limited English, or older devices?
- Have you tested this with actual end users in their real environment?
Apply the Cross-Cutting Lenses
After evaluating the core criteria above, apply these two additional lenses to assess equity outcomes and evidence quality.
Equity & Safety Check
When evaluating Implementability through the equity and safety lens, assess whether the system works equally well for all users and whether implementation gaps could harm vulnerable populations.
Gate Assessment:
🟢 CONTINUE: Tested with diverse users, works across contexts, frontline staff validated feasibility
🟡 ADJUST: Works for some populations, gaps identified with mitigation timeline
🔴 STOP: System only works for easy-to-reach users, accessibility gaps with no plan, no community input
Check for:
□ Was feasibility tested with actual users from all relevant subgroups (not just convenient populations)?
□ Does the system work on devices actually used by target communities (not just new smartphones)?
□ Are all necessary languages and accessibility features included from day one (not "we'll add later")?
□ Could implementation challenges lead to excluding harder-to-reach populations?
□ Is there a named owner responsible for ensuring equity in access and usability?
□ Are there rollback triggers if adoption/satisfaction is much lower for certain groups (see the sketch after this list)?
□ Do frontline staff from affected communities have input on feasibility (not just technical staff)?
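One way to make the rollback-trigger check testable is to compare each subgroup's adoption or satisfaction rate against the overall rate and flag any gap beyond an agreed limit. The sketch below assumes those rates are already being measured; the 15-point gap and the group names are illustrative, not recommendations.

```python
def rollback_needed(overall_rate: float,
                    subgroup_rates: dict[str, float],
                    max_gap: float = 0.15) -> list[str]:
    """Return subgroups whose adoption/satisfaction trails the overall rate
    by more than max_gap; a non-empty result should trigger the rollback review."""
    return [group for group, rate in subgroup_rates.items()
            if overall_rate - rate > max_gap]

# Hypothetical usage: Spanish-speaking users trail by 20 points, triggering review
lagging = rollback_needed(0.80, {"spanish_speakers": 0.60, "screen_reader_users": 0.75})
```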
Evidence & Uncertainty Check
When evaluating Implementability through the evidence and uncertainty lens, assess whether feasibility claims are backed by real-world testing and whether gaps are acknowledged.
Quality Grade:
🅰️ A (Strong): Real-world testing completed, quantified adoption data, all dependencies verified and available
🅱️ B (Moderate): Pilot testing with some users, reasonable evidence of feasibility, known gaps with mitigation plan
🅲 C (Weak): Only lab testing or demos, dependencies unverified, many unknowns; high implementation risk
Check for:
□ Was the system tested in actual user environments (not just lab conditions)?
□ Is there quantified data on adoption/usability from pilot testing (not just "users liked it")?
□ Are known implementation gaps documented with severity and mitigation plans?
□ Do they acknowledge data quality issues and how they'll affect performance?
□ Is there evidence that necessary staff/systems/data are actually available (not just planned)?
□ Are infrastructure requirements validated (not just vendor specs)?
□ Do they acknowledge what could go wrong during implementation and have contingency plans?
