AI Readiness Checklist: 30 Questions to Assess Before Your First AI Project
Why You Need an AI Readiness Checklist (Not Just a Strategy)
Most companies that fail at AI don’t fail because they picked the wrong model or the wrong vendor. They fail because they weren’t operationally ready to use what they built.
An MIT study found that 95% of generative AI implementations showed no measurable impact on profit and loss. The technology worked fine; the organizations around it weren’t ready. Data was scattered. Teams weren’t prepared. Governance didn’t exist. The AI did exactly what it was designed to do, and nobody acted on the output.
Strategy documents don’t catch these problems. They describe where you want to go. A checklist forces honest assessment of where you actually are, across the dimensions that determine whether an AI project delivers results or quietly dies after the pilot.
This checklist maps to Fountain City’s 5-Stage AI Readiness Model, a framework we developed from building and deploying autonomous AI systems across multiple industries. Each of the 30 items below contributes to a score that tells you which stage you’re at and what to work on next.


How to Use This Checklist
Score each item on a 0-1-2 scale:
- 0 — Not in place. You haven’t started on this.
- 1 — Partially in place. Some progress, but gaps remain.
- 2 — Fully in place. Documented, operational, and current.
Total your score across all 30 items (maximum: 60). The scoring section below maps your total to a readiness stage with specific next steps.
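If you want to keep the tallying honest during a group exercise, a few lines of code can do the math. A minimal Python sketch, using the stage thresholds from the scoring table later in this article; the item scores shown are placeholders you would replace with your own:

```python
# Map a completed checklist to a readiness stage.
# Scores: one 0/1/2 value per checklist item (30 items, max 60).
scores = [2, 1, 0, 1, 2] + [1] * 25  # placeholder values

assert len(scores) == 30 and all(s in (0, 1, 2) for s in scores)
total = sum(scores)

# Stage thresholds from the scoring table: (lower bound, stage name).
stages = [
    (49, "Stage 5: Scaling"),
    (37, "Stage 4: Implementation"),
    (25, "Stage 3: Planning"),
    (13, "Stage 2: Exploration"),
    (0,  "Stage 1: Awareness"),
]
stage = next(name for floor, name in stages if total >= floor)
print(f"Total: {total}/60 -> {stage}")
```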
One recommendation: do this as a cross-functional team exercise. AI readiness is not a technology question. Operations, finance, and the people who will actually work alongside AI every day all need to be in the room. A CTO scoring everything a 2 while the operations team would score it a 0 is a problem you want to find now, not three months into a pilot.
The Checklist: 30 Questions Across 6 Dimensions
Dimension 1: Business Strategy and Use Cases (Items 1–5)
1. Do you have specific business problems you want AI to solve?
Not “we need AI” but “we need to reduce quote turnaround from 3 days to 3 hours” or “we need to process 500 support tickets per day without adding headcount.” AI without a defined problem is a science project.
2. Have you identified 2-3 concrete use cases with measurable success criteria?
Each use case should have a clear metric: cost saved, time reduced, throughput increased. If you can’t measure success, you can’t prove it worked.
3. Can you estimate the financial impact of solving these problems?
Even rough estimates help. If automating invoice processing saves $80K per year in labor, that frames the investment conversation. If you can’t estimate the value, the project will struggle to get budget when competing priorities arise.
4. Is there executive sponsorship with budget authority?
AI projects that live in the innovation lab without a budget owner tend to stay in the innovation lab. Someone with authority to allocate resources and remove organizational obstacles needs to own this.
5. Have you benchmarked how competitors are using AI in your space?
Understanding where your industry stands helps calibrate expectations and urgency. If competitors are already automating processes you still do manually, closing your readiness gaps becomes more urgent.
Dimension 2: Data Readiness (Items 6–10)
6. Is your core operational data digitized and accessible?
If critical business data lives in spreadsheets, paper files, or individual employees’ heads, AI has nothing to work with. Data that exists but can’t be accessed programmatically is effectively invisible.
7. Do you have at least 12 months of historical data for your target use cases?
Most AI systems need historical patterns to generate useful outputs. Twelve months is a practical minimum for capturing seasonal variation, typical workflows, and enough examples to identify patterns.
8. Is your data quality documented?
Completeness, accuracy, and freshness all matter. Multiple industry reports consistently identify data quality as the top barrier to successful AI deployment. If you don’t know the state of your data, you don’t know what your AI will produce.
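Documenting data quality can start as a script, not a project. A minimal sketch using pandas, assuming a CSV export of one core table; the file name and the `updated_at` column are hypothetical:

```python
import pandas as pd

# Load one core table; "customers.csv" is a hypothetical export.
df = pd.read_csv("customers.csv", parse_dates=["updated_at"])

# Completeness: share of missing values per column.
print(df.isna().mean().round(3))

# Accuracy proxy: duplicate records inflate counts and skew outputs.
print("duplicate rows:", df.duplicated().sum())

# Freshness: how stale is the most recent record?
print("most recent update:", df["updated_at"].max())
```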
9. Can your data be accessed programmatically?
APIs, database queries, automated exports from your data warehouse. If the only way to get data out of a system is a manual CSV export, that bottleneck will slow everything down.
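A quick way to test this: try pulling a handful of records without anyone clicking an export button. A minimal sketch against a hypothetical REST endpoint with token auth; your systems’ actual APIs will differ:

```python
import requests

# Hypothetical endpoint and token; substitute your system's actual API.
API_URL = "https://example-crm.invalid/api/v1/customers"
TOKEN = "your-api-token"

resp = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"limit": 5},  # a smoke test: can we get data at all?
    timeout=10,
)
resp.raise_for_status()
for record in resp.json():
    print(record)

# If the only alternative is a manual CSV export, this loop is your bottleneck.
```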
10. Do you have data governance policies?
Who owns the data? Who can access it? How long is it retained? These aren’t theoretical questions. They determine what AI can legally and practically use.
Dimension 3: Technology Infrastructure (Items 11–15)
11. Can your current systems integrate with external APIs and services?
AI agents and applications rarely operate in isolation. They pull data from one system, process it, and push results to another. If your tech stack is a collection of closed systems, integration becomes the bottleneck.
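The pull-process-push pattern is worth seeing in miniature. A sketch of the shape only, with both endpoints hypothetical and a stand-in for the AI step; a real integration adds auth, retries, paging, and error handling:

```python
import requests

# Hypothetical endpoints; real systems will have their own APIs and auth.
SOURCE = "https://tickets.example.invalid/api/open"
TARGET = "https://workflow.example.invalid/api/tasks"

# 1. Pull: fetch open tickets from the source system.
tickets = requests.get(SOURCE, timeout=10).json()

# 2. Process: a placeholder for whatever the AI step produces.
tasks = [
    {
        "ticket_id": t["id"],
        "priority": "high" if "urgent" in t.get("subject", "").lower() else "normal",
    }
    for t in tickets
]

# 3. Push: write results into the target system.
for task in tasks:
    requests.post(TARGET, json=task, timeout=10).raise_for_status()
```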
12. Do you have cloud infrastructure or a plan to adopt it?
On-premise infrastructure can work, but cloud environments offer the flexibility and scaling that most AI workloads need. At minimum, you need a path to running AI services somewhere beyond a single server under someone’s desk.
13. Is your cybersecurity posture documented and current?
AI systems process sensitive business data. Before connecting them to your operations, you need to know your security baseline. An undocumented security posture creates risk you can’t quantify.
14. Do you have IT staff or partners capable of managing AI integrations?
Someone needs to maintain the connection between AI systems and your business tools. This doesn’t require a data science team. It requires competent technical support that understands APIs, authentication, and monitoring.
15. Have you evaluated whether your current tech stack can support AI workloads?
Some older systems don’t expose the data or integration points AI needs. Knowing this upfront prevents the mid-project discovery that your ERP from 2008 can’t talk to anything built after 2015.
Dimension 4: People and Skills (Items 16–20)
16. Does your team have basic AI literacy?
They don’t need to build models. They need to understand what AI can and can’t do, how to evaluate AI outputs, and when to trust or override automated decisions. Executive AI training fills this gap quickly when it’s focused on practical application rather than theory.
17. Do you have internal champions willing to pilot AI in their workflows?
The first AI project needs people who are genuinely interested in making it work, not people who were assigned to it. Champions find workarounds, provide better feedback, and pull the rest of the team forward.
18. Have you assessed which roles will be augmented versus automated?
This isn’t about replacing people. It’s about knowing which tasks in each role are repetitive enough to automate, freeing people for the judgment-heavy work AI can’t do. Clarity here reduces the anxiety that slows adoption.
19. Is there a training plan for employees who will work alongside AI?
People working with AI outputs need to know how to review them, when to escalate, and how to provide feedback that improves the system. A training plan doesn’t need to be elaborate, but it does need to exist.
20. Have you addressed employee concerns about AI and job impact?
Unaddressed anxiety becomes passive resistance. Teams that understand how AI changes their work (and doesn’t eliminate it) adopt faster and provide better feedback during pilots.
Dimension 5: Processes and Workflows (Items 21–25)
21. Are your core business processes documented?
If the way work gets done lives entirely in tribal knowledge, you can’t automate it. Documentation doesn’t mean binders full of flowcharts. It means someone can explain the steps, decision points, and exceptions clearly enough to design a system around them.
22. Have you identified which processes are most repetitive and rule-based?
These are your best AI candidates. High-volume, rule-based processes with clear inputs and outputs respond well to automation. Creative, exception-heavy processes are harder and should come later.
23. Do you measure process performance with KPIs today?
If you don’t measure it now, you can’t prove AI improved it. Baseline metrics make the difference between “this feels faster” and “processing time dropped 40%.”
24. Can you test changes to processes without disrupting production?
AI pilots need room to run alongside existing workflows. If your operation has zero tolerance for parallel processes or testing, the pilot environment becomes an obstacle.
25. Do you have a track record with technology rollouts?
Past change management experience, good or bad, tells you a lot about how AI adoption will go. Organizations that have successfully rolled out new systems before have built the muscle for this. Organizations that haven’t should factor in the learning curve.
Dimension 6: Governance and Risk (Items 26–30)
26. Do you have policies for AI decision-making accountability?
When AI makes a recommendation or takes an action, who is responsible for the outcome? This question matters more as AI moves from generating reports to making operational decisions.
27. Have you identified regulatory requirements affecting AI in your industry?
Healthcare, financial services, legal, and other regulated industries have specific constraints on AI use. Knowing these upfront shapes what you can build and how you can deploy it.
28. Do you have an approach to AI ethics?
Bias, fairness, and transparency aren’t abstract concerns. They show up in hiring algorithms that discriminate, pricing models that are unexplainable, and customer-facing AI that gives inconsistent answers based on who’s asking.
29. Have you assessed vendor and platform risk?
Lock-in, data portability, and what happens if your AI vendor goes away or changes pricing are practical questions. Building on a platform you can’t leave creates a dependency that compounds over time.
30. Do you have a plan for monitoring AI outputs and correcting errors?
AI systems drift. Data changes, edge cases appear, and outputs degrade. A monitoring plan, even a simple one, is how you catch problems before they become expensive. This is one of the key reasons AI pilots fail: they launch without anyone watching the outputs.
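A monitoring plan can start as a scheduled script that checks recent outputs against an agreed threshold. A minimal sketch, assuming you log each AI output with a human-review verdict; the log format and the 0.90 threshold are hypothetical:

```python
import json

# Hypothetical log: one JSON record per AI output, with a reviewer verdict.
# Example line: {"output_id": "abc123", "accepted": true}
with open("ai_output_log.jsonl") as f:
    records = [json.loads(line) for line in f]

recent = records[-200:]  # look at the most recent outputs
acceptance = sum(r["accepted"] for r in recent) / len(recent)

# Alert when quality drops below an agreed floor.
if acceptance < 0.90:
    print(f"ALERT: acceptance rate {acceptance:.1%} below threshold; review outputs")
else:
    print(f"OK: acceptance rate {acceptance:.1%}")
```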


Scoring Your Results
Add up your scores across all 30 items. Your total maps to a stage in Fountain City’s 5-Stage AI Readiness Model:
| Score | Stage | What to Do Next |
|---|---|---|
| 0–12 | Stage 1: Awareness | Start with education. Build AI literacy across leadership and key teams. Identify 1-2 business problems worth solving and document the data you already have. Don’t buy anything yet. |
| 13–24 | Stage 2: Exploration | Identify your strongest pilot use case. Fix the data gaps your checklist uncovered. Assign an executive sponsor with budget authority. A focused pilot with clear success metrics is the right next step. |
| 25–36 | Stage 3: Planning | Build an implementation roadmap. Secure budget. Prioritize which AI projects to start based on business impact and feasibility. You’re ready for a department-level initiative. |
| 37–48 | Stage 4: Implementation | Launch pilots with defined KPIs. Measure results against baselines. Iterate based on real performance data. Focus on one or two high-impact use cases before expanding. |
| 49–60 | Stage 5: Scaling | Expand successful AI across additional departments and use cases. Build internal centers of excellence. Your organization has the infrastructure, skills, and governance to run AI at scale. |


The ranges aren’t pass/fail. A score of 14 doesn’t mean you’re behind; it means you have specific items to address before a larger initiative. The checklist tells you where to invest effort, not whether to start.
Common Gaps We See
Three gaps show up in almost every mid-market AI readiness assessment we do.
Data Readiness Is Almost Always the Bottleneck
The data usually exists, spread across the CRM, the ERP, the accounting system, and someone’s spreadsheet on a shared drive. The problem is that it’s undocumented and inconsistent across systems. A customer record in HubSpot doesn’t match the customer record in QuickBooks, and neither matches the spreadsheet the operations team maintains manually.
Fixing this doesn’t require a multi-year data warehouse project. It often starts with documenting what data lives where, identifying the 2-3 data sources your first AI use case needs, and cleaning those specifically. Solve the data problem for one use case, not for the whole company at once.
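Even the reconciliation step can start small. A minimal sketch that joins two hypothetical exports on a normalized email address and lists the records that exist in only one system:

```python
import pandas as pd

# Hypothetical exports from two systems that should agree on customers.
crm = pd.read_csv("hubspot_customers.csv")       # columns include: email, company
books = pd.read_csv("quickbooks_customers.csv")  # columns include: email, company

# Normalize the join key; whitespace and case cause silent mismatches.
for df in (crm, books):
    df["email"] = df["email"].str.strip().str.lower()

merged = crm.merge(books, on="email", how="outer",
                   suffixes=("_crm", "_books"), indicator=True)

# Records present in only one system are your first cleanup list.
print(merged["_merge"].value_counts())
print(merged[merged["_merge"] != "both"][["email", "_merge"]].head(20))
```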
AI Literacy Gaps Create Silent Resistance
When leadership gets excited about AI but the team doesn’t understand what it actually does, you get a specific kind of failure. Nobody openly opposes the project. They just don’t engage with it. Outputs go unreviewed. Feedback doesn’t happen. The pilot runs for three months and produces results that nobody looks at.
Practical AI training focused on what the technology can and can’t do, with real examples from the team’s own workflows, closes this gap faster than any amount of top-down messaging.
Change Management Gets Skipped
Companies that have successfully rolled out new technology before (ERP systems, CRM platforms, new tooling) understand that adoption is its own workstream. Companies doing their first major technology change often underestimate this. They build the AI system, deploy it, and wonder why usage is low.
A lightweight change management plan covers three things: who is affected, how their daily work changes, and what support they’ll get during the transition. The plan doesn’t need to be complicated, but it does need to exist.
FAQ
How long does an AI readiness assessment take?
Self-assessment with this checklist takes 1-2 hours with a cross-functional team. A formal readiness assessment with a consultant typically runs 2-4 weeks, depending on the organization’s size and the number of use cases being evaluated.
What’s the minimum readiness score to start an AI project?
Stage 2 (score 13 or higher) is enough for a focused pilot. Stage 3 (25 or higher) supports a department-level initiative. Don’t wait for a perfect score. Readiness improves through doing, and a well-scoped pilot will strengthen the dimensions where you scored low.
Should we do the checklist as IT or as a cross-functional team?
Always cross-functional. AI readiness is a business question, not a technology question. Include operations, finance, and the team that will use AI daily. IT might score data infrastructure highly while operations knows the data quality is unreliable. Both perspectives need to be in the room.
How often should we reassess readiness?
Quarterly during active AI implementation. Annually once AI systems are operational and stable. Your scores will change as you build capabilities, and reassessment helps you identify which dimensions need attention as you scale.
What if we score low? Should we wait to start AI?
No. Low scores identify what to fix, not reasons to delay. Start with a pilot that plays to your strengths (the dimensions where you scored highest) while building capacity in weaker areas. Organizations that wait for perfect readiness often find themselves waiting indefinitely while competitors move forward.
From Checklist to Action
This checklist gives you a snapshot: a score, a stage, and a set of dimensions ranked by how much work they need. The value is in what you do with that information.
If you scored in Stages 1-2, focus on the basics: define your use cases, document your data, and build AI literacy. These are the foundations that make everything else possible.
If you scored in Stages 3-5, the work shifts to execution. Prioritize your use cases by business impact, launch a focused pilot, measure what matters, and iterate.
Wherever you land, the checklist should surface specific gaps, not vague concerns. Each 0 or 1 you scored is a concrete item you can address. That specificity is the difference between “we need to get ready for AI” and actually getting ready.