AI Strategy Starts With a Question Nobody Has a Good Framework For

Data Strategy, Artificial Intelligence, Data Management, AI, Strategy, Digital Transformation

A large CPG company came to us not long ago with a straightforward ask: help us build an AI strategy. They wanted to understand what their data organization needed to look like in order to work with AI -- not just use AI tools, but actually operate with AI as a core capability.

It sounded familiar. I'd done data strategy work for years. So I went to the standard playbooks.

DAMA-DMBOK is the bible of data management. Solid foundations, comprehensive coverage of governance, quality, metadata, and stewardship. But it was designed to answer a different question: how do you manage enterprise data well? Not: are you structurally ready to operate with AI?

TOGAF gives you enterprise architecture discipline. Useful for structuring technology decisions, but too abstract to tell a data leader what specific capabilities they need to build.

Data Mesh gives you a philosophy -- domain ownership, data as a product, federated governance. Important ideas, but not an assessment tool.

None of them, individually or combined, gave me a clean answer to what my client was actually asking: across our entire data organization, what do we have, what are we missing, and what do we need to build to get to AI?

So I built something.

The real problem with most AI readiness assessments

When organizations try to answer the AI readiness question, they usually look in the wrong places. They audit their tech stack. They assess their data science team. They benchmark their cloud infrastructure.

Those things matter. But they're symptoms of a deeper question that rarely gets asked with enough rigor: do you have the organizational and technical capabilities -- across the full data function -- to support AI at scale?

In my experience, the answer is almost always: partially. Organizations are strong in two or three areas and largely blind to the rest. And the gaps that actually kill AI programs are rarely in the AI tooling. They're in data quality. In master data management. In governance structures that were never designed to handle AI model inputs and outputs. In integration layers that can't move data at the speed AI requires.

You can't see those gaps without a comprehensive capability map.

A framework built for this question

Over time, working across multiple engagements, I developed a capability framework specifically designed to answer the AI readiness question. It draws on DAMA, TOGAF, Data Mesh, MLOps, NIST, and cloud-native architecture principles -- but organizes them around a single purpose: understanding what a data organization needs to be capable of in order to operate with AI.

The framework covers 10 capability domains. More importantly, it organizes them into four architectural layers that reflect how data actually flows through an organization.

(See framework visual below)

The Foundation Layer is where most organizations underinvest. Data Strategy & Governance, Architecture & Design, Cloud & Hybrid Architecture, and Security & Privacy. These aren't glamorous. But without them, everything built above is unstable. AI programs that fail at scale almost always trace back to gaps here.

The Ingestion & Processing Layer covers how data moves and gets prepared: Integration & Interoperability, Engineering & Operations, and AI/Automation & DataOps. This is where the operational plumbing lives. It's also where AI-specific requirements -- vector databases, embedding pipelines, real-time inference feeds -- start to stress traditional architectures.

The Management & Storage Layer is about integrity: Data Storage & Management and Data Quality & MDM. This is the layer that determines whether your AI models are working with data that can be trusted. In practice, this is where most organizations have the largest gap between what they think their maturity is and what it actually is.

The Consumption Layer -- Data Consumption & Analytics -- is where business value gets delivered. Dashboards, predictive models, GenAI applications, self-service analytics. This is usually where investment is highest and visibility is clearest. It's also the layer that breaks first when the layers below it are weak.
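The four layers and their 10 domains can be captured in a simple structure. This is an illustrative sketch, not part of the framework's delivery toolkit; the layer and domain names follow the article, while the dictionary representation is just one convenient way to hold them:

```python
# The 10 capability domains, organized into the four architectural layers
# described above. Layer and domain names follow the article; the dict
# structure itself is only one convenient representation.
CAPABILITY_FRAMEWORK = {
    "Foundation": [
        "Data Strategy & Governance",
        "Architecture & Design",
        "Cloud & Hybrid Architecture",
        "Security & Privacy",
    ],
    "Ingestion & Processing": [
        "Integration & Interoperability",
        "Engineering & Operations",
        "AI/Automation & DataOps",
    ],
    "Management & Storage": [
        "Data Storage & Management",
        "Data Quality & MDM",
    ],
    "Consumption": [
        "Data Consumption & Analytics",
    ],
}

# Sanity check: four layers, ten domains in total
assert len(CAPABILITY_FRAMEWORK) == 4
assert sum(len(domains) for domains in CAPABILITY_FRAMEWORK.values()) == 10
```

Notice the inverted pyramid: four foundation domains supporting a single consumption domain, which mirrors where the investment imbalance usually sits.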

How I apply it

When I bring this framework into an engagement, the methodology is straightforward.

Start with value. What is the organization actually trying to achieve with AI? The capability assessment only means something relative to a specific business ambition. A company trying to automate supply chain decisions has different readiness requirements than one trying to build a customer-facing GenAI product.

Then do an honest current-state assessment. Map existing capabilities against the 10 domains. Not a technology audit -- a capability audit. What can this organization actually do, reliably, at scale?

From there, design the future state based on where they need to go. Identify the gaps. Prioritize them based on what's blocking value, not what's easiest to fix. Build a roadmap.

The framework makes that current state assessment rigorous instead of anecdotal. Without a structured capability map, these assessments tend to reflect whoever is loudest in the room.
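The prioritization step above can be sketched in a few lines. This is a hypothetical simplification, not the engagement methodology itself: each domain gets a current and target maturity score (1-5 here) plus a weight for how much business value the gap blocks, and gaps are ranked by value blocked rather than by size or ease of fixing. All scores and weights below are invented for illustration:

```python
# Hypothetical gap-prioritization sketch. Each domain maps to
# (current_maturity, target_maturity, value_weight); the value weight
# encodes how strongly a gap in that domain blocks the AI ambition.
def prioritize_gaps(assessment):
    """Return (domain, gap, value_blocked) tuples, most value-blocking first."""
    gaps = []
    for domain, (current, target, weight) in assessment.items():
        gap = max(target - current, 0)
        gaps.append((domain, gap, gap * weight))
    # Rank by value blocked, not by raw gap size or ease of closure
    return sorted(gaps, key=lambda g: g[2], reverse=True)

example = {
    "Data Quality & MDM": (2, 4, 3.0),           # big gap, high value impact
    "Security & Privacy": (3, 4, 2.0),
    "Data Consumption & Analytics": (4, 5, 1.0),  # visible, but blocks little
}
ranked = prioritize_gaps(example)
```

In this toy example the quiet MDM gap outranks the highly visible analytics gap -- which is exactly the reversal a structured assessment is meant to force.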

What you usually find

A few patterns show up consistently across engagements.

Foundation gaps are almost universal. Governance frameworks designed for reporting and compliance, not for AI. Data architecture decisions made five years ago that weren't built to support real-time inference or large-scale model training.

MDM and data quality are the silent killers. Organizations invest in AI tooling and then discover their customer data has a 30% duplication rate, or that product hierarchies are inconsistent across business units. The model is only as good as what you feed it.
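A duplication rate like that is cheap to measure before any AI investment. The sketch below shows the shape of the check under invented field names and records -- normalize a couple of identifying fields and count how many records collapse into the same key; real matching uses fuzzier logic, but even this crude version surfaces the problem:

```python
# Crude duplication-rate check: normalize identifying fields and measure
# how many records collapse into the same key. Field names and records
# are invented; real MDM matching is far fuzzier than this.
def duplication_rate(records):
    def normalize(r):
        return (r["email"].strip().lower(), r["name"].strip().lower())
    unique = {normalize(r) for r in records}
    return 1 - len(unique) / len(records)

customers = [
    {"name": "Ada Lovelace", "email": "ada@example.com"},
    {"name": "ada lovelace", "email": "ADA@example.com "},  # same person
    {"name": "Alan Turing",  "email": "alan@example.com"},
]
rate = duplication_rate(customers)  # one of three records is a duplicate
```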

AI and machine learning work is already happening -- but it's living on individual laptops and in team-specific tools. Data scientists are running models, building pipelines, experimenting with algorithms. The capability exists in pockets. What doesn't exist is the enterprise infrastructure to scale it: no feature stores, no model registries, no governance around what models are running in production or what data they're consuming. The work is real. The foundation underneath it isn't.

And almost everyone overestimates their consumption layer maturity. Having Tableau or Power BI deployed is not the same as having a functioning analytics capability. Deploying a ChatGPT wrapper is not the same as having an enterprise AI capability.

An open question, not a final answer

I want to be clear about what I'm claiming here and what I'm not.

I'm not saying this framework is complete or that it's the only way to think about this problem. Data management is too context-dependent for any single framework to be universal. What works for a CPG company with a mature data warehouse looks different from what works for a financial services firm rebuilding from scratch.

What I am saying is that the conversation needs to happen at this level of specificity. "Do you have a data strategy?" is not the right question anymore. "Are your 10 core data capability domains mature enough to support the AI outcomes you're trying to achieve?" is closer.

If you're doing this kind of work -- assessing data organizations for AI readiness, building capability frameworks, figuring out what good looks like in this new environment -- I'd genuinely like to compare notes. This is a space where the field is still being defined, and I think we build better answers together than separately.
