
Every large enterprise I have worked with over the last two years is building AI. The pace is real. The investment is real. The urgency is real.
What is not real, in most cases, is a shared direction.
What I see instead is fragmentation. Business units moving fast, each with their own data platform, their own agent frameworks, their own definitions of what a good AI solution looks like. Nobody made a bad decision. Nobody coordinated. And now the organization has dozens of agents that cannot talk to each other, pipelines that duplicate work already done two floors away, and capabilities that live and die inside the team that built them.
That is not an AI problem. It is a governance problem.
The Pattern Repeats
I have seen this in organizations of different sizes and industries, but it is most visible at scale. Large enterprises, especially those that grew through acquisition, tend to inherit a culture of segment autonomy. Every division has its own technology leadership, its own budget, its own priorities. That independence works well for many things. It does not work well for building AI infrastructure.
When each segment builds independently, you get duplication. The same agent gets built three times by three teams who did not know about each other. The same data gets ingested into three platforms with three different schemas. When someone eventually asks, "Can we share this across the organization?" the answer is almost always no, because the decisions that would have made sharing possible were never made.
The problem compounds with agentic AI. Agents are not static reports. They act. They call tools. They call other agents. They read from data sources and write back to systems. When those agents are built without a shared protocol layer, they become islands. An agent built on one framework cannot hand off work to an agent built on another. A tool built for one agent cannot be reused by the next team that needs the same capability. The organization rebuilds the same thing over and over, and every rebuild is slightly different, which makes the next integration harder.
What Interoperability Actually Requires
Interoperability is not a technology choice. It is an organizational commitment expressed through technology choices.
At minimum, that commitment has to show up in four places.
How agents talk to data. The Model Context Protocol (MCP) is an emerging standard for connecting agents to tools and data sources through a consistent interface. The practical value is modularity. When an agent connects to a data platform through an MCP server rather than a direct integration, the agent is decoupled from the platform. You can swap the data platform, update the tool, or reuse the agent in a different context without rebuilding the connection. If you hardwire agents to platforms instead, you end up with agents that are brittle, platform-specific, and impossible to reuse. The cost of that becomes clear when the second team tries to do the same thing and discovers they have to start over.
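To make that concrete, here is a minimal sketch of the pattern using the official MCP Python SDK. The server name, the tool, and the in-memory stand-in for the data platform are all hypothetical; the point is that the agent sees a stable tool interface while whatever sits behind it can change.

```python
# Minimal sketch: an MCP server that fronts a data platform so agents
# connect through a stable tool interface instead of a direct integration.
# Assumes the official MCP Python SDK ("pip install mcp"); the in-memory
# ORDERS list is a hypothetical stand-in for the real platform.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-data")

# Stand-in for the data platform; swap this out (warehouse, lakehouse, API)
# without changing the tool contract the agent builds against.
ORDERS = [
    {"customer_id": "c-42", "order_id": "o-1", "total": 129.00},
    {"customer_id": "c-42", "order_id": "o-2", "total": 87.50},
]

@mcp.tool()
def recent_orders(customer_id: str, limit: int = 10) -> list[dict]:
    """Return recent orders for a customer via the MCP tool interface."""
    matches = [o for o in ORDERS if o["customer_id"] == customer_id]
    return matches[:limit]

if __name__ == "__main__":
    mcp.run()  # serves the tool over MCP (stdio transport by default)
```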
How agents talk to each other. Agent-to-agent (A2A) communication is the other half of this. In a multi-agent architecture, agents delegate to other agents. An orchestrating agent breaks down a task and routes subtasks to specialized agents. That routing only works if the agents share a protocol. A2A standards define how agents describe their capabilities, how they pass context, and how they hand off work. Without this, multi-agent systems either collapse into monoliths or require custom integration code for every pair of agents that need to cooperate. Neither scales.
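A rough sketch of what that looks like in practice is below. The field names follow the general shape of an A2A-style agent card but are simplified for illustration, and the endpoints are made up; the point is that the orchestrator routes by declared skill, not by hard-coded integration.

```python
# Illustrative sketch of A2A-style capability discovery: each agent
# publishes a card describing what it can do, and an orchestrating agent
# routes subtasks by matching against those declarations. Field names are
# simplified, not the exact spec; URLs are hypothetical.
AGENT_CARDS = [
    {
        "name": "invoice-extractor",
        "url": "https://agents.example.com/invoice-extractor",
        "version": "1.0.0",
        "skills": [
            {"id": "extract-line-items",
             "description": "Parse line items from invoice PDFs"},
        ],
    },
    {
        "name": "payment-reconciler",
        "url": "https://agents.example.com/payment-reconciler",
        "version": "0.4.2",
        "skills": [
            {"id": "match-payments",
             "description": "Match payments to open invoices"},
        ],
    },
]

def route(skill_id: str) -> dict | None:
    """Pick the first agent that advertises the requested skill."""
    for card in AGENT_CARDS:
        if any(skill["id"] == skill_id for skill in card["skills"]):
            return card
    return None

# The orchestrator delegates by skill, not by a custom integration per pair.
target = route("extract-line-items")
print(target["name"] if target else "no agent advertises this skill")
```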
How data flows between platforms. Data contracts define the agreement between whoever produces data and whoever consumes it. The producer commits to a schema, a format, a quality standard. The consumer builds against that commitment. In practice, this often means adopting a table format standard like Apache Iceberg, or enforcing contracts through a catalog layer like Unity Catalog in Databricks, so that data can be accessed across platforms without duplication. The goal is that a team can build on their platform of choice without re-ingesting data that already exists somewhere else, as long as they follow the contract.
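A minimal sketch of a producer-side contract check follows, with hypothetical table and column names. In a real setup this lives in the publishing pipeline or the catalog layer rather than a standalone script, but the shape of the commitment is the same: the producer declares what it publishes, and nothing ships that breaks the declaration.

```python
# Minimal sketch of a data contract enforced before publishing: the schema
# and nullability rules are the producer's commitment; consumers build
# against them. Table and column names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DataContract:
    table: str
    schema: dict[str, str]               # column name -> logical type
    non_nullable: set[str] = field(default_factory=set)

ORDERS_CONTRACT = DataContract(
    table="sales.orders",
    schema={"order_id": "string", "customer_id": "string",
            "total": "double", "created_at": "timestamp"},
    non_nullable={"order_id", "customer_id"},
)

def validate(batch: list[dict], contract: DataContract) -> list[str]:
    """Return a list of contract violations for a batch of records."""
    errors = []
    for i, row in enumerate(batch):
        missing = set(contract.schema) - set(row)
        if missing:
            errors.append(f"row {i}: missing columns {sorted(missing)}")
        for col in contract.non_nullable:
            if row.get(col) is None:
                errors.append(f"row {i}: {col} must not be null")
    return errors

violations = validate(
    [{"order_id": "o-1", "customer_id": None,
      "total": 10.0, "created_at": "2024-01-01"}],
    ORDERS_CONTRACT,
)
print(violations)  # publish only if this list is empty
```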
How agents understand what data means. Protocols and contracts govern how data moves. They do not govern what data means. Each business unit owns its ontology: its terms, its entities, its definitions, its process logic. That ownership is appropriate and should stay at the business unit level. What cannot stay at the business unit level is the format in which that meaning is captured. An agent that reads a semantic layer or data dictionary to reason over data needs to find that layer in a predictable, consistent structure regardless of which business unit's data it is working with. If every unit documents its ontology differently, agents become brittle at domain boundaries. The same agent that works fluently in one segment has to be rebuilt, or at minimum retrained, to work in the next. Standardizing the structure of the semantic layer, while leaving the content to each business unit, is what makes agents portable across the enterprise.
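Here is a hypothetical illustration of what that standardized structure can look like. The terms, definitions, and table mappings belong to each business unit; the shape is shared, so the same lookup code works at every domain boundary.

```python
# Hypothetical sketch of a standardized semantic-layer structure: every
# business unit documents its own terms and mappings, but in the same
# shape, so one agent can resolve meaning across units without being
# rebuilt. Entity names, terms, and tables below are illustrative.
SEMANTIC_LAYERS = {
    "retail": {
        "entities": {
            "customer": {
                "definition": "A person or business with at least one completed order",
                "table": "retail.customers",
                "terms": {"churned": "no order in the trailing 12 months"},
            },
        },
    },
    "insurance": {
        "entities": {
            "customer": {
                "definition": "A policyholder with an active or lapsed policy",
                "table": "ins.policyholders",
                "terms": {"churned": "policy lapsed and not renewed within 90 days"},
            },
        },
    },
}

def lookup(unit: str, entity: str, term: str) -> str | None:
    """Resolve a business term the same way for any unit that follows the structure."""
    layer = SEMANTIC_LAYERS.get(unit, {})
    return layer.get("entities", {}).get(entity, {}).get("terms", {}).get(term)

# The same agent code works at both domain boundaries.
print(lookup("retail", "customer", "churned"))
print(lookup("insurance", "customer", "churned"))
```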
None of these layers alone is sufficient. Together, they form a foundation worth building on.
One Platform Is the Cleanest Answer
The cleanest solution to fragmentation is consolidation. One platform, shared infrastructure, a single stack that every segment builds on. When an organization can get there, it eliminates an entire category of interoperability problems. There is no need to negotiate data contracts between platforms that do not exist. There is no need to route agents across incompatible environments.
The challenge is execution, not logic. Large federated enterprises, especially those that grew through acquisition, carry deep segment autonomy. Multiple technology leaders, separate budgets, years of independent investment. Full consolidation is the right destination. Getting there takes time, and in the interim, the organization still has to build.
That is where the standards layer becomes critical. Whether the path leads to one platform or runs through a transition period with multiple platforms, the protocol decisions have to be made now. MCP as the interface layer between agents and tools. A2A as the standard for agent-to-agent communication. Data contracts that govern how data is published and consumed regardless of where it lives. A shared capability library that captures what gets built so it does not get rebuilt.
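The capability library on that list does not need heavyweight tooling to start. A sketch of the idea, with illustrative fields rather than any specific product's API:

```python
# Hypothetical sketch of a shared capability library: a lightweight registry
# of what has already been built, so the next team searches before it
# rebuilds. The fields and helpers are illustrative.
from dataclasses import dataclass

@dataclass
class Capability:
    name: str
    kind: str          # "agent", "mcp-tool", "pipeline", ...
    owner: str
    protocol: str      # "MCP", "A2A", ...
    description: str

REGISTRY: list[Capability] = []

def register(cap: Capability) -> None:
    REGISTRY.append(cap)

def search(keyword: str) -> list[Capability]:
    """Find existing capabilities before building a new one."""
    kw = keyword.lower()
    return [c for c in REGISTRY
            if kw in c.name.lower() or kw in c.description.lower()]

register(Capability(
    name="invoice-extractor",
    kind="agent",
    owner="finance-platform",
    protocol="A2A",
    description="Parses line items from invoice PDFs",
))
print([c.name for c in search("invoice")])
```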
These standards do not compete with the consolidation goal. They accelerate it. An organization that builds to shared protocols today arrives at a unified platform with far less migration debt than one that lets every segment do its own thing. The floor you define now is the foundation you build on later.
Governance Is the Decision, Not the Paperwork
When I say governance here, I do not mean a data catalog or a policy document. I mean the organizational act of making decisions with purpose and then holding to them.
Most of the fragmentation I described earlier did not happen because nobody knew better. It happened because the decision was never made. Teams moved fast because they had to. Nobody stopped to ask, "How will this connect to what the next team builds?" The question felt premature. It was not.
The governance conversation in enterprise AI is not about slowing things down. It is about making one hard decision early so you do not have to make a hundred harder decisions later. Should we adopt MCP as our standard interface layer? Should we require A2A-compatible agent design? Should we mandate Iceberg as our table format? Should we standardize how business units document their semantic layer? These decisions feel abstract when you are standing up your first agent. They feel obvious when you are trying to connect your fortieth.
The organizations that will get this right are not necessarily the ones moving fastest today. They are the ones that stop long enough to align on the standards that will let them move fast at scale, and then build with that alignment in place.
What Needs to Happen
Segment flexibility is not the problem. It is appropriate for the complexity of large enterprises. The problem is flexibility without a floor.
The floor looks like this: shared protocol standards for agent-to-tool communication (MCP), shared standards for agent-to-agent communication (A2A), data contract standards that govern how data is published and consumed across platforms, and a standardized structure for the semantic layer that lets agents reason over any business unit's data without being rebuilt. A maintained library of reusable agents and tools. And a governance process that decides what belongs on the floor and enforces it, not through bureaucracy, but through architectural standards that teams build against from the start.
This is not the most exciting part of an AI strategy. It does not make for a good demo. But it is the part that determines whether the work done today compounds into something the organization can actually use at scale, or whether it compounds into technical debt that eventually has to be unwound.
The architecture does not have to be rigid. It just has to be designed.