
Data Consumption at AI Speed: Why the Enterprise Operating Model Must Be Re-Engineered

AI systems don't wait for approval. They stream data continuously, make decisions at machine speed, and operate around the clock, but most enterprises still govern access at human pace. The result: bottlenecks, shadow access, and AI investments that underdeliver. The operating model itself has to change.

Authors
Ethyca Team
Topic
Privacy Practice
Published
Feb 25, 2026
Introduction

An employee swipes their corporate card. They need reimbursement for lunch with a potential client (swipe), payment to a new vendor they just finished evaluating (swipe), and funds for their team’s airfare to an industry conference (swipe).

A few years ago, each swipe would have gotten tangled in a web of approvals, and a different person would have been responsible for signing off on every purchase. Now, with AI, every swipe can be streamlined.

Well, not so fast. Even today, AI systems can’t always cut through the red tape. A major improvement becomes a minor improvement: The employee can sometimes delegate approval to the AI system, but a human on the other end still slows down the process.

It’s like we’ve built self-driving cars but deployed them on roadways built for traditional vehicles. Despite the advanced tech, the self-driving cars are still getting held up in the same traffic patterns as human drivers.

Enterprises, however, don’t have to tear up tar and pavement to rethink how approval workflows work. The core problem is that enterprises architected their data organizations for sequential, human-paced access – one request, one dataset, and one review cycle at a time.

AI systems just don’t work that way, and traditional approval workflows inevitably break down. When agents, applications, and analytics pull data continuously across domains, regions, and systems, conventional approval gates become bottlenecks.

The momentum that AI builds up is halted, the self-driving car gets stuck in traffic, and the employee can’t take the flight they needed. Or, worse: Shadow access and compliance violations become commonplace as employees do what they need to do, regardless of approvals. The determined employees get access anyway. The rule-following employees get stuck in gridlock.

This isn't solved by adding more pipelines or hiring more reviewers. The operating model itself must change.

Before AI

Yesterday's Data Consumption Was Human-Paced

The old model of data consumption was not a bad idea or a bad design. Just as we don’t look back on server racks as a mistake now that we have the cloud, we don’t consider human-paced data consumption models a mistake either.

We only had humans then, so why would we do anything else? That question, however, gets at the new reality: We have more than humans now, so why would we stick with what we did before?

The Old Model

The old model was a simple cycle: Requests for data were sent to reviewers, who reviewed the requests and determined whether to provision the data or grant access according to a set of pre-determined policies. At a small startup, this might look like pinging the one IT person on Slack. At a large enterprise, this might mean submitting a ticket to a portal that takes weeks to return a result.

Every quarter, teams might review access policies. Are developers frequently asking for AWS credentials? Maybe we should be less stringent with those credentials. Have data breaches risen in our industry recently? Maybe we should be more stringent with our data sources.

All the while, the fundamental form remained the same: the approval chain was manual, and humans worked as gatekeepers at multiple points along the chain.

AI's Reality

The old model was built to human scale, but we no longer operate at human scale.

Agentic workflows, for example, operate 24/7. AI agents don’t need to eat, drink, or sleep – why would we limit them to human working hours? It sounds silly on its face, but that’s exactly what we do if we run AI within the old data consumption model.

Think about streaming data, too. Many AI systems continuously stream data from multiple sources. If we want those systems to make inferences across domains, which is precisely what makes RAG so compelling, then those systems need to rely on always-open streams and instant access from domain to domain.

Machine-speed decisions can’t wait for meetings. If we maintain the old model, we’re merely entering a new era of “hurry up and wait.”

Why Traditional Stage-Gates Implode

Approval latency, when gated by manual human review, is measured in days or weeks. Along the way, context is lost between the request and the provision. Employees can wait a week for approval only to get a message to the effect of, “What was this about again?”

And this is often the best case. Many controls assume stable, predictable access patterns, but access needs are often unpredictable.

The temptation is to avoid the issues the old model produces by speeding up the workflows or hiring more gatekeepers. It’s the same misconception that leads cities to widen highways, even though widening actually increases congestion through induced demand.

A new model needs a new foundation. The old governance model was designed for exception handling. The new model needs to be designed for continuous operation.

It’s like we’ve built self-driving cars but deployed them on roadways built for traditional vehicles. Despite the advanced tech, the self-driving cars are still getting held up in the same traffic patterns as human drivers.

Ethyca Team

After AI

The New Operating Model: Centralize Risk and Federate Delivery

The new operating model, one designed with AI and AI users first and foremost, centralizes risk rather than treating each risk as an exception, and federates delivery rather than gatekeeping access.

What the Research Shows

As of now, research indicates that AI usage and data consumption needs are increasing, and governance is one of the most effective ways to ensure you can actually capture value from AI.

  • In one McKinsey study, 78% of respondents say their organizations use AI in at least one business function, but “only 1 percent of company executives describe their gen AI rollouts as ‘mature.’”
  • Research from Stanford and Accenture shows that the top two risks associated with AI were privacy and data governance (51%) and cybersecurity (47%).
  • McKinsey research shows that “out of 25 attributes tested for organizations of all sizes, the redesign of workflows has the biggest effect on an organization’s ability to see EBIT (Earnings Before Interest and Taxes) impact from its use of gen AI.”

The need for change is clear: AI usage is increasing, but its implementations are not yet mature. Parallel to this growth, organizations remain wary of data governance risks, even as workflow redesign has been shown to have the greatest impact on realizing the potential of AI.

Think of it this way: Data consumption needs are rising, but if the governance bottleneck remains tight, then the value will be limited. It’s like forcing water through a kinked hose.

Successful AI organizations, in contrast, centralize their risk, compliance, and data governance functions, building a central control that grants access without delay. Simultaneously, these organizations federate talent and solution delivery to ensure that implementation is close to the business context.

Research from MIT indicates that the pressure to develop this new model is only intensifying. According to a study of C-suite executives:

  • 83% of executives state that their "organization has identified numerous sources of data that must be brought together to enable AI initiatives."
  • 45% report that the most challenging aspect of data integration is managing data volumes.
  • 41% report that the most challenging aspect is enabling real-time access.

These fundamental issues repeat across pipelines. Consistency of control and proximity of delivery are the necessary first principles of an AI-first operating model. Until organizations build with those principles in mind, bottlenecks will continue to emerge.

Why This Architecture Works

This architecture works for three core reasons:

  • Centralized risk: By pooling risk into a central function, organizations can ensure uniform policy enforcement regardless of where AI is deployed. Risk shifts from a constant problem, solved on an ad hoc basis, to a continuous process solved at scale.
  • Federated build: In contrast to risk, by federating development, organizations can ensure that implementation always occurs within the relevant business context and is targeted to on-the-ground use cases. The closer to the context, the higher fidelity the solution.
  • Clear accountability: The combination of centralized risk and federated build ensures that risk lives within centralized governance, while execution lives with domain teams. This is a righting of the scales – risk is an organization-level issue, and execution is a domain-level issue.

These three design principles have to work in concert. If an organization focuses on just one of the three, the new operating model can fail in ways similar to the previous ones.

Fully centralized delivery creates bottlenecks. Domain teams need to work in their specialized contexts; a distant, centralized team will inevitably be slower.

Fully federated risk creates compliance chaos. The domain teams can move fast, but compliance and security risks make wrangling that chaos a nightmare.

Splitting the difference in either direction just trades speed for safety. In the end, no one wins.

Eliminating the Data Access Bottleneck

Make Policy Executable at Runtime: ABAC and PBAC

This isn’t the first time data access has been identified as a bottleneck issue, nor the first time a solution has emerged. Role-based access control (RBAC) was formalized in 2000, and it was, at the time, a huge improvement over identity-based access control (IBAC).

RBAC streamlines access by tying approval to roles, rather than individual identities. This is better, but not enough. Role-based access is still too coarse-grained to capture the nuances behind many permissions requests, and RBAC is still bottlenecked by manual approvals, making it unsuitable for AI consumption. Furthermore, RBAC policies often become trapped in PDFs and lists, rather than being embedded in the systems that grant access.

That’s why we’re seeing the rise of Attribute-Based Access Control (ABAC) and Purpose-Based Access Control (PBAC).

ABAC and PBAC

ABAC evaluates user attributes, data attributes, geography, time, and context to determine permissions. This is much more nuanced than RBAC, which focuses solely on a user’s role within the organization. Instead of decisions sitting in approval queues gathering dust, decisions happen in the data path – no context loss and latency minimized.

ABAC also aligns with NIST standards for dynamic access control, which state that “dynamic access control approaches rely on runtime access control decisions facilitated by dynamic privilege management, such as attribute-based access control.”

PBAC enforces policies around why data is being accessed, not just who is accessing it. Like ABAC, it aligns with regulatory requirements, especially GDPR, which states that data must be “collected for specified, explicit and legitimate purposes” and “kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed.”

PBAC prevents scope creep in a way that ABAC cannot, and RBAC certainly cannot. The approval of data for one purpose does not mean it can be used for another. Data approved for fraud detection, for example, can’t be repurposed for marketing.
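As a rough sketch of how attribute-based and purpose-based checks combine at runtime, the snippet below evaluates a requester’s role, geography, the data’s classification, and the stated purpose in a single decision. All names, classifications, and purposes here are hypothetical, for illustration only, not a description of any particular product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessRequest:
    subject_role: str          # user attribute (ABAC)
    subject_region: str        # geography attribute (ABAC)
    data_classification: str   # data attribute (ABAC)
    purpose: str               # why the data is needed (PBAC)
    timestamp: datetime

# Hypothetical policy: EU personal data may only be accessed by certain
# roles, from inside the EU, for the purposes it was collected for
# (GDPR purpose limitation).
ALLOWED_ROLES = {"personal:eu": {"analyst", "fraud_engineer"}}
ALLOWED_PURPOSES = {"personal:eu": {"fraud_detection", "service_delivery"}}

def evaluate(req: AccessRequest) -> bool:
    """Allow only if every attribute AND the stated purpose pass."""
    key = req.data_classification
    if req.subject_role not in ALLOWED_ROLES.get(key, set()):
        return False  # ABAC: wrong role
    if key == "personal:eu" and req.subject_region != "EU":
        return False  # ABAC: wrong geography
    if req.purpose not in ALLOWED_PURPOSES.get(key, set()):
        return False  # PBAC: purpose not permitted for this data
    return True

# Data approved for fraud detection can't be repurposed for marketing:
fraud = AccessRequest("fraud_engineer", "EU", "personal:eu",
                      "fraud_detection", datetime.now(timezone.utc))
marketing = AccessRequest("fraud_engineer", "EU", "personal:eu",
                          "marketing", datetime.now(timezone.utc))
```

The same requester, the same dataset, the same moment in time: the first request passes, the second fails on purpose alone. That is the distinction RBAC can’t express.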

Runtime Enforcement

The through-line is runtime enforcement. In the old model, approval workflows were inevitably bottlenecks, and every request was examined in isolation.

In the new model, every request is evaluated in its context and at the moment of use. Policies shift from checklists that are manually scanned and checked to policies that execute as code. In the new approach, speed isn’t a matter of speeding up old processes; speed is the necessary result of a system that executes at runtime.

As a result, routine approvals disappear into automation. Standard requests following standard patterns don’t need human attention. Access is immediate. Humans step in only to handle exceptions, and iteration makes those exceptions rarer and rarer over time.
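A minimal sketch of that routing pattern, with hypothetical policy keys: requests matching a known pattern are decided at machine speed, and only unrecognized patterns land in a queue for human review.

```python
from collections import deque

# Exceptions awaiting a human decision; everything else is automated.
exception_queue: deque = deque()

# Hypothetical pre-approved decision patterns, expressed as code.
KNOWN_PATTERNS = {
    ("analyst", "sales_db", "reporting"): "allow",
    ("analyst", "hr_db", "reporting"): "deny",
}

def decide(role: str, dataset: str, purpose: str) -> str:
    """Decide instantly when the pattern is known; escalate otherwise."""
    decision = KNOWN_PATTERNS.get((role, dataset, purpose))
    if decision is not None:
        return decision               # machine-speed, no human involved
    exception_queue.append((role, dataset, purpose))
    return "escalated"                # only this reaches a reviewer
```

Over time, resolved exceptions can be promoted into `KNOWN_PATTERNS`, which is exactly how the exception rate falls with iteration.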

What Good Looks Like

Design for Regulated Sharing

Data consumption and sharing are fundamentally different in the AI era, and previous models and assumptions can’t be ported over and reused.

Despite the need for internal change, however, external rules remain in place and continue to evolve. That means we need a model for data sharing that’s flexible enough to support new levels of internal consumption and controllable enough to satisfy external requirements.

AI Consumption Crosses Boundaries

AI consumption crosses boundaries in ways we couldn’t have conceived of beforehand.

Internally, AI consumes data across business units, regions, and cloud environments. In fact, much of AI’s best potential lies in supporting these crossings. If you’re building a RAG feature to help employees answer business questions, for example, that feature only becomes dependable if it can actually find and analyze information across all domains.

Externally, AI consumes data across partners, suppliers, fintechs, and third-party AI vendors. The software supply chain was already dizzyingly complex before the advent of AI, and now, data consumption can extend in many more directions and to greater depths.

Again, a similar pattern applies: more data leads to better-informed AI features, but more data access can also introduce risk.

The Sharing Challenge

The risk emerges from data sharing. How do you share data in a world criss-crossed by complex, strict regulatory regimes?

At a high level, there are the GDPR and CCPA, two broad data protection regulations that dictate how data can be shared and used across broad regions of the world.

Step down a level and, depending on your industry, there can be even stricter rules. Healthcare and finance, for example, both have sector-specific requirements with high stakes for compliance.

Another level down lies individual sets of contractual obligations that can vary by counterparty. What does this particular client or partner require? What do the rest of them need? How can you juggle all those balls in the air while enabling data sharing that supports your AI systems? Every business and technical integration point multiplies the surface area for risk.

Next-Generation Data Sharing Requirements

The next generation of data sharing will require:

  • Consistent policy enforcement regardless of organizational boundaries.
  • Granular permissioning that travels with data, rather than blocking it.
  • Audit trails that span internal and external systems.
  • Mechanisms to revoke access dynamically as conditions change.

These new demands require a new operating model, one that prioritizes sharing as a goal, rather than limiting it by default or outright preventing it.

Technical controls will need to operate across federation boundaries, ensuring that federation provides autonomy without escaping central control. Similarly, legal and compliance functions will need to be integrated into data provisioning, rather than layered on top.

If data sharing is the goal and a fundamental principle of the operating model’s design, then the controls that govern data can’t be an afterthought. They need to be just as fundamental as sharing itself.
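One way to picture the four requirements above is a grant object that travels with a shared dataset: it carries its permitted purposes, logs every check to an audit trail that spans both parties, and can be revoked the moment conditions change. This is a hypothetical sketch, with illustrative names, not a real API.

```python
import time

class Grant:
    """A permission that travels with shared data and can be revoked."""

    def __init__(self, dataset: str, counterparty: str,
                 purposes: set, expires_at: float):
        self.dataset = dataset
        self.counterparty = counterparty
        self.purposes = purposes          # granular, purpose-scoped
        self.expires_at = expires_at
        self.revoked = False
        self.audit_log = []               # spans internal + external use

    def check(self, purpose: str) -> bool:
        """Evaluate the grant at the moment of use, and log the outcome."""
        ok = (not self.revoked
              and purpose in self.purposes
              and time.time() < self.expires_at)
        self.audit_log.append(
            (time.time(), f"{purpose}:{'allow' if ok else 'deny'}"))
        return ok

    def revoke(self) -> None:
        """Dynamic revocation: takes effect on the very next check."""
        self.revoked = True
```

Because the policy rides with the data rather than sitting in a contract PDF, enforcement stays consistent across organizational boundaries.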

If your approvals, policies, and lineage checks rely on people and PDFs, AI will outrun them. Every improvement on that level is illusory.

Ethyca Team

A New Way Forward

Blueprint: The AI-Speed Operating Model

The AI-speed operating model comprises four components: The control plane, which is centralized; the execution plane, which is federated; the observability layer, which is continuous; and the human role, which is “on-the-loop” but not in the loop. Each layer works in concert with the others, and their structures, whether they’re centralized, federated, or continuous, are critical to making the model come together as a whole.

1. Control Plane (Centralized)

The control plane is centralized, allowing organizations to build risk management and mitigation into a single function. The control plane includes a policy repository, which acts as a single source of truth for access, usage, and retention rules.

This centralization enables organizations to unify taxonomy and metadata requirements across domains and execute approvals as code. Approvals are immediate, and exception patterns are captured programmatically, instead of being lost in email threads and Slack conversations.

With the control plane, organizations can ensure policy enforcement – both that it is performed and how – no matter where AI is deployed (which is, increasingly, everywhere).

2. Execution Plane (Federated)

The execution plane is federated, allowing domain teams within organizations to ship governed data products with embedded controls. It includes automatically enforced policies that are evaluated and executed at the point of access.

This federation ensures that permissioning travels with data products, rather than permissioning and approvals becoming blockers that limit domain teams from working at their own pace. Developers can provision data within policy boundaries and without manual gates, allowing them to self-service without worrying about disrupting guardrails.

3. Observability Layer (Continuous)

The observability layer is continuous, ensuring that data sharing can be as continuous as needed to support the levels of consumption that AI systems require without allowing that consumption to become invisible.

Real-time audit trails log every access decision with complete context, and dashboards show request patterns, latency, and exceptions. Teams can take a bird’s eye view of access patterns, allowing them to determine which policies work, which don’t, and which throw off too many exceptions. They can even establish governance SLOs, allowing them to monitor access latency, exception rates, and policy coverage over time.
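A sketch of what one such audit record might look like: a structured event that captures the full context of a decision, including the stated purpose and the decision latency that feeds a governance SLO. The field names are illustrative assumptions, not a fixed schema.

```python
import json
import time

def audit_event(subject: str, dataset: str, purpose: str,
                decision: str, latency_ms: float) -> str:
    """Emit one access decision as a structured, dashboard-ready event."""
    event = {
        "ts": time.time(),
        "subject": subject,        # who (or which agent) asked
        "dataset": dataset,        # what was requested
        "purpose": purpose,        # PBAC context, not just identity
        "decision": decision,      # allow / deny / escalated
        "latency_ms": latency_ms,  # feeds the access-latency SLO
    }
    return json.dumps(event)
```

Dashboards and SLO monitors then aggregate these events rather than reconstructing context from email threads after the fact.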

4. The Human Role

The human role shifts from gatekeeper to exception handler. Humans still play a part, but routine approvals are automated, ensuring manual approval work is only necessary for exceptions and edge cases.

Rather than being “in-the-loop,” managing the approvals as they come through, perhaps aided by automation, human monitors are “on-the-loop.” They take a truly supervisory role, allowing them to monitor and intervene but only when necessary. Otherwise, the loop is hands-off.

The loop and its exceptions, however, are not static. With telemetry, organizations can refine policies and reduce friction over time. As the loop spins, exceptions get even rarer and human intervention less necessary.

Salesforce, for example, built a system along these lines when its Data Spaces and Data Governance teams added new security and governance capabilities to its Data Cloud features. They aimed, as we suggest here, for comprehensive ABAC and policy-based governance.

Over time, says Sapna Vasant Pandit, Director of Software Engineering at Salesforce, “the focus has shifted to more granular security. This involves defining policies at the object, field, and record levels to restrict access based on user roles using the ABAC model.”

Efforts like this, from companies like Salesforce, demonstrate that even large-scale platforms are moving toward attribute-based enforcement. Access patterns are more diverse now than old access models can handle, and this diversity will only increase with the growth of AI.

Measuring Success

Metrics That Matter

An AI-speed operating model built in line with the blueprint above is still only as good as its iteration over time. Building the system is one step, but refining it is how you approach its full potential.

The problem is that traditional metrics tend to miss the point. Consider these typical metrics:

  • Number of policies documented.
  • Compliance audit pass rate.
  • Annual training completion rate.

In the old model, these all made sense. In the new model, they don’t pinpoint where you can improve the system. The traditional assumptions are baked into the metrics: data consumption and approvals are treated as something tacked on at the end.

Instead, AI-speed metrics need to account for AI and take a holistic view of the system. Consider:

  • Access-request lead time: Measure the lead time from request to data availability.
  • Automated policy coverage: Measure the percentage of requests served without human review.
  • Exception queue backlog: Track the unresolved edge cases waiting for human decision.
  • Lineage coverage: Measure the percentage of critical datasets with complete data lineage.
  • Policy drift incidents: Count the cases where deployed systems diverged from approved controls.

These metrics matter because they directly correlate with AI delivery speed and surface bottlenecks in real time. Throughout, they measure governance as an enabler, not an obstacle. Approvals become a natural outcome of the system working as intended, not a limitation on normal functioning.
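Three of these metrics fall directly out of the request records themselves. As a minimal sketch (the record fields are hypothetical), given a list of requests annotated with lead time and review status:

```python
def governance_metrics(requests: list) -> dict:
    """Compute lead time, automation coverage, and exception backlog.

    Each request is a dict with keys: human_review (bool),
    resolved (bool), lead_time_hours (float). Assumes a non-empty list.
    """
    total = len(requests)
    automated = sum(1 for r in requests if not r["human_review"])
    backlog = sum(1 for r in requests
                  if r["human_review"] and not r["resolved"])
    avg_lead = sum(r["lead_time_hours"] for r in requests) / total
    return {
        "access_request_lead_time_hours": avg_lead,
        "automated_policy_coverage_pct": 100 * automated / total,
        "exception_queue_backlog": backlog,
    }
```

Lineage coverage and policy drift need catalog and deployment data respectively, so they would come from other systems, but the shape is the same: measure the flow, not the paperwork.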

How to Make AI More Effective

Ethyca's Approach: Controls in Motion

At Ethyca, we approach AI from the perspective of constraints. What can we enable or free that can improve AI systems and make developers using AI more effective? Today, we don’t see AI limited primarily by models. We see AI constrained by the ability to govern data access and the flexibility to audit data use at scale.

With this constraint in mind, we focus on:

  • Encoding policy and purpose into the data path itself.
  • Enabling the runtime evaluation of every access request based on attributes and context.
  • Offering integration with existing warehouses and catalogs. No rip-and-replace.

With this approach, organizations can implement the operating model we’ve described above.

Domains can move fast because controls are automatic, not bureaucratic. This means implementing new tools, experimenting with new ideas, and developing new features – all without running into constant limitations.

Risk stays consistent because policies are enforced automatically and uniformly across all access points. This means that trust can be centralized and risk managed from one high-leverage angle.

Policies convert from documents into executable decisions. Organizations need to operate at enterprise scale and move at AI speeds; this is only possible when static documents become programmable policies. Scale and speed need to be priorities.

The operational impact we’ve seen so far has been significant. Companies that have worked with us have achieved:

  • Lower access latency.
  • Shrinking exception queues.
  • Efficiency from faster approvals and greater freedom to innovate.

The unlocking effect stems from implementing an operating model that finally matches how AI consumes data.

Organizations that re-engineer their operating model for AI-speed consumption will turn governance from a bottleneck into a competitive advantage — accelerating safely while competitors stall in review cycles.

Ethyca Team

Operationalizing Success

The Operating Model Is the Differentiator

Data consumption is a systems-level problem, not a feature-level problem.

In the end, we take a similar approach to what Eliyahu Goldratt did for operations management, what Toyota did for automobile manufacturing, and what Gene Kim, Kevin Behr, and George Spafford did for DevOps. In the last example, the authors put it well, in The Phoenix Project, writing, “Any improvement not made at the constraint is just an illusion.”

This is where we’re at with AI. Adding more data pipelines won't solve a structural problem. If your approvals, policies, and lineage checks rely on people and PDFs, AI will outrun them. Every improvement on that level is illusory.

The only path forward is a structural transformation, a new operating model that takes AI as its starting point. Centralize risk and policy. Federate build speed across domains. Implement runtime, purpose-aware controls so that decisions happen in the flow, and not the queue.

Because this is such a significant and almost universal constraint, the new operating model will serve as a significant differentiator. Organizations that re-engineer their operating model for AI-speed consumption will turn governance from a bottleneck into a competitive advantage—accelerating safely while competitors stall in review cycles.

Speak with us

If your data governance model is still built around manual approvals and human-paced review cycles, the operational question is simple: can it actually keep up with the speed at which your AI systems consume, share, and act on data?

Ethyca helps teams re-engineer their data operating model — encoding policy into the data path, enabling runtime access decisions, and giving governance teams the visibility to stay in control without becoming a bottleneck. If you're rethinking how your organization governs data at AI speed, speak with us today; we’re happy to compare notes.
