Bring order to AI with MuleSoft
AI & API operations with enterprise control.
Today we're announcing the general availability of AI Gateway's LLM capabilities, giving platform, FinOps, and security teams a single governed control plane for every Large Language Model (LLM) interaction across the enterprise. This complements our existing Gateway capabilities for Model Context Protocol (MCP) and Agent-to-Agent (A2A) governance.
IT organizations have long built managed gateways to secure their API infrastructure.
As AI becomes as fundamental to enterprise operations as APIs, that governance layer has to extend to every LLM call and every AI interaction. Most teams start by reaching for a lightweight open source gateway, but enterprise scale demands enterprise governance: cost controls, audit trails, model portability, and policy enforcement across business units.
The reality for most organizations today is that AI adoption has outpaced the infrastructure to support it. Teams are racing to keep up, hastily standing up their own model access, spend tracking, and guardrails, and many haven't even gotten that far. The result is an AI estate that's difficult to audit, expensive to optimize, and hard to trust at scale. Every request defaults to the most expensive model regardless of the task. Costs accumulate in cloud bills nobody saw coming. And when something goes wrong, there's no single place to look.
MuleSoft's AI Gateway LLM capabilities change that by empowering teams to build a truly multi-model LLM infrastructure with intelligent routing, unified multi-provider access, and full token usage accountability.
Combined with our existing support for MCP and A2A agent governance, MuleSoft now provides a single control plane across the full spectrum of enterprise AI activity.
The operational gap in enterprise AI
The pace at which teams have adopted AI models has outrun the infrastructure to manage them. Access patterns are inconsistent, spend accountability is loose, and governance — where it exists — lives inside individual applications rather than at a shared control point. According to Gartner, 70% of organizations building multi-LLM applications will use AI gateway capabilities to optimize cost and performance outcomes by 2027, up from less than 5% in 2024. The gap between where enterprises are and where they need to be is closing fast, and the organizations closing it deliberately are the ones that will scale AI with confidence.
Managing APIs, MCP, and LLM traffic on separate infrastructure is an operational tax most teams can't afford. Consolidating them means unified visibility, consistent policy enforcement, and one less platform to maintain. MuleSoft's AI Gateway brings all three onto a single governance layer: the same technology securing enterprise APIs today, with no additional overhead. That means extending the same policies, identity configurations, and observability to LLM, MCP, and A2A traffic.
Govern, Route, and Optimize Every LLM Interaction
Without routing logic in place, every request defaults to the most expensive model available, regardless of what the task actually demands. Semantic routing rectifies this by automatically matching each prompt to the right model based on content, so routine tasks and complex workloads are handled and priced accordingly.
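The routing idea above can be sketched in a few lines. This is a hypothetical illustration only, not MuleSoft's implementation: the model names, costs, and the keyword heuristic (standing in for a real semantic classifier) are all assumptions.

```python
# Hypothetical sketch of semantic routing. A production gateway would use
# an embedding-based classifier; a keyword heuristic stands in here.

MODEL_TIERS = {
    "light": {"model": "small-fast-model", "cost_per_1k_tokens": 0.0005},
    "heavy": {"model": "large-reasoning-model", "cost_per_1k_tokens": 0.03},
}

# Signals that a prompt demands a more capable (and expensive) model.
HEAVY_SIGNALS = ("analyze", "reason", "multi-step", "legal", "architecture")

def route_prompt(prompt: str) -> dict:
    """Match a prompt to the cheapest model tier that can handle it."""
    text = prompt.lower()
    tier = "heavy" if any(signal in text for signal in HEAVY_SIGNALS) else "light"
    return MODEL_TIERS[tier]

print(route_prompt("Summarize this email")["model"])
# -> "small-fast-model"
print(route_prompt("Analyze the legal implications of this contract")["model"])
# -> "large-reasoning-model"
```

The point is the shape of the decision: routine tasks never reach the premium tier, so cost tracks task complexity instead of defaulting to the most expensive model.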
Behind a single governed endpoint, platform teams control which models are available, to whom, and under what conditions — with automatic fallbacks on provider outages and new models coming online without touching application code. Token budgets and rate limits enforced at the gateway tie consumption directly to the business groups and applications that own it, while prompt governance, PII detection, and content safety apply uniformly across every team rather than being left to individual implementation.
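Gateway-enforced token budgets work roughly like the sketch below. The class and field names are illustrative assumptions, not MuleSoft's API; the idea is simply that admission control happens before the request reaches any provider.

```python
# Illustrative sketch of per-team token budgets enforced at a gateway.
# Names and limits are assumptions for illustration only.

class TokenBudget:
    def __init__(self, monthly_limit: int):
        self.monthly_limit = monthly_limit
        self.used = 0

    def authorize(self, requested_tokens: int) -> bool:
        """Admit the request only if it fits the remaining budget."""
        if self.used + requested_tokens > self.monthly_limit:
            return False  # rejected at the gateway, before any provider call
        self.used += requested_tokens
        return True

budgets = {"marketing": TokenBudget(10_000), "engineering": TokenBudget(50_000)}

print(budgets["marketing"].authorize(8_000))   # True: within budget
print(budgets["marketing"].authorize(5_000))   # False: would exceed limit
```

Because the ledger is keyed by team, consumption ties directly back to the business group that owns it, which is what makes chargeback and overage prevention possible.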
For existing MuleSoft customers on Flex Gateway, all of this is available today at no additional cost on Platinum, Titanium, Unlimited, and Integration Advanced tiers.
Built for the Agentic Enterprise
As organizations move from AI assistance to AI agency, the governance problem grows beyond model access. Through native support for MCP and A2A protocols, AI Gateway extends policy enforcement, auditability, and access controls to every agent-to-agent and agent-to-application interaction.
For teams looking to put existing systems within reach of agents, MCP Bridge provides a practical path forward. Rather than rebuilding integrations for every AI use case, existing enterprise APIs are exposed as agent-ready tools directly through the gateway. Access controls, audit logging, and policy enforcement are inherited automatically, removing the primary bottleneck between proof-of-concept and production.
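The bridging pattern can be sketched as a small descriptor that wraps an existing endpoint. The field names loosely mirror MCP tool schemas, but this is an assumed shape for illustration, not MCP Bridge's actual configuration format, and the endpoint URL is hypothetical.

```python
# Hypothetical sketch: describing an existing REST API as an agent-ready
# tool. Field names approximate MCP tool schemas; they are assumptions.

def api_to_tool(name: str, description: str, endpoint: str, params: dict) -> dict:
    """Wrap an existing API endpoint in an MCP-style tool descriptor."""
    return {
        "name": name,
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
        # Gateway-side detail: where the tool call is actually proxied.
        "_endpoint": endpoint,
    }

tool = api_to_tool(
    name="get_order_status",
    description="Look up an order by ID in the order management system",
    endpoint="https://api.example.com/orders/{order_id}",
    params={"order_id": {"type": "string"}},
)
```

The key property is that the backend API is untouched: the gateway layers the tool description, access control, and audit logging on top of an integration that already exists.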
MuleSoft pioneered API management and the idea of composability, and governed control of AI interactions extends that legacy into the agentic enterprise: one control plane, one audit trail, and one set of policies, whether the request originates from a developer, an application, or an agent acting autonomously on your systems.
Use cases
Cost-optimized AI across teams
True multi-model management means every prompt is automatically matched to the most cost-appropriate model, with token budgets and rate limits enforced per team before overages occur.
Governing a multi-vendor AI strategy
Running OpenAI, Azure, and Gemini simultaneously shouldn't mean three separate governance problems. AI Gateway enforces consistent policy across every approved provider, with the flexibility to swap or onboard models without application changes.
Compliant enterprise AI deployments
Governance applied at the gateway means prompt protections, content safety, and audit logging are consistent by default, removing the compliance bottleneck that stalls most AI projects before they reach production.
Putting enterprise systems within reach of agents
With MCP Bridge and support for the MCP and A2A protocols, agents can act on approved enterprise systems with access controls and auditability inherited at the gateway, with no backend modifications required.
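The provider-swap and outage behavior behind a single governed endpoint can be sketched as an ordered fallback over approved providers. The provider functions below are stand-ins; real traffic would go to the actual model vendors through the gateway.

```python
# Minimal sketch of provider fallback behind one governed endpoint.
# The provider call interface is an illustrative assumption.

def call_with_fallback(prompt: str, providers: list) -> str:
    """Try each approved provider in order; fall back on outage."""
    for provider in providers:
        try:
            return provider(prompt)
        except ConnectionError:
            continue  # provider outage: try the next approved model
    raise RuntimeError("all approved providers unavailable")

def flaky_provider(prompt: str) -> str:
    raise ConnectionError("simulated provider outage")

def healthy_provider(prompt: str) -> str:
    return f"response to: {prompt}"

print(call_with_fallback("hello", [flaky_provider, healthy_provider]))
# -> "response to: hello"
```

Because applications only ever see the single endpoint, the ordered provider list can be swapped or extended at the gateway without any application changes.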
What’s Next
The enterprises successfully scaling AI today are moving fast not because AI itself accelerates them, but because they've taken the time to lay a foundation of operational governance that's now paying dividends at the strategic level.
When every LLM interaction is governed, every agent action is traceable, every cost is accountable, and every policy applies consistently across models, tools, and protocols, the operational risk that typically forces organizations to slow down or halt AI rollouts stops accumulating.
That foundation is exactly what AI Gateway is built to provide across the full enterprise AI stack — from LLM traffic and MCP tool access to A2A interactions — and for most MuleSoft customers, it's already within reach.
For customers on Flex Gateway, these capabilities are available today at no additional cost for Platinum, Titanium, Unlimited, and Integration Advanced tiers. The policies, identity configurations, and observability tooling already in place extend directly to LLM, MCP, and agent traffic — no new vendor, no parallel governance stack, no additional contract.
We'll continue to extend AI Gateway as the demands of enterprise AI operations evolve.
To learn more about the state of agentic transformation and the future of AI, download this year's Connectivity Benchmark Report and subscribe to our newsletter, Technically Speaking.
Extend your AI capabilities with MuleSoft.
Start your trial.
Try MuleSoft Anypoint Platform free for 30 days. No credit card, no installations.
Talk to an expert.
Tell us a bit more so the right person can reach out faster.
Stay up to date.
Get the latest news about integration, automation, API management, and AI.