We believe agents are the future of software, that agents run on APIs, and that agents are limited only by the tools you give them. The promise of general-purpose agents is tantalising, but increasing agent capabilities by stuffing tool information into the context window doesn't scale. Whether you are hand-crafting tool descriptions in your system prompt, loading tool definitions into your LLM's tool-calling API, or loading an MCP manifest, you don't get far. Tool calling accuracy starts to decline surprisingly early, after adding just a handful of tools. Beyond that point you face a perverse situation in which trying to improve tool calling reliability by providing extra tool detail actually backfires. The core reasons are:
One way to navigate these issues is to lower your ambitions and only build agents that use a few tools. That's fine, but it leaves a lot on the table. Another approach is a multi-agent architecture that splits your agent into mini-agents, each specializing in a subset of tools. But this just kicks the problem down the road: you hit new context window limits as soon as you try to get your mini-agents to route internal tool calls or intents. And in any case, it's a shame to significantly complicate your architecture just to scale tool use.
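To make the routing problem concrete, here is a minimal, hypothetical sketch (the MiniAgent and RouterAgent classes are illustrative, not taken from any real framework). The router still has to hold a description of every mini-agent in its own context before it can delegate, so the scaling bottleneck simply reappears one level up.

```python
from dataclasses import dataclass, field


@dataclass
class MiniAgent:
    name: str
    description: str                                   # the router must keep this in context
    tools: list[dict] = field(default_factory=list)    # tool definitions for this small subset

    def run(self, task: str) -> str:
        # Placeholder: a real mini-agent would call an LLM with only its own tools loaded.
        return f"[{self.name}] handled: {task}"


class RouterAgent:
    def __init__(self, agents: list[MiniAgent]):
        self.agents = agents

    def route(self, task: str) -> str:
        # Toy routing by keyword overlap. A real router would prompt an LLM with
        # every mini-agent description -- the same context-scaling problem, one level up.
        words = set(task.lower().split())
        best = max(self.agents,
                   key=lambda a: len(words & set(a.description.lower().split())))
        return best.run(task)


crm = MiniAgent("crm", "Handles CRM contact lookups and updates", tools=[{"name": "get_contact"}])
billing = MiniAgent("billing", "Handles invoices and refund requests", tools=[{"name": "issue_refund"}])
print(RouterAgent([crm, billing]).route("refund order 1234"))  # -> [billing] handled: refund order 1234
```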
MCP is a big step forward for agent tooling, but it does not solve these problems. Developers (and presumably some hackers) are posting hundreds of new MCP servers every day onto MCP directories like Smithery and Glama (5,000+ servers and counting). Are AI devs supposed to front-load all of these into their agents? That doesn't scale, for the reasons stated above, and it has other downsides:
"MCP is useful when you want to bring tools to an agent you don’t control." - Harrison Chase, CEO of Langchain
MCP provides a valuable, standard mechanism to connect an agent to an external system. But it's a "USB-C" port, not a hard drive; the protocol, not the knowledge layer. It is great at what it was designed for, but it performs poorly as a universal knowledge schema. Its natural-language descriptions and basic parameter lists are ergonomic for LLMs, but they cannot represent the detailed information captured in established API schemas like OpenAPI. Reliable agentic planning and execution may require on-demand details of authentication flows, error handling, complex data types, data governance policies, pricing, rate limits, and workflow logic. That level of detail could not be represented in MCP without compromising its simplicity and ease of use. What makes MCP good for agents makes it poor as a universal API schema.
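As a rough illustration (both snippets are simplified and invented, not drawn from any real service or spec), compare what an MCP-style tool definition typically carries with what an OpenAPI operation for the same endpoint can express:

```python
# What an MCP-style tool definition typically exposes: a name, a natural-language
# description, and a JSON Schema for the inputs. Ideal for an LLM at call time.
mcp_tool = {
    "name": "create_invoice",
    "description": "Create an invoice for a customer.",
    "inputSchema": {
        "type": "object",
        "properties": {"customer_id": {"type": "string"}, "amount": {"type": "number"}},
        "required": ["customer_id", "amount"],
    },
}

# The OpenAPI operation for the same endpoint can also carry auth, error handling,
# rate limits and (via hypothetical vendor extensions) pricing or governance policy.
openapi_operation = {
    "post": {
        "operationId": "createInvoice",
        "security": [{"oauth2": ["invoices:write"]}],                        # authentication flow
        "requestBody": {"$ref": "#/components/requestBodies/Invoice"},       # complex data types
        "responses": {
            "201": {"$ref": "#/components/responses/InvoiceCreated"},
            "402": {"description": "Payment required: account over quota"},  # error handling
            "429": {"description": "Rate limit exceeded"},                   # rate limits
        },
        "x-pricing": {"per_call_usd": 0.002},                                # hypothetical extension
    },
}
```

The first shape is exactly what an LLM wants in its context at call time; the second is where the detail that reliable planning depends on actually lives.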
We believe agents will complement (not replace) the vast distributed infrastructure of websites and web services that already exists. MCP will play an important role in connecting agents to the knowledge layer, but it will not itself be the knowledge layer. The canonical documentation for each web service should be maintained in whatever open, machine-readable format lets the relevant detail and nuance be expressed: OpenAPI for REST services, Arazzo for workflows, and maybe even new formats like A2A, ACP or AGNTCY for agents. On the client side, MCP will connect agents to this knowledge.
The knowledge layer is the foundation for high-performing, highly capable and reliable agents. That's why we launched OAK, the Open Agentic Knowledge repository. OAK is an open-source, declarative knowledge layer that stores canonical, schematized representations of APIs, workflows, and other machine interfaces. It allows agents to access detailed, AI-optimized information on demand, while enabling developers and the broader community to collaboratively expand, refine, and govern the tool knowledge agents rely on. It's the missing substrate for scalable, capable and reliable tool use.
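To show what "on demand" means in practice, here is a deliberately hypothetical sketch; KnowledgeRepo, search and get_operation are illustrative names, not the OAK or Jentic API. Instead of front-loading every tool definition, the agent discovers the operation it needs first and only then pulls that operation's detailed schema into context.

```python
class KnowledgeRepo:
    """Stand-in for a declarative knowledge layer of schematized API operations."""

    def __init__(self, operations: dict[str, dict]):
        self.operations = operations

    def search(self, query: str) -> list[str]:
        # Toy keyword lookup; a real knowledge layer would rank by relevance.
        return [op_id for op_id, op in self.operations.items()
                if query.lower() in op["summary"].lower()]

    def get_operation(self, op_id: str) -> dict:
        # Returns the full schematized definition (auth, error handling, data types, ...),
        # which the agent loads into context only once it has committed to this tool.
        return self.operations[op_id]


repo = KnowledgeRepo({
    "payments.createRefund": {"summary": "Issue a refund for a charge", "schema": {}},
    "crm.getContact": {"summary": "Fetch a CRM contact by email", "schema": {}},
})

matches = repo.search("refund")             # discover first, with almost nothing in context
operation = repo.get_operation(matches[0])  # then load just this operation's schema
print(matches, operation["summary"])
```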
And now you can easily plug your agents into OAK over MCP using Jentic. For more information, check out the launch post or head over to our installation instructions.