
How many agents does it take to change a light bulb?

Sean Blanchfield

Reviewed by
Michael Cordner
Estimated read time: 5 min
Last updated: May 26, 2025
There's a growing trend in AI agent development towards "agent-to-agent" communication protocols — such as A2A, ACP, AGNTCY, and MCP — where client-side agents send natural language requests to vendor-hosted server-side agents. I think this agent-to-agent approach is misguided for most real-world use cases.
Workflows are a better and simpler alternative to the agent-to-agent architecture - one that allows your agent to efficiently and reliably get its work done, even when it spans multiple API vendors. Think of workflows like "muscle memory" for your agents.
Just Use the API
Instead of sending natural language requests to a vendor's agent, hoping it'll correctly interpret your intent and successfully execute it, your client-side agent can perform operations directly. There's usually not much to be gained by having multiple AIs involved in the process, and it introduces a lot more scope for failure, cost and slowness.
Let's explore getting your client-side agents to do the work, along with a series of straightforward refinements:
- Call API operations directly. Most APIs are already described in detail using the widely adopted OpenAPI standard, which provides machine-readable descriptions of each available operation. If you point your LLM at the relevant OpenAPI spec, you'll generally find it will one-shot perfect API calls, even if the JSON format is a bit expensive on tokens. With a little light tooling, you can do some last-minute filtering of the relevant portions of the OpenAPI spec and translate them into token-efficient markdown.
- Prefer high-level API operations that correspond more directly to your agent's current intent. Many well-designed APIs provide operations at multiple levels of abstraction, exposing both primitive low-level operations and high-level workflow operations. For example, instead of using Stripe's Charges API you can use the Payment Intents API. As agents become the primary consumers of APIs, expect more API vendors to expand their APIs with intent-level operations.
- Tell your agent to read the docs. If no high-level API operation matches your intent, your agent will need to string together low-level operations to achieve its goal. Give your agent the API docs (or tell it to search the web) to help it solve this problem. Loading workflow knowledge from the web allows your agent to more reliably orchestrate any workflow that is documented online.
- Store workflows in a RAG store. To speed up common workflows or to describe novel ones, you can write the steps as a bullet list in a document in your RAG store, and tell your agent to search it before the web. Think of it as "standard operating procedures" for agents.
- Embrace Arazzo. The ultimate refinement is for your agent to retrieve API workflows as detailed, machine-readable schemas that complement the relevant OpenAPI spec. It can then run the workflow in regular code, without your LLM in the loop. This is deterministic, cheap, fast and reliable. The OpenAPI Initiative's Arazzo specification is built for exactly this: a standardized, declarative format for defining API workflows, potentially spanning vendors, all built on top of OpenAPI. We hope to see a surge in API vendors providing official Arazzo workflows for agents. This is a powerful, interoperable and agent-first approach that is preferable to expanding APIs with intent-oriented operations.
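The spec-filtering refinement in the first bullet can be sketched in a few lines. This is a minimal illustration, assuming the OpenAPI document has already been parsed into a Python dict; the function name and output layout are our own choices, not part of any standard:

```python
def spec_to_markdown(spec, keep=None):
    """Render the operations of a parsed OpenAPI spec as compact markdown.

    `keep` optionally restricts output to a set of operationIds, so only
    the portions relevant to the agent's current task reach the prompt.
    """
    lines = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            # Skip path-level keys like "parameters" that are not operations.
            if method.lower() not in {"get", "post", "put", "patch", "delete"}:
                continue
            op_id = op.get("operationId", f"{method} {path}")
            if keep is not None and op_id not in keep:
                continue
            lines.append(f"### {op_id}")
            summary = op.get("summary", "")
            lines.append(f"`{method.upper()} {path}`" + (f": {summary}" if summary else ""))
            for p in op.get("parameters", []):
                req = " (required)" if p.get("required") else ""
                lines.append(f"- {p['name']} ({p.get('in', 'query')}){req}: {p.get('description', '')}")
    return "\n".join(lines)
```

Because the output is plain markdown, the same helper works as a last-mile filter for any OpenAPI-described vendor: fetch the spec, keep only the operations relevant to the current task, and hand the result to the LLM.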
At Jentic we have fully embraced Arazzo, and have published the largest collection of Arazzo specifications to date in the Open Agentic Knowledge (OAK) repository. In addition, our open-source oak-runner library (available in the OAK repository) allows your agent to deterministically execute any Arazzo workflow or OpenAPI operation in a single step. Finally, our hosted service allows your agent to search and load these workflows and operations through a single MCP integration.
Don't Forget Multi-Vendor Workflows
Agent-to-agent architectures fall apart with multi-vendor workflows. Many agents need to work with multiple vendors to do their work. Perhaps the agent needs to check a payment in Stripe and update the corresponding customer in HubSpot. In an agent-to-agent architecture, client-side agents end up performing multi-agent orchestration, coordinating workflows in natural language between multiple remote vendor agents. This means multiple layers of non-deterministic LLM reasoning, a drastic increase in potential errors, slower execution, higher costs, opaque black-box debugging and poor auditability.
In contrast, a client-side agent that runs deterministic Arazzo workflows is cheap, fast and repeatable, and executes multi-vendor workflows as easily as single-vendor workflows.
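To make the Stripe-to-HubSpot example above concrete, a hypothetical Arazzo workflow might look roughly like this. The spec URLs, operation IDs and field names are illustrative placeholders, not the vendors' real ones:

```yaml
arazzo: 1.0.0
info:
  title: Sync paid customer to CRM
  version: 1.0.0
sourceDescriptions:
  - name: stripe
    url: https://example.com/specs/stripe-openapi.yaml
    type: openapi
  - name: hubspot
    url: https://example.com/specs/hubspot-openapi.yaml
    type: openapi
workflows:
  - workflowId: syncPaidCustomer
    inputs:
      type: object
      properties:
        paymentIntentId:
          type: string
    steps:
      - stepId: checkPayment
        operationId: $sourceDescriptions.stripe.getPaymentIntent
        parameters:
          - name: id
            in: path
            value: $inputs.paymentIntentId
        successCriteria:
          - condition: $statusCode == 200
        outputs:
          email: $response.body#/receipt_email
      - stepId: updateCrmContact
        operationId: $sourceDescriptions.hubspot.updateContactByEmail
        parameters:
          - name: email
            in: query
            value: $steps.checkPayment.outputs.email
        successCriteria:
          - condition: $statusCode == 200
```

Each step references an OpenAPI operation from one of the source descriptions, and step outputs feed later steps via runtime expressions, so a plain interpreter can execute the whole cross-vendor chain deterministically, with no LLM in the loop.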
When Server-Side Agents Still Add Value
Server-side agents will be valuable when they know something your client-side agent doesn't. Perhaps the server-side agent was fine-tuned on proprietary data (e.g., fraud detection in Stripe's new Payments Foundation Model, or customer support in the case of Intercom's Fin). However, most of the time the agent with the best context to solve the problem will be the client-side agent.
Therefore, we expect server-side agents ultimately to complement, but not replace, existing API infrastructure.
The Future
At Jentic, we are pushing towards a future in which:
- The OAK repository provides easy discovery and reuse of millions of open-source deterministic workflows spanning all major API vendors.
- Agents dynamically discover and run workflows via a standard library like oak-runner.
- A flywheel of high-quality verified workflows drives global agent capability and reliability, by progressively encoding common workflows into a collective LLM "muscle memory" that can be replayed in regular deterministic code.
At Jentic, we're building everything needed for this future: an ecosystem where agents access unlimited open-source deterministic workflows, and execute them reliably, cheaply, quickly, and securely, without unnecessary intermediaries.