
Jentic CTO AMA: Building Reliable AI Agents

Rod Rivera
Estimated read time: 4 min
Last updated: September 12, 2025
Our CTO Michael Cordner recently joined an Ask Me Anything session in our Jentic Discord Community. He shared insights from over two decades of engineering experience and his work building Jentic's agent infrastructure. Here are the key takeaways for teams building production AI agents.
What Actually Is an Agent?
Michael cut through the complexity:
"An agent is any simple piece of code that can call an API and can also call an LLM that can decide when to call these functions."
This definition strips away the hype and focuses on the core functionality. The Standard Agent embodies this philosophy: it's deliberately simple, and that simplicity is the point.
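Michael's definition can be sketched in a few lines. This is a minimal, illustrative loop, not the Standard Agent itself: the LLM is stubbed out so the example is self-contained, and every function name here is hypothetical. In practice the stub would be a real chat-completion call that returns a tool-use decision.

```python
# Minimal agent: an LLM decides when to call which registered function.
# The LLM is stubbed so this sketch runs standalone; all names are illustrative.

def get_weather(city: str) -> str:
    """Stand-in for a real API call."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def stub_llm(prompt: str) -> dict:
    """Pretend the model chose a tool; a real LLM would make this decision."""
    if "weather" in prompt.lower():
        return {"tool": "get_weather", "args": {"city": "Dublin"}}
    return {"tool": None, "answer": "No tool needed."}

def run_agent(task: str) -> str:
    decision = stub_llm(task)            # 1. ask the LLM what to do
    if decision["tool"]:                 # 2. dispatch to the chosen function
        return TOOLS[decision["tool"]](**decision["args"])
    return decision["answer"]            # 3. or answer directly

print(run_agent("What's the weather in Dublin?"))  # Sunny in Dublin
```

That really is the whole shape of it: a registry of callable functions, plus a model that decides when to use them.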
Single Agents Before Multi-Agent Orchestration
When asked about multi-agent frameworks, Michael offered a contrarian view:
"I don't understand how anyone can be excited about multi-agent orchestration when most places can't even deploy a single agent reliably or usefully."
His recommendation: Start with focused, granular agents. Build a "receipt filing agent" and "email reading agent" rather than attempting a general-purpose "assistant agent." Master single-agent deployment before tackling orchestration challenges.
Making Agents Production-Ready
The biggest insight from the AMA centered on reliability. Michael explained how Jentic approaches this challenge:
"Run agents in development or testing mode against simulated APIs or sandboxes. Let the agents figure out new workflows in a place where they're allowed to fail. When they succeed, capture those workflows where they're auditable, repeatable, verifiable."
This approach uses AI where it excels (development environments, where iteration and failure are expected) while ensuring production systems run deterministic, tested workflows.
The Problem with Prompt-Driven Glue Logic
Traditional agent approaches rely heavily on prompts to connect different systems. Michael highlighted the core issue:
"Prompt-driven glue logic gives you unrepeatable results, which are near impossible to test."
Jentic's solution involves capturing successful agent interactions as Arazzo workflows. These become auditable, repeatable tools that agents can reliably execute in production environments.
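The capture-and-replay idea can be sketched as follows. This is a simplified illustration of the concept, not the Arazzo format itself (Arazzo is an OpenAPI Initiative workflow specification); the JSON shape and function names here are invented for the example.

```python
import json

# Capture a successful sequence of API calls, then replay it
# deterministically with no LLM in the loop. The format below is
# illustrative only, not the Arazzo spec.

def capture_workflow(steps):
    """Serialise a successful run so it is auditable and repeatable."""
    return json.dumps({"workflowId": "file-receipt", "steps": steps}, indent=2)

def replay_workflow(workflow_json, api_registry):
    """Execute the recorded steps in order, exactly as captured."""
    workflow = json.loads(workflow_json)
    results = []
    for step in workflow["steps"]:
        fn = api_registry[step["operation"]]
        results.append(fn(**step["params"]))
    return results

# Example: replay a captured two-step workflow against a mock registry.
registry = {"upload": lambda path: f"uploaded {path}",
            "tag": lambda doc, label: f"tagged {doc} as {label}"}
recorded = capture_workflow([
    {"operation": "upload", "params": {"path": "receipt.pdf"}},
    {"operation": "tag", "params": {"doc": "receipt.pdf", "label": "expenses"}},
])
print(replay_workflow(recorded, registry))
# ['uploaded receipt.pdf', 'tagged receipt.pdf as expenses']
```

The key property is that the replay path contains no model calls at all, so the production behaviour is fixed once the workflow is captured.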
Just-in-Time Tooling (JITT)
Context management emerged as a critical challenge. Michael explained JITT's approach:
"Don't load up the context you're feeding into an LLM with all sorts of tools that you don't need for the problem you're currently trying to solve."
This contrasts with frameworks that dump extensive tool collections into context, hoping something will be useful but often creating confusion instead.
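The JITT idea reduces to a filtering step before the prompt is built. A minimal sketch, using naive keyword overlap where a real system would use search or ranking; the tool catalogue and names are hypothetical.

```python
# Just-in-time tooling sketch: select only the tools relevant to the
# current task before building the LLM context, instead of sending the
# whole catalogue. Keyword overlap stands in for real retrieval.

TOOL_CATALOGUE = {
    "send_email":     "send an email message to a recipient",
    "file_receipt":   "file a receipt into the expenses system",
    "search_flights": "search for available flights",
    "create_invoice": "create and send a customer invoice",
}

def select_tools(task: str, catalogue: dict, limit: int = 2) -> dict:
    """Score tools by word overlap with the task and keep the top few."""
    words = set(task.lower().split())
    scored = sorted(catalogue.items(),
                    key=lambda kv: -len(words & set(kv[1].split())))
    return dict(scored[:limit])

context_tools = select_tools("file this receipt into expenses", TOOL_CATALOGUE)
print(list(context_tools))
```

Only the selected subset goes into the context, keeping the prompt small and the model's choices unambiguous.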
Choosing Your Agent Stack
With the proliferation of agent frameworks, Michael advised engineering judgment over feature complexity:
"Keep things simple and use your engineering judgment. Don't pick heavy, complicated, opinionated stacks because the opinions baked into them are too early. Stick to open-source and standards everywhere."
For newcomers, his recommendation was clear: start with first principles or use the Standard Agent to understand the fundamentals before adding complexity.
Practical Advice for CS Grads
Michael shared three key points for new developers entering the AI space:
- Be better than the AI: LLMs excel at average work. Differentiate yourself by going beyond average
- Show genuine passion: Build personal projects and contribute to open source beyond course requirements
- Learn to collaborate with AI: Use AI for routine tasks while maximizing your unique contributions
The Future of Business Automation
Rather than replacing SaaS entirely, Michael sees agents enabling "sovereign stacks": internally developed tools tailored to company workflows that keep data centralized and accessible for AI systems. This approach requires dedicated technical talent, but it promises to keep data out of silos while making it more useful for agentic AI applications.
Testing Multi-Step Agent Tasks
For complex workflows, Jentic's approach focuses on capturing successful agent reasoning as deterministic workflows through Arazzo. This makes testing straightforward:
"Recorded workflows are easy to audit and test. Testing an agent that will do things differently every time is hard."
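Because a recorded workflow runs the same steps every time, it can be tested like any other deterministic code path. A hedged sketch, with mock APIs standing in for the sandbox the agent originally ran against; all names here are invented for illustration.

```python
# Testing a recorded multi-step workflow: deterministic replay means an
# ordinary unit test with mocked APIs is enough. Names are illustrative.

def run_recorded_workflow(steps, apis):
    """Replay captured steps against an API registry (real or mocked)."""
    return [apis[s["op"]](**s["params"]) for s in steps]

# The captured workflow under test.
RECEIPT_WORKFLOW = [
    {"op": "upload", "params": {"path": "receipt.pdf"}},
    {"op": "tag", "params": {"label": "expenses"}},
]

def test_receipt_workflow():
    # Mocks verify the workflow calls the right operations with the
    # right parameters, in order.
    mock_apis = {"upload": lambda path: {"id": "doc-1"},
                 "tag": lambda label: {"tagged": label}}
    results = run_recorded_workflow(RECEIPT_WORKFLOW, mock_apis)
    assert results == [{"id": "doc-1"}, {"tagged": "expenses"}]

test_receipt_workflow()
print("workflow test passed")
```

Contrast this with testing a live agent, where the model may choose a different sequence of calls on every run and there is no fixed expected trace to assert against.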
Getting Started
For teams beginning their agent journey, Michael's advice emphasizes starting simple and building understanding before adding complexity. The Standard Agent provides a minimal foundation that takes "maybe an hour before you realize it really is this simple."
You can explore the Standard Agent or the Arazzo Engine and contribute to Jentic's open-source projects on GitHub, or learn more about our approach to agent infrastructure at jentic.ai.
Michael Cordner is Co-Founder and CTO at Jentic, leading development of the Standard Agent and Arazzo Engine. With over 20 years of experience in distributed systems, blockchain, and identity management, he focuses on building reliable infrastructure for production AI agents.