A Practical Python Toolkit for Working with OpenAPI Documents

Erik Wilde

Estimated read time: 2 min

Last updated: January 21, 2026

If you're building anything serious on top of OpenAPI, you'll quickly discover that "having an OpenAPI document" and "being able to work with it programmatically" are two different things.

In this video, Vladimir Gorej talks about Jentic OpenAPI Tools — an open source Python toolkit that exists for a simple reason: in the Python ecosystem, there hasn't really been a strong, widely accepted "standard" library for processing OpenAPI descriptions with the quality and completeness that real-world tooling needs.

If you're working in Python and you've ever found yourself thinking "why is it so hard to just load and work with OpenAPI properly?", this one is for you.

Why this exists (and why it matters)

One of the most interesting parts of the discussion is the origin story: when Vladimir joined Jentic, OpenAPI parsing and processing logic existed in multiple places, implemented slightly differently across repositories. That's a very common situation in growing systems — and it's also a long-term maintenance trap.

The goal behind OpenAPI Tools was to standardize how OpenAPI and related descriptions such as Arazzo are processed, so that anything built on top of these descriptions starts from a consistent foundation.

What's in the toolkit?

OpenAPI Tools is a monorepo with a few core building blocks that cover the main "plumbing" you need when you want to treat OpenAPI descriptions as structured data rather than static files:

  • Parser: a standardized way to parse OpenAPI documents (including OpenAPI-specific YAML requirements; see the example after this list)
  • Validator: validate parsed OpenAPI documents
  • Linting: support for linting via tools like Spectral and Redocly
  • Transformer: common transformations such as dereferencing and bundling
  • Data model: a low-level semantic model (AST-like) for analyzing OpenAPI documents in detail
  • Traversal utilities: tooling to work with the data model effectively
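
The "OpenAPI-specific YAML requirements" in the parser bullet are easy to underestimate. Here is a minimal sketch of the problem using plain PyYAML rather than the toolkit's own parser (whose exact API this doesn't attempt to reproduce): PyYAML follows YAML 1.1, while OpenAPI expects YAML 1.2 / JSON-compatible behaviour.

```python
import yaml  # PyYAML, used here only to demonstrate the pitfall

# In YAML 1.1, unquoted `on`, `yes`, `no`, and `off` resolve to booleans,
# so a naive load silently turns the key `on` into the boolean True.
print(yaml.safe_load("on: callback"))
# -> {True: 'callback'}

# Unquoted response codes resolve to integers, while OpenAPI's JSON form
# keys responses by strings such as "200".
doc = yaml.safe_load("""
paths:
  /pets:
    get:
      responses:
        200:
          description: OK
""")
status = next(iter(doc["paths"]["/pets"]["get"]["responses"]))
print(type(status))
# -> <class 'int'>, where the spec's JSON form would have the string "200"
```

A standardized parser handles these resolution rules once, so every downstream tool sees the same document.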

At the moment, the repo focuses on a low-level model (close to the underlying structure), with the option to add a higher-level abstraction later if needed.
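
To make "low-level and AST-like" concrete: the value of such a model is that every node in the document becomes addressable and traversable. Here is a dependency-free sketch of that idea; the function below is illustrative, not the toolkit's actual API.

```python
from typing import Any, Iterator

def walk(node: Any, path: str = "") -> Iterator[tuple[str, Any]]:
    """Yield (pointer-style path, value) for every node in the document.

    A hand-rolled stand-in for what an AST-like model plus traversal
    utilities provide. Real JSON Pointers escape "/" in keys as "~1";
    this sketch skips that detail.
    """
    yield path or "/", node
    if isinstance(node, dict):
        for key, value in node.items():
            yield from walk(value, f"{path}/{key}")
    elif isinstance(node, list):
        for index, value in enumerate(node):
            yield from walk(value, f"{path}/{index}")

doc = {
    "openapi": "3.1.0",
    "paths": {"/pets": {"get": {"responses": {"200": {"description": "OK"}}}}},
}

# Find every `description` field, wherever it appears in the document.
for path, value in walk(doc):
    if path.endswith("/description"):
        print(path, "->", value)
```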

What can you build with it?

Once you have reliable parsing and a solid internal representation, you can stop reinventing the same "OpenAPI handling layer" and focus on what you actually want to build.

Some examples we touch on in the video, with a toy lint rule sketched after the list:

  • Building validators and linters on top of OpenAPI descriptions
  • Building analysis tooling (including scoring systems for "AI readiness" such as the Jentic API Scorecard)
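
As a flavour of the first point, a lint rule can be as small as a function over the parsed document. This toy rule flags operations that lack an operationId; it works on a plain dict to stay self-contained and is not part of the toolkit's API.

```python
HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}

def missing_operation_ids(doc: dict) -> list[str]:
    """Toy lint rule: report every operation without an operationId."""
    problems = []
    for path, path_item in doc.get("paths", {}).items():
        for method, operation in path_item.items():
            if method in HTTP_METHODS and "operationId" not in operation:
                problems.append(f"{method.upper()} {path}: missing operationId")
    return problems

doc = {
    "openapi": "3.1.0",
    "paths": {
        "/pets": {
            "get": {"operationId": "listPets", "responses": {}},
            "post": {"responses": {}},  # no operationId: gets flagged
        }
    },
}

print(missing_operation_ids(doc))
# -> ['POST /pets: missing operationId']
```

Everything beyond the rule itself (parsing, reference resolution, locating the offending node) is exactly the plumbing the toolkit standardizes.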
