DEV Community

Leo Pessoa


The Schema IS the Prompt: Rethinking AI-Native Software Design

Every few years, a new abstraction arrives that makes the previous one feel embarrassing in retrospect.

Raw SQL gave way to ORMs. Callback hell gave way to async/await. Manual memory management gave way to garbage collection. Each shift wasn't just about writing less code — it was about thinking differently about the problem.

We're at one of those moments with AI.


The way we learned to talk to LLMs

Most AI-powered applications today follow the same pattern:

  1. Write a prompt string
  2. Interpolate some variables into it
  3. Call the API
  4. Parse the response (pray it's valid JSON)
  5. Map it to your data model
  6. Validate
  7. Handle the cases where it isn't valid
  8. Retry

This is prompt-centric thinking. The prompt is the program. The data model is just a container waiting at the end of a pipeline.

It works. But it's the callback hell of AI development — functional, verbose, and fragile in ways that only show up in production.
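As a concrete sketch of that pipeline, with a hypothetical `call_llm` stub standing in for a real provider SDK (everything here except `json` is illustrative, not a real API):

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stub standing in for a real provider call;
    # in production this would be an HTTP request to an LLM API.
    return '{"client": "Tesla", "budget": 50000}'

def extract_proposal(text: str, retries: int = 3) -> dict:
    # Steps 1-2: write the prompt string and interpolate variables
    prompt = f"Extract a proposal as JSON from: {text}"
    for _ in range(retries):                              # step 8: retry
        raw = call_llm(prompt)                            # step 3: call the API
        try:
            data = json.loads(raw)                        # step 4: parse (and pray)
        except json.JSONDecodeError:
            continue                                      # step 7: handle invalid output
        if isinstance(data.get("budget"), (int, float)):  # step 6: validate
            return data                                   # step 5: "map" (here, a bare dict)
    raise ValueError("LLM never returned valid JSON")
```

Notice that only one line is the actual ask; steps 4 through 8 are ceremony that every caller re-implements.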


What if we inverted the relationship?

Here's the question that changes everything:

What if the data model declared its own intent — and the prompt was just an implementation detail?

Instead of:

prompt → LLM → parse → validate → object

What if it was:

object schema + natural language → typed instance

This is the inversion. The schema is the prompt. The structure of your data model encodes what you're asking the LLM to produce. You stop writing instructions about what you want and start defining the shape of what you need.
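The post doesn't show exomodel's internals, but the core idea can be illustrated with nothing beyond the standard library: a schema can mechanically generate its own prompt. (The `schema_to_prompt` helper below is a hypothetical sketch, not part of exomodel.)

```python
from dataclasses import dataclass, fields

@dataclass
class Proposal:
    client: str
    budget: float
    timeline_weeks: int

def schema_to_prompt(cls) -> str:
    # The field names and types ARE the instruction: nothing here
    # was hand-written as a prompt string.
    spec = ", ".join(f'"{f.name}": <{f.type.__name__}>' for f in fields(cls))
    return f"Return only a JSON object of the form {{{spec}}}."

print(schema_to_prompt(Proposal))
# Return only a JSON object of the form {"client": <str>, "budget": <float>, "timeline_weeks": <int>}.
```

Add a field to the dataclass and the instruction updates itself; there is no second artifact to keep in sync.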


Why this matters architecturally

In prompt-centric design, the prompt is the source of truth. It lives in a string, often scattered across the codebase, poorly version-controlled, and duplicated whenever a new model needs similar behavior.

In schema-centric design, the source of truth is the data model — the same artifact that drives your validation, your database schema, your API contracts. The LLM becomes a filling mechanism, not an oracle you interrogate with carefully crafted language.

When requirements change, you update the schema. The prompt adapts automatically. There's no string to hunt down across three files.

Malformed responses don't propagate — Pydantic catches them at the boundary before they reach your application logic. A new developer reading the codebase understands what the system produces just by reading the class definition, the same way they always have.
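A minimal sketch of that boundary, assuming Pydantic v2 (the `Proposal` fields are illustrative):

```python
from pydantic import BaseModel, ValidationError

class Proposal(BaseModel):
    client: str
    budget: float

# A malformed LLM response is rejected before it touches application logic.
try:
    Proposal.model_validate({"client": "Tesla", "budget": "lots"})
except ValidationError as exc:
    print(f"caught at the boundary: {exc.error_count()} error(s)")
```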

And because the schema is the interface, switching providers is a one-line environment change — not a prompt rewrite.


The analogy that makes it click

Think about what ORMs did for databases.

Before SQLAlchemy, you wrote SQL strings. You interpolated variables, concatenated conditions, and hoped the output was valid. The database schema lived in migration files; the queries lived in strings scattered across your application.

ORMs moved the source of truth to the Python class. The class is the contract. Queries become method calls. The database is still there, doing its job — but you stopped thinking in SQL and started thinking in objects.

exomodel does the same thing for LLMs.

The class is the contract. The LLM is still there, doing its job — but you stop thinking in prompt strings and start thinking in schemas.

from exomodel import ExoModel

class Proposal(ExoModel):
    client: str = ""
    project_title: str = ""
    budget: float = 0.0
    timeline_weeks: int = 0
    summary: str = ""

p = Proposal.create("Draft a proposal for Tesla — AI dashboard integration, 6 weeks")

No prompt string. No JSON parsing. No retry logic. The model knows what it needs to produce because you've already described it — in Python, with types, the way you always define data.


What this means for AI-native software

We're still in the early phase where most AI applications are built around the LLM call. The prompt is the center of gravity. Everything else orbits it.

The next phase looks different. The data model is the center of gravity. The LLM is infrastructure — powerful, essential, but abstracted away behind an interface that speaks the language of your application, not the language of tokens and completions.

This isn't a prediction. It's a design choice you can make today.

Define your schemas. Let the objects fill themselves.


How are you structuring LLM output in your projects? Drop a comment — I'm curious whether others are moving away from prompt-centric patterns.


Try it

pip install "exomodel[google]"

If exomodel saves you from writing a prompt string today, a ⭐ on GitHub goes a long way.
