Wan Satya

Building a Graph-Native Multi-Agent Runtime with Supabase Edge Functions

Most AI workflow builders today are still “chatbot chains.”

Linear.
Provider-locked.
Hardcoded.
Not designed for real multi-agent execution.

I wanted something different:

  • graph-native execution
  • recursive agent orchestration
  • BYOK (Bring Your Own Key)
  • multi-provider support
  • visual workflows
  • Supabase-native backend
  • portable JSON runtime

The result is an agent swarm runtime powered by:

  • React Flow
  • Supabase Edge Functions
  • TypeScript
  • recursive DAG execution
  • provider adapters


The Core Idea

Instead of hardcoding workflows into backend logic, the workflow itself becomes the runtime definition.

Example:

{
  "agents": [
    {
      "id": "ceo",
      "provider": "openrouter",
      "model": "anthropic/claude-sonnet-4",
      "instructionRef": "ceo.md"
    }
  ],

  "connections": [
    {
      "source": "ceo",
      "target": "market-agent"
    }
  ]
}
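A minimal TypeScript shape for this graph format could look like the sketch below. The type names (`AgentNode`, `Connection`, `WorkflowGraph`) are assumptions; the field names mirror the JSON example above.

```typescript
// Assumed type names; fields mirror the JSON example above.
interface AgentNode {
  id: string
  provider: string        // e.g. "openrouter"
  model: string           // e.g. "anthropic/claude-sonnet-4"
  instructionRef: string  // path to the agent's prompt, e.g. "ceo.md"
}

interface Connection {
  source: string  // upstream agent id
  target: string  // downstream agent id
}

interface WorkflowGraph {
  agents: AgentNode[]
  connections: Connection[]
}
```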

This graph defines:

  • execution topology
  • agent hierarchy
  • orchestration flow
  • provider routing

The backend simply executes the graph.


Why Graph-Native Matters

Most “AI agents” are still sequential pipelines:

Prompt A → Prompt B → Prompt C

But real collaborative reasoning looks more like this:

CEO
 ├── Market Research
 ├── Competitor Analysis
 │     ├── Pricing Worker
 │     └── Location Worker
 └── Regulation Analysis

This is a DAG (Directed Acyclic Graph).

That means:

  • workers can execute in parallel
  • parent nodes synthesize child outputs
  • workflows become composable
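One way to recover that hierarchy from the flat connections list is a parent-to-children adjacency map, which tells the executor what can run in parallel under each node. A small sketch, assuming connections shaped like the `{ source, target }` pairs from the JSON example:

```typescript
// Build a parent → children adjacency map from a flat connections list.
function buildChildMap(
  connections: { source: string; target: string }[]
): Map<string, string[]> {
  const children = new Map<string, string[]>()
  for (const { source, target } of connections) {
    const list = children.get(source) ?? []
    list.push(target)
    children.set(source, list)
  }
  return children
}
```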

Architecture

React Flow
    ↓
Workflow JSON
    ↓
Supabase Edge Function
    ↓
Graph Compiler
    ↓
Execution Scheduler
    ↓
Parallel Node Executors
    ↓
Provider Router
    ↓
LLM APIs

BYOK (Bring Your Own Key)

One important decision:

Users own their API keys.

Not us.

That changes everything.

Instead of becoming an inference reseller, the platform becomes:

  • orchestration infrastructure
  • execution runtime
  • agent operating system

Workflow JSON only stores:

{
  "credentialId": "openrouter-main"
}

Credentials are encrypted separately.

This allows:

  • OpenAI
  • Anthropic
  • Groq
  • Gemini
  • Ollama
  • OpenRouter
  • self-hosted endpoints

all inside the same workflow.
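Resolution of a `credentialId` into a usable key might look like this sketch. `CredentialStore` and `decrypt` are stand-ins for the real encrypted table and KMS/pgsodium-style decryption, not the project's actual API:

```typescript
// Sketch: the workflow carries only a credentialId; the encrypted key
// lives elsewhere and is decrypted at run time, scoped to the owner.
type CredentialStore = Map<string, { userId: string; encryptedKey: string }>

function resolveApiKey(
  store: CredentialStore,
  credentialId: string,
  userId: string,
  decrypt: (blob: string) => string  // stand-in for real decryption
): string {
  const record = store.get(credentialId)
  if (!record || record.userId !== userId) {
    throw new Error(`Unknown credential: ${credentialId}`)
  }
  return decrypt(record.encryptedKey)
}
```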


Recursive Execution

The runtime works recursively.

Each node:

  1. executes children first
  2. collects outputs
  3. synthesizes results
  4. returns upstream

Example:

async function executeNode(nodeId: string): Promise<string> {
  // Resolve the node from the compiled graph (lookup helper assumed)
  const node = graph.getNode(nodeId)

  // Run all children in parallel, then collect their outputs
  const childOutputs = await Promise.all(
    node.children.map(executeNode)
  )

  // Synthesize the child outputs into this node's result
  const output = await llm.generate({
    prompt: buildPrompt(node, childOutputs)
  })

  return output
}

This single pattern unlocks:

  • swarm reasoning
  • parallel execution
  • hierarchical synthesis

Why Supabase Edge Functions?

Because the architecture fits surprisingly well.

We use:

  • Edge Functions for execution
  • Postgres for workflow persistence
  • Realtime for live updates
  • RLS for ownership isolation

The result:

  • serverless execution
  • scalable orchestration
  • no dedicated infra initially
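The Edge Function entry point can stay very thin: parse the workflow JSON from the request and hand it to the graph runtime. A sketch, where `executeGraph` is an assumed runtime entry point; in a real Supabase Edge Function this handler would be passed to `Deno.serve`:

```typescript
// Thin HTTP wrapper around the graph runtime (executeGraph is assumed).
async function handleRequest(
  req: Request,
  executeGraph: (graph: unknown) => Promise<unknown>
): Promise<Response> {
  if (req.method !== "POST") {
    return new Response("Method Not Allowed", { status: 405 })
  }
  const graph = await req.json()
  const result = await executeGraph(graph)
  return new Response(JSON.stringify(result), {
    headers: { "Content-Type": "application/json" }
  })
}
```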

Provider Router

Every provider behaves differently.

Some support:

  • streaming
  • tools
  • JSON mode
  • reasoning tokens
  • vision

So the runtime uses adapters:

interface ProviderAdapter {
  generate(input): Promise<Output>
}

Adapters:

  • OpenAI
  • Anthropic
  • Groq
  • Gemini
  • OpenRouter
  • Ollama

The graph runtime doesn’t care which provider executes the node.
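A registry keyed by provider name is one way to implement that indifference. The sketch below restates a concrete version of the adapter interface so it stands alone; the input and output shapes are assumptions, not the project's actual types:

```typescript
// Concrete stand-in for the adapter interface above (shapes assumed).
interface GenerateInput {
  model: string
  prompt: string
}

interface ProviderAdapter {
  generate(input: GenerateInput): Promise<string>
}

const adapters = new Map<string, ProviderAdapter>()

// Resolve a node's provider string (e.g. "openrouter") to its adapter.
function getAdapter(provider: string): ProviderAdapter {
  const adapter = adapters.get(provider)
  if (!adapter) throw new Error(`No adapter registered for: ${provider}`)
  return adapter
}
```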


Parallelism Is The Superpower

This is where the system starts feeling alive.

These workers can execute simultaneously:

Pricing Worker
Location Worker
Fleet Worker
Market Worker

Then a higher-level agent synthesizes everything into strategy.

Latency drops dramatically compared to sequential chains.


Shared Memory Bus

Agents shouldn’t operate in isolation.

Each node can publish summaries into shared memory:

memory.push({
  nodeId,
  summary
})

Later agents can retrieve relevant context.

This creates emergent collaboration behavior.
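A tiny in-memory version of that bus is sketched below. A real version would persist per-run entries in Postgres, and the substring match is a stand-in for actual relevance retrieval:

```typescript
// Minimal shared memory bus: nodes publish summaries, later nodes query
// for relevant context. Substring match stands in for real retrieval.
interface MemoryEntry {
  nodeId: string
  summary: string
}

class MemoryBus {
  private entries: MemoryEntry[] = []

  push(entry: MemoryEntry): void {
    this.entries.push(entry)
  }

  query(term: string): MemoryEntry[] {
    return this.entries.filter(e => e.summary.includes(term))
  }
}
```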


Deterministic DAG vs Planner Mode

Most workflows do NOT need an orchestrator LLM.

For simple graphs:

execute children
then synthesize parent

is enough.

Planner agents are only useful for:

  • dynamic routing
  • retries
  • adaptive decomposition
  • auto-spawning agents

This keeps costs low.


What This Actually Becomes

The architecture starts looking less like “AI workflow builder” and more like:

Temporal + Kubernetes + Airflow
for AI agents

Where:

  • React Flow = visual programming
  • Supabase = orchestration backend
  • provider router = universal inference layer
  • workflows = portable execution graphs

Biggest Lesson

The moat is probably not:

  • prompts
  • models
  • UI

The moat is:

  • portable execution runtime
  • graph orchestration
  • provider neutrality
  • recursive multi-agent execution

The future AI stack may look less like “chat apps”
and more like distributed operating systems for agents.


Building CampShure

We’re building this architecture as part of CampShure — an AI-native platform for graph-based multi-agent workflows, swarm execution, and BYOK orchestration.

If you’re exploring:

  • agent infrastructure
  • visual orchestration
  • recursive AI systems
  • workflow runtimes
  • AI operating systems

we’d love to connect.

Top comments (1)

SteamPixel (edited):

Hey. I’m just going to follow you for a bit to learn exactly how you guys do things. To be honest, my own approach is quite different; I try to handle things like this almost entirely on the frontend. I really like the approach of using open JSON exports! Feel free to reach out if you’d like to know how I run a graph natively as a program. I think you guys have similar approaches.