Open Source

The Data Layer for Your AI Agents

Receive events from any source, filter noise, normalize formats, and deliver clean data to AI agents at scale. Route results to Slack, webhooks, or downstream systems.

View on GitHub · Get Started
agent.py
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/process")
async def process(request: Request):
    records = await request.json()         # pre-filtered, normalized
    results = await enrich(records)        # your AI logic
    gf.send_output(payload=results)        # GlassFlow routes the rest
    return {"status": "ok"}

This is your entire agent. GlassFlow handles everything else.

The Missing Layer in Your AI Stack

Every agent framework assumes data arrives as a function argument. In production, it arrives as OTLP telemetry, webhooks, streams, and events — continuously, from many sources, in formats your agent doesn’t control.

Agent Frameworks

LangChain, CrewAI, AutoGen

Help agents reason and use tools

Don't solve how data reaches the agent

Orchestrators

LangGraph

Coordinate multi-step reasoning

Assume data is already delivered in the right shape

Durable Execution

Temporal, Restate

Ensure workflows survive failures

The workflow still needs something to act on

All of these solve how agents think and act. None solve what agents act on — the continuous flow of real-world data.

How It Works

Four steps. No custom infrastructure code.

1

Data flows in

Your applications send events to GlassFlow via standard protocols — OTLP, HTTP/JSON, or webhooks. No SDK needed on the producer side.

2

GlassFlow filters and normalizes

Expression-based rules drop irrelevant events before they reach your agent. Transforms reshape fields so your agent gets exactly the structure it expects.

3

Your agent does its job

It receives a clean JSON batch via HTTP POST. No queue consumers, no retry logic, no parsing — just your AI logic. Any language, any framework.

4

Results flow out

Your agent returns results with one API call. GlassFlow routes them to Slack, webhooks, or downstream systems based on per-project configuration.
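To make step 1 concrete, here is a minimal sketch of a producer emitting one event to GlassFlow over plain HTTP/JSON, using only the standard library. The ingest URL, endpoint path, and auth header are illustrative assumptions — the real values come from your GlassFlow project configuration.

```python
import json
import urllib.request

# Hypothetical ingest endpoint and API key; the actual path and
# header name come from your GlassFlow project configuration.
GLASSFLOW_URL = "http://localhost:8080/v1/events"
API_KEY = "gf_demo_key"

def send_event(event: dict) -> urllib.request.Request:
    """Build an HTTP/JSON POST request carrying one event."""
    body = json.dumps(event).encode("utf-8")
    return urllib.request.Request(
        GLASSFLOW_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = send_event({"source": "checkout", "level": "error", "msg": "payment failed"})
print(req.get_method(), req.full_url)  # POST http://localhost:8080/v1/events
```

Because the producer side is plain HTTP, any language or service that can make a POST request can feed the pipeline — no SDK required.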

Built for Production AI Pipelines

Everything you need to deliver data to agents at scale.

No Ingestion Code

GlassFlow receives events from OTLP, HTTP/JSON, and webhooks. Your agent never touches raw data plumbing.

Expression-Based Filtering

Drop irrelevant events before they reach your agent, reducing noise and LLM cost with simple rules.
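GlassFlow evaluates these rules server-side in expr-lang; the snippet below only emulates the effect of one such rule in plain Python, on a made-up event shape, to show what "drop before the agent" means in practice.

```python
# Emulates a drop rule like:  level == "debug" || service != "checkout"
# (expr-lang syntax, evaluated inside GlassFlow, not in your agent).
# The event fields here are illustrative assumptions.

def should_drop(event: dict) -> bool:
    return event.get("level") == "debug" or event.get("service") != "checkout"

events = [
    {"service": "checkout", "level": "error", "msg": "card declined"},
    {"service": "checkout", "level": "debug", "msg": "cache miss"},
    {"service": "search",   "level": "error", "msg": "timeout"},
]

kept = [e for e in events if not should_drop(e)]
print(kept)  # only the checkout error survives
```

Every event dropped here is an event your agent never parses and your LLM never pays for.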

Format Normalization

Transforms reshape and rename fields so your agent gets exactly the structure it expects, regardless of source.
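A sketch of the kind of field renaming a transform performs before delivery. The mapping below is an illustrative assumption, not GlassFlow's actual transform syntax — it just shows the contract: the agent sees one schema no matter which source produced the event.

```python
# Hypothetical rename map: raw source field -> field the agent expects.
FIELD_MAP = {"msg": "message", "svc": "service", "ts": "timestamp"}

def normalize(raw: dict) -> dict:
    """Rename known fields, pass unknown ones through unchanged."""
    return {FIELD_MAP.get(k, k): v for k, v in raw.items()}

raw = {"svc": "checkout", "msg": "card declined", "ts": 1718000000}
print(normalize(raw))
# {'service': 'checkout', 'message': 'card declined', 'timestamp': 1718000000}
```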

Framework Agnostic

Works with LangChain, CrewAI, AutoGen, or plain HTTP endpoints. GlassFlow doesn't care what runs behind the URL.

Durable Streaming

Built on NATS JetStream with per-project isolation. No data is lost, even if your agent is temporarily down.
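The delivery guarantee can be pictured with a toy in-memory buffer: an event stays pending until the agent acknowledges it, so a temporarily-down agent loses nothing. This is only an emulation of the semantics — JetStream does this for real, with persistence on disk.

```python
from collections import deque

class DurableBuffer:
    """Toy at-least-once delivery: unacknowledged events are retained."""

    def __init__(self):
        self._pending = deque()

    def publish(self, event: dict) -> None:
        self._pending.append(event)

    def deliver(self, agent) -> None:
        """Offer each pending event; keep the ones the agent couldn't take."""
        still_pending = deque()
        while self._pending:
            event = self._pending.popleft()
            if not agent(event):           # agent down or failed: retain
                still_pending.append(event)
        self._pending = still_pending

buf = DurableBuffer()
buf.publish({"id": 1})
buf.deliver(lambda e: False)   # agent is down: event is retained
buf.deliver(lambda e: True)    # agent recovers: event delivered
print(len(buf._pending))       # 0
```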

Automatic Output Routing

Results dispatch to Slack, webhooks, or downstream agents automatically based on per-project configuration.
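The routing decision itself is simple to picture: each project maps to a set of sinks, and a result fans out to all of them. The sink names and config shape below are illustrative assumptions, not GlassFlow's configuration format.

```python
# Hypothetical per-project sink configuration.
PROJECT_SINKS = {
    "fraud-detection": ["slack", "webhook"],
    "log-triage": ["webhook"],
}

def route(project: str, result: dict) -> list[tuple[str, dict]]:
    """Return (sink, payload) pairs for every sink the project configures."""
    return [(sink, result) for sink in PROJECT_SINKS.get(project, [])]

deliveries = route("fraud-detection", {"verdict": "block", "score": 0.97})
print([sink for sink, _ in deliveries])  # ['slack', 'webhook']
```

Because routing lives in project configuration rather than agent code, the same agent can feed Slack today and a downstream system tomorrow without redeploying.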

Per-Project Pipelines

Isolated filter rules, transforms, agent endpoints, and sinks for each project. Multi-tenant by default.

Self-Hosted & Open Source

Deploy on your infrastructure with Helm charts. Full control over your data — nothing leaves your cluster.

Architecture

Separate control and data planes. Five stateless services connected by NATS JetStream.


  ┌────────────────────────────────────────────────────────────────────┐
  │  Data Sources                                                      │
  │  OTLP Telemetry · HTTP/JSON Events · Webhooks · APIs               │
  └──────────────────────────────┬─────────────────────────────────────┘
                                 │
                                 ▼
  ┌────────────────────────────────────────────────────────────────────┐
  │  Receiver                                                          │
  │  Validates API keys · Publishes to NATS raw stream                 │
  └──────────────────────────────┬─────────────────────────────────────┘
                                 │
                          NATS JetStream
                       (per-project streams)
                                 │
                                 ▼
  ┌────────────────────────────────────────────────────────────────────┐
  │  Pipeline                                                          │
  │  Filter (expr-lang) → Transform → Batch → Forward to Agent         │
  └──────────────────────────────┬─────────────────────────────────────┘
                                 │
                                 ▼
  ┌────────────────────────────────────────────────────────────────────┐
  │  Your AI Agent                                                     │
  │  LangChain · CrewAI · Custom code · Any HTTP endpoint              │
  └──────────────────────────────┬─────────────────────────────────────┘
                                 │
                          NATS JetStream
                        (output streams)
                                 │
                                 ▼
  ┌────────────────────────────────────────────────────────────────────┐
  │  Sink                                                              │
  │  Slack · Webhooks · Downstream systems                             │
  └────────────────────────────────────────────────────────────────────┘
Backend: Go
Frontend: Next.js
Streaming: NATS JetStream
Database: PostgreSQL
Deploy: Helm / Docker Compose
License: Open Source

Stop building data plumbing. Start shipping agents.

Open source. Self-hosted. Deploy with Docker Compose in minutes or Helm on Kubernetes.

View on GitHub · Read the Docs
Quick Start
git clone https://github.com/glassflow/glassflow-ai-runtime.git
cd glassflow-ai-runtime
docker compose up -d