fluxaOS

An OS for AI workflows

Configure. Orchestrate. Observe. — AI pipelines that run the way you designed them.

Everything you need to orchestrate AI

A config-driven engine that puts the pieces together — whatever those pieces are.

Pipeline orchestration

Multi-stage pipelines with configurable stages, retry logic, and sequential execution. Define the workflow once, run it on every issue.
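The stage-plus-retry model can be pictured with a short sketch. This is illustrative only, not fluxaOS code: the names (`Stage`, `run_pipeline`) and the retry policy are hypothetical, chosen to show sequential execution with per-stage retries.

```python
# Illustrative sketch, not fluxaOS internals: a multi-stage pipeline run
# sequentially, where each stage retries up to its configured limit.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    run: Callable[[dict], dict]   # takes a context dict, returns the updated context
    max_retries: int = 2          # hypothetical per-stage retry limit

def run_pipeline(stages: list[Stage], context: dict) -> dict:
    """Run stages in order; a failing stage is retried, then the run aborts."""
    for stage in stages:
        for attempt in range(stage.max_retries + 1):
            try:
                context = stage.run(context)
                break
            except Exception:
                if attempt == stage.max_retries:
                    raise
    return context

# Usage: a two-stage pipeline whose second stage succeeds on its first retry.
calls = {"n": 0}

def flaky(ctx):
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient failure")
    return {**ctx, "done": True}

result = run_pipeline(
    [Stage("draft", lambda c: {**c, "draft": "ok"}), Stage("review", flaky)],
    {},
)
```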

Provider-agnostic routing

Route to any AI provider — Anthropic, OpenAI, Ollama — via config, not code changes. Fallback chains handle failures automatically.
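A fallback chain in this sense can be sketched in a few lines. The sketch below is an assumption-laden illustration, not the fluxaOS router: providers are tried in configured order and the first success wins.

```python
# Hedged sketch of a provider fallback chain (not fluxaOS code): try each
# configured provider in order; return the first successful response.
from typing import Callable

def route(providers: list[tuple[str, Callable[[str], str]]], prompt: str) -> tuple[str, str]:
    """Return (provider_name, response) from the first provider that succeeds."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

# Usage with stub providers: the first is unreachable, the second answers.
def failing(prompt):
    raise ConnectionError("unreachable")

def ollama_stub(prompt):
    return f"echo: {prompt}"

name, reply = route([("anthropic", failing), ("ollama", ollama_stub)], "hi")
```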

Gate-controlled quality

A rules engine evaluates conditions between stages. Auto-approve, hold for human review, rework, or abort — per stage, per rule.
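The gate idea reduces to a small rule-evaluation loop. The following is a minimal sketch under assumed rule and action names, not the actual fluxaOS rules engine: each rule pairs a condition with one of the actions the text names.

```python
# Illustrative only: evaluate gate rules between stages and map the first
# matching condition to an action (approve, hold, rework, or abort).
def evaluate_gate(rules, stage_output: dict) -> str:
    """Return the action of the first matching rule; auto-approve by default."""
    for condition, action in rules:
        if condition(stage_output):
            return action
    return "approve"

# Hypothetical per-stage rules: abort on errors, rework low-confidence
# output, hold expensive runs for human review.
rules = [
    (lambda out: out.get("error"), "abort"),
    (lambda out: out.get("confidence", 1.0) < 0.5, "rework"),
    (lambda out: out.get("cost_usd", 0.0) > 1.0, "hold"),
]

action = evaluate_gate(rules, {"confidence": 0.3})
```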

Configurable personas

Define agent personalities, skills, and routing rules. Scope them globally or per-project with inheritance, forking, and overrides.
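Scoped inheritance with overrides can be modeled as a merge. This sketch is hypothetical (field names and merge semantics are assumptions, not fluxaOS behavior): a project persona inherits from a global one, its own fields win, and skill lists combine.

```python
# Hypothetical sketch of persona scoping: project-level fields override the
# inherited global persona; skills are unioned rather than replaced.
def resolve_persona(global_persona: dict, project_overrides: dict) -> dict:
    """Shallow merge where project values win; skill lists are unioned."""
    merged = {**global_persona, **project_overrides}
    merged["skills"] = sorted(
        set(global_persona.get("skills", [])) | set(project_overrides.get("skills", []))
    )
    return merged

# Usage: a project forks the global "reviewer" persona and softens its tone.
global_reviewer = {"name": "reviewer", "tone": "strict", "skills": ["lint", "security"]}
project_reviewer = resolve_persona(global_reviewer, {"tone": "friendly", "skills": ["docs"]})
```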

Real-time observability

Stream every stage output live. Track tokens, costs, and success rates across all runs. Event-sourced from the ground up.
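"Event-sourced" here means metrics are derived from an append-only event log rather than mutated in place. A tiny sketch with made-up event shapes (not the fluxaOS schema) shows the idea:

```python
# Sketch under assumed event names: token totals and success rates are
# computed by folding over an append-only event log.
events = [
    {"run": "r1", "type": "stage_completed", "tokens": 1200},
    {"run": "r1", "type": "run_succeeded"},
    {"run": "r2", "type": "stage_completed", "tokens": 800},
    {"run": "r2", "type": "run_failed"},
]

total_tokens = sum(e.get("tokens", 0) for e in events)
finished = [e for e in events if e["type"] in ("run_succeeded", "run_failed")]
success_rate = sum(e["type"] == "run_succeeded" for e in finished) / len(finished)
```

Because the log is the source of truth, any new metric can be computed retroactively over past runs.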

Self-hosted & open source

Docker Compose deployment. Your data stays on your infrastructure. AGPLv3 licensed — inspect, modify, and contribute.

How it works

Three steps from configuration to insight.

01

Configure

Define pipelines, personas, skills, and routing rules through the web UI or CLI. Everything is stored in the database — no config files to sync.

02

Orchestrate

fluxaOS routes work to the right provider, materializes skills to the workspace, executes stages, and evaluates gates between them.

03

Observe

Watch runs stream in real time. Track costs per provider, per model, per project. Measure outcomes and iterate on your configuration.

Open source. Self-hosted. Yours.

Get running in minutes with Docker Compose.

git clone https://github.com/fluxaOS/fluxaos.git
cd fluxaos
cp .env.example .env
docker compose up