For investors

The trust layer
for enterprise AI.

Businesses want AI agents. They can’t deploy them without governance. LumenFlow makes every AI action auditable, cost-tracked, and traceable — so enterprises say yes instead of “not yet.”

01 Why now

AI agents are ready.
Trust infrastructure isn’t.

Foundation models crossed the capability threshold in 2025. Every enterprise wants autonomous agents handling email, scheduling, procurement, and customer support. But CISOs, compliance teams, and CFOs block deployment without answers to three questions:

What did the AI do, and can we prove it?
What did it cost, and who approved it?
Can we trace and investigate it if something goes wrong?

LumenFlow answers all three. We are the governance layer that turns “not yet” into “ship it.”

02 Product

One governed platform.

Sidekick

The AI agent that manages email, schedules meetings, runs workflows, and executes tool calls — all with per-action audit trails and real-time cost tracking. Three operator modes let users choose their level of control: supervised, guided, or fully autonomous.

Ship → Prove → Trust

Governance Engine

The control plane underneath. Spend limits, action approvals, fleet management, action logging, workspace-level isolation, and a control-plane SDK for governed external agents. Every AI action flows through governance before it executes.

Govern → Audit → Scale

03 Shipped

Not a pitch. A product.

LumenFlow is live in beta with real users. Here is what’s in production today.

Conversational AI agent

Multi-turn chat with tool execution, context memory, and streaming responses via MCP-native connectivity.

Action logging

Tool calls logged with timestamp, tool name, action type, and detail. Per-action cost tracking with model and token breakdown.

Per-action cost tracking

Real-time spend visibility per conversation, per tool call, per model. Budget controls at the workspace level.

Managed inference & BYOK

Start with managed inference for zero-config setup, or connect your own model API keys for full provider choice. Keys encrypted at rest with AES-256.

Three operator modes

Supervised (approve every action), guided (auto-approve routine actions, flag anomalies for review), and autonomous (full autonomy within guardrails).
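The three modes above reduce to a simple routing decision. A minimal sketch, assuming hypothetical names (`decide`, `is_routine`) that are illustrative only, not the actual LumenFlow API:

```python
# Illustrative sketch of the three operator modes.
# Function and parameter names are hypothetical, not the LumenFlow API.

def decide(mode: str, is_routine: bool) -> str:
    """Return how an action is handled under each operator mode."""
    if mode == "supervised":
        return "needs_approval"  # approve everything
    if mode == "guided":
        # auto-approve routine actions, flag anomalies for human review
        return "auto" if is_routine else "needs_approval"
    if mode == "autonomous":
        return "auto"  # full autonomy within guardrails
    raise ValueError(f"unknown mode: {mode}")
```

The point of the sketch: the mode changes only who approves, never whether the action is logged and cost-tracked.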

Open-source kernel

AGPL-licensed governance kernel. Auditable by anyone. Commercial licence available for enterprises that need it.

04 Thesis

Governance becomes the platform.

Every previous computing wave created a governance layer that became a platform: AWS for cloud, Okta for identity, Datadog for observability. AI agents are the next wave, and the governance layer doesn’t exist yet.

LumenFlow is building it. We sit between the AI models and the business systems they touch. Every action flows through us. That position compounds: more actions mean more data, more data means better policies, better policies mean more trust, and more trust means more actions.

The company that governs AI actions sits at the most strategic position in the enterprise stack.

05 Traction

Early stage. Real product.

We’re pre-revenue and honest about it. What we do have is a production system that works.

Live in beta

Real users running real workflows with full governance. Every action audited and cost-tracked since day one.

Organic adoption

Zero paid acquisition. Users find us through word of mouth and stay because the product speaks for itself.

Production infrastructure

AES-256 at rest, TLS 1.3 in transit. Bring-your-own model keys. Workspace isolation. Not a prototype.

Shipping weekly

Trunk-based development with automated quality gates. Production features shipping every week.

06 Market

AI governance is a new category.

No one owns AI agent governance yet. The window is 6–12 months before incumbents bolt it on or the category commoditises.

Complementary, not competitive

Agent frameworks like LangChain and CrewAI build agents. We govern them. Agents built with any framework run inside LumenFlow. We complete the stack, not compete with it.

No direct incumbent

Cloud providers offer model access. Observability tools watch logs. Nobody governs AI actions at the execution layer with audit trails, cost tracking, and policy enforcement in one kernel.

Category-defining window

Enterprise AI adoption is accelerating faster than governance tooling. The company that establishes the trust standard now will define the category. Detailed market sizing available in our investor deck.

07 Moat

Four compounding advantages.

Governance data flywheel

Every governed action generates policy training data. More usage means smarter defaults, fewer false positives, and higher trust — creating a compounding moat that competitors cannot replicate without equivalent action volume.

Position in the action path

We sit between the model and the business system. Every AI action flows through LumenFlow before it executes. Switching costs increase with every policy configured and every audit trail accumulated.

Per-action cost visibility

Every AI action is cost-tracked at the provider, model, and token level. As usage grows, this data enables spend optimisation insights that single-provider tools cannot offer.

Open-source trust

The governance kernel is AGPL-licensed and auditable by anyone. Enterprises that need to verify what governs their AI actions can read every line. Trust through transparency, not marketing.

08 Business model

Usage-based with platform leverage.

Three revenue streams: hosted control plane subscriptions (per-workspace, per-action metering), commercial kernel licences for enterprises that can’t use AGPL, and enterprise support contracts with SLA guarantees.

Free — $0

1 workspace, 1K governed events/day, 7-day audit retention. $1 managed inference credit included. Full governance features. Built to earn trust.

Team — $49/mo

Up to 5 workspaces, 50K events/day, 90-day retention. Metered pay-as-you-go inference, priority support, bring-your-own model key.

Enterprise — custom

Unlimited workspaces and events, 365-day retention. Volume discounts, dedicated support, audit exports, enterprise governance controls, evidence workflows. SSO planned.

09 Technical edge

Hard-to-replicate infrastructure.

The governance kernel contains technical decisions that compound over time. Each one takes months to build correctly.

Workspace-scoped policies

Shipped

Autonomy policies, spend limits, and approval requirements configured per workspace. Fail-closed enforcement for tools requiring human approval.
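Fail-closed enforcement means the default answer is "no": an action on a gated tool runs only after an explicit recorded approval. A minimal sketch under assumed names (`policy_for`, `allowed`, `APPROVALS` are hypothetical, not the kernel's real schema):

```python
# Hypothetical sketch of per-workspace, fail-closed policy enforcement.
# All names and fields are illustrative, not the LumenFlow kernel API.

APPROVALS = {}  # (workspace_id, action_id) -> True once a human approves

def policy_for(workspace_id: str) -> dict:
    # Per-workspace configuration: spend limit plus tools gated on approval.
    return {"spend_limit_usd": 10.0, "approval_tools": {"wire_transfer"}}

def allowed(workspace_id: str, action_id: str, tool: str,
            cost_usd: float, spent_usd: float) -> bool:
    policy = policy_for(workspace_id)
    if spent_usd + cost_usd > policy["spend_limit_usd"]:
        return False  # hard spend limit, no exceptions
    if tool in policy["approval_tools"]:
        # Fail closed: with no recorded approval, the action is blocked.
        return APPROVALS.get((workspace_id, action_id), False)
    return True
```

The design choice worth noting: a missing or lost approval record blocks execution rather than permitting it.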

Evidence receipts

Shipped

Structured records of what each AI action did, what it cost, and what model was used. Timestamped and stored alongside the action log.
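One possible shape for such a receipt, sketched below; the field names are an assumption for illustration, not the production schema:

```python
# Illustrative evidence-receipt shape; the real schema may differ.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class Receipt:
    action: str      # what the AI action did
    model: str       # which model was used
    cost_usd: float  # what it cost
    timestamp: str   # when it happened (ISO 8601, UTC)

def make_receipt(action: str, model: str, cost_usd: float) -> dict:
    ts = datetime.now(timezone.utc).isoformat()
    return asdict(Receipt(action, model, cost_usd, ts))
```

Each receipt is immutable once written and stored alongside the action log, so an auditor can answer "what happened, what did it cost, which model" from one record.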

Per-action cost tracking

Shipped

Every tool call and model invocation tracked at the token level with provider, model, and cost breakdown. Real-time spend visibility.
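The breakdown above amounts to pricing each call from its token counts. A sketch with made-up provider, model, and per-token prices (none of these figures are real rates):

```python
# Illustrative token-level cost breakdown; prices are hypothetical examples,
# not actual provider rates.

PRICE_PER_1K = {  # USD per 1,000 tokens (made-up figures)
    ("example-provider", "example-model"): {"input": 0.003, "output": 0.015},
}

def action_cost(provider: str, model: str,
                input_tokens: int, output_tokens: int) -> dict:
    p = PRICE_PER_1K[(provider, model)]
    cost_in = input_tokens / 1000 * p["input"]
    cost_out = output_tokens / 1000 * p["output"]
    return {
        "provider": provider,
        "model": model,
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "cost_usd": round(cost_in + cost_out, 6),
    }
```

Recording provider, model, and token counts per action is what lets spend roll up per conversation, per tool call, or per model later.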

Workspace isolation (RLS)

Shipped

Row-level security across all tables. Each workspace is a hard trust boundary enforced at the database layer.

MCP-native connectivity

Built on an open standard

First-class Model Context Protocol support. Any MCP-compatible tool works inside LumenFlow governance without custom integration.

Open-source kernel

AGPL-licensed

The governance kernel is source-available and auditable. Enterprises can verify what governs their AI actions by reading the code.

Let’s talk

Interested in the future of governed AI?

We’re raising our seed round. Request our investor deck or schedule a conversation.