Workspace AI source: managed inference vs BYOK

Every workspace has a single AI source that decides who pays for and operates the model that powers Sidekick. New workspaces default to managed inference (LumenFlow runs the model). You can switch to BYOK to point Sidekick at your own provider key. This is workspace-level — it's separate from per-conversation model selection.

Two sources, one decision per workspace#

| Source | Who runs the model | Who pays | When to use |
| --- | --- | --- | --- |
| Managed inference | LumenFlow | LumenFlow (passed through your plan) | Default for new workspaces; fastest setup |
| Bring your own key (BYOK) | Your provider account | You, directly | Compliance, custom routing, or you already have credit |

Switch in Sidekick → Settings → Workspace AI source (owner or admin only).

Managed inference#

LumenFlow operates the model on your behalf using the routing your plan covers. You don't configure providers, keys, or models — Sidekick just works. Managed inference is the default for new workspaces and the fastest way to get from sign-up to first action.

Bring your own key (BYOK)#

In BYOK mode you provide a provider API key (Anthropic, OpenAI, or other supported providers) and choose which model to use. Sidekick routes every workspace request through your key. You see the provider's invoicing directly; LumenFlow doesn't mark up the model spend.

Set keys in Sidekick → Settings → BYOK.
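For illustration only, a BYOK configuration carries roughly this information: a provider, a key you supply, and a model choice. Every name below (`provider`, `api_key_env`, `default_model`, `validate`) is a hypothetical sketch, not the product's actual settings schema; the real fields live in Sidekick → Settings → BYOK.

```python
# Hypothetical shape of a BYOK configuration -- field names are assumptions,
# not Sidekick's actual schema.
byok_settings = {
    "provider": "anthropic",             # or "openai", or another supported provider
    "api_key_env": "ANTHROPIC_API_KEY",  # the key is yours; spend is billed to you
    "default_model": "claude-sonnet",    # model Sidekick routes workspace requests to
}

def validate(settings: dict) -> None:
    """Reject a config that is missing any of the three required fields."""
    required = {"provider", "api_key_env", "default_model"}
    missing = required - settings.keys()
    if missing:
        raise ValueError(f"BYOK settings missing: {sorted(missing)}")

validate(byok_settings)  # passes: all three fields are present
```

The point of the sketch is the division of responsibility: you supply the key and pick the model, and the provider invoices you directly.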

Workspace-level vs per-conversation#

The AI source is workspace-wide — it controls who pays and which provider is used. Per-conversation model selection (Haiku for cheap fast turns, Opus for hard reasoning) happens within whichever source you've chosen and doesn't change the source itself.

Migration#

Switching sources doesn't replay history. New conversations use the new source; existing conversations and their evidence stay attached to the source that ran them.
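The migration rule amounts to pinning the source at conversation start. A minimal sketch, with all names (`workspace`, `start_conversation`) assumed:

```python
# A conversation records the source active when it was created; switching the
# workspace source only affects conversations started afterwards.
workspace = {"ai_source": "managed"}
conversations = []

def start_conversation() -> None:
    conversations.append({"source": workspace["ai_source"]})

start_conversation()                # runs under managed inference
workspace["ai_source"] = "byok"     # admin switches the workspace source
start_conversation()                # runs under BYOK

sources = [c["source"] for c in conversations]  # ["managed", "byok"]
```

Nothing retroactive happens to the first conversation: history is not replayed under the new source.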

Info: See *Model config providers* for the supported provider list and *BYOK setup* for key handling.