Two sources, one decision per workspace#
| Source | Who runs the model | Who pays | When to use |
|---|---|---|---|
| Managed inference | LumenFlow | LumenFlow (passed through your plan) | Default for new workspaces; fastest setup |
| Bring your own key (BYOK) | Your provider account | You, directly | Compliance, custom routing, or you already have credit |
Switch in Sidekick → Settings → Workspace AI source (owner or admin only).
Managed inference#
LumenFlow operates the model on your behalf using the routing your plan covers. You don't configure providers, keys, or models — Sidekick just works. Managed inference is the default for new workspaces and the fastest way to get from sign-up to first action.
Bring your own key (BYOK)#
In BYOK mode you provide a provider API key (Anthropic, OpenAI, or other supported providers) and choose which model to use. Sidekick routes every workspace request through your key. The provider bills you directly; LumenFlow adds no markup on model spend.
Set keys in Sidekick → Settings → BYOK.
Workspace-level vs per-conversation#
The AI source is workspace-wide — it controls who pays and which provider is used. Per-conversation model selection (e.g. Haiku for cheap, fast turns; Opus for hard reasoning) happens within whichever source you've chosen and doesn't change the source itself.
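The two levels of configuration can be sketched as follows; the function name and dict shape are assumptions for illustration, not Sidekick's actual API:

```python
# Workspace-level: fixes the source (provider and billing) for everyone.
WORKSPACE_SOURCE = {"mode": "byok", "provider": "anthropic"}

def resolve_request(conversation_model: str) -> dict:
    # Per-conversation: picks a model, but cannot override the
    # workspace source — every request routes through the same
    # provider and bills the same account.
    return {**WORKSPACE_SOURCE, "model": conversation_model}

fast = resolve_request("haiku")  # cheap, fast turns
hard = resolve_request("opus")   # hard reasoning
# Both share mode and provider; only "model" differs.
```

Switching models between conversations never changes who pays or which provider key is used; only the workspace setting does that.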
Migration#
Switching sources doesn't replay history. New conversations use the new source; existing conversations and their evidence stay attached to the source that ran them.
info See Model config providers for the supported provider list and BYOK setup for key handling.