Model Configuration Overview

Use managed inference by default, or connect your own model keys for advanced control.

Managed first

LumenFlow workspaces can start on managed inference. That is the default path for trying Sidekick quickly: no provider account, no API keys, and no model routing decisions required up front.

Bring your own model key

When you need a specific provider or model family, or you want usage billed directly under your existing provider agreement, you can bring your own model key (BYOK).

| Option | Best for |
| --- | --- |
| Managed inference | Fastest setup, zero-config evaluation, simple operator onboarding |
| BYOK | Provider choice, direct provider billing, internal model policy requirements |
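The two options above can be thought of as one settings record with a mode switch. The sketch below is illustrative only: the `ModelConfig` fields and `validate` helper are assumptions for this example, not LumenFlow's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelConfig:
    """Illustrative shape of a workspace model setting (hypothetical fields)."""
    mode: str                       # "managed" or "byok"
    provider: Optional[str] = None  # e.g. "openai", "anthropic" (BYOK only)
    api_key: Optional[str] = None   # stored encrypted at rest (BYOK only)

def validate(cfg: ModelConfig) -> None:
    # Managed inference needs no provider details at all
    if cfg.mode == "managed":
        return
    # BYOK requires both a provider and a key
    if cfg.mode == "byok":
        if not cfg.provider or not cfg.api_key:
            raise ValueError("BYOK config requires provider and api_key")
        return
    raise ValueError(f"unknown mode: {cfg.mode}")

validate(ModelConfig(mode="managed"))  # zero-config path: nothing else required
validate(ModelConfig(mode="byok", provider="openai", api_key="sk-example"))
```

The design point the table makes is visible here: managed mode is valid with no extra fields, while BYOK is only valid once provider details are supplied.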

How routing works

  1. Workspaces with a managed-inference entitlement can use it immediately
  2. You can add BYOK provider settings under Settings → Model Configuration
  3. When a BYOK configuration is present, LumenFlow routes requests through that provider
  4. Billing and usage surfaces still show the governed action and its cost, whichever route is used
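The routing precedence above can be sketched as a small resolver. The `workspace` keys (`byok`, `managed_entitled`) and the returned route strings are hypothetical names for this example, not LumenFlow internals.

```python
def resolve_route(workspace: dict) -> str:
    """Pick the inference route for a workspace (illustrative names)."""
    byok = workspace.get("byok")
    if byok:
        # Step 3: a present BYOK config wins; route through that provider
        return f"byok:{byok['provider']}"
    if workspace.get("managed_entitled", True):
        # Step 1: otherwise fall back to managed inference when entitled
        return "managed"
    raise RuntimeError("no inference route available for this workspace")

print(resolve_route({}))                                   # managed
print(resolve_route({"byok": {"provider": "anthropic"}}))  # byok:anthropic
```

Either way the call resolves, usage is still recorded against the workspace (step 4); the route only changes which provider serves the request.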

When to choose BYOK

  • you need a provider that LumenFlow does not host for you
  • you want usage billed directly to OpenAI, Anthropic, or another provider
  • you have internal policy requirements around approved model vendors

Note: Your API keys are encrypted at rest with workspace-specific protection. They are never logged, cached, or sent to any service other than your chosen provider.