Common Issues

Solutions for the most frequently encountered problems in LumenFlow.

"Model configuration required"

Cause: Managed inference is unavailable for your workspace, and no bring-your-own-model API key has been configured.

Fix:

  1. Ask a workspace admin to enable managed inference, if your plan allows it
  2. Alternatively, configure your own key: go to Settings → Model Configuration
  3. Select a provider (OpenAI, Anthropic, or Google)
  4. Paste your API key and save the workspace configuration

"Connection expired"

Cause: The OAuth token for a connected service has expired.

Fix:

  1. Go to Settings → Connections
  2. Find the service showing "Expired"
  3. Click Reconnect
  4. Re-authorize in the OAuth flow
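If you maintain an integration, you can detect an expiring token before calls start failing. A minimal sketch, assuming the token's `expires_at` Unix timestamp is available to you (the helper name is hypothetical, not part of LumenFlow's SDK):

```python
from datetime import datetime, timezone

def needs_reconnect(expires_at: float, skew_seconds: int = 300) -> bool:
    """Return True if an OAuth token is expired, or will expire within
    `skew_seconds` (a safety margin for clock skew and in-flight requests)."""
    now = datetime.now(timezone.utc).timestamp()
    return now >= expires_at - skew_seconds
```

Checking ahead of time lets you prompt users to reconnect proactively instead of surfacing a failed request.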

"Action blocked by governance"

Cause: A governance rule is preventing the action.

Fix:

  1. Check Settings → Governance for matching rules
  2. Either modify the rule or approve the specific action manually
  3. If you're not an Admin/Owner, ask your workspace admin

"Budget exceeded"

Cause: Your workspace or user budget has been reached.

Fix:

  1. Go to Settings → Billing → Budgets
  2. Increase the monthly budget or wait for the next cycle
  3. Consider switching to a model with lower per-token costs
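To see how model choice affects spend, you can estimate cost per request from per-1K-token prices. A rough sketch; the prices below are placeholders for illustration, not LumenFlow's actual rates:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_1k: float, out_price_per_1k: float) -> float:
    """Estimate the dollar cost of one request from per-1K-token prices."""
    return (input_tokens / 1000) * in_price_per_1k \
         + (output_tokens / 1000) * out_price_per_1k

# Hypothetical prices: a large model at $0.01/$0.03 per 1K tokens (in/out)
# versus a smaller one at $0.001/$0.002.
large = request_cost(2000, 500, 0.01, 0.03)    # 0.02 + 0.015 = 0.035
small = request_cost(2000, 500, 0.001, 0.002)  # 0.002 + 0.001 = 0.003
```

Even a rough estimate like this makes it easy to compare models before raising the budget.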

"Rate limited"

Cause: Too many API requests in a short period.

Fix:

  1. Wait for the duration specified in the Retry-After response header
  2. Reduce request frequency in your integration
  3. Consider upgrading your plan for higher limits
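Step 1 can be automated in an integration. Per RFC 9110, Retry-After carries either delta-seconds or an HTTP-date, so a client should handle both forms. A small helper (a sketch, not part of LumenFlow's SDK):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
from typing import Optional

def retry_after_seconds(header: str, now: Optional[datetime] = None) -> float:
    """Parse a Retry-After header value into seconds to wait.

    Accepts delta-seconds ("120") or an HTTP-date
    ("Wed, 21 Oct 2026 07:28:00 GMT"); never returns a negative wait.
    """
    now = now or datetime.now(timezone.utc)
    try:
        return max(0.0, float(header))        # delta-seconds form
    except ValueError:
        target = parsedate_to_datetime(header)  # HTTP-date form
        return max(0.0, (target - now).total_seconds())
```

For example, `retry_after_seconds("120")` returns `120.0`; sleeping for that duration before retrying avoids hammering the API while you are throttled.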

Sidekick not responding

Cause: Several possibilities; work through the checklist below.

Checklist:

  1. Check your LLM provider's status page
  2. Verify your API key is valid in Settings → Model Configuration
  3. Check that your token budget hasn't been exceeded (see "Budget exceeded" above)
  4. Try a new conversation (the current context may be too large)
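For step 4, a quick way to sanity-check conversation size is the common rough heuristic of about four characters per token for English text. This is an approximation, not LumenFlow's tokenizer, and the default limit below is a placeholder you should replace with your model's actual context window:

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def likely_over_limit(messages: list, limit: int = 128_000) -> bool:
    """Flag a conversation whose estimated token count exceeds the model's
    context window (limit is a placeholder; check your model's docs)."""
    return sum(approx_tokens(m) for m in messages) > limit
```

If the estimate is anywhere near the context window, starting a new conversation is the quickest fix.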

Info: Most issues can be diagnosed from the activity feed in Observe → Activity. Check for error events around the time the issue occurred.