## "Model configuration required"
Cause: Managed inference is unavailable for the workspace, and no bring-your-own-model API key is configured.
Fix:
- Ask a workspace admin to enable managed inference if your plan allows it
- Or go to Settings → Model Configuration
- Select a provider (OpenAI, Anthropic, or Google)
- Paste your API key and save the workspace config
## "Connection expired"
Cause: The OAuth token for a connected service has expired.
Fix:
- Go to Settings → Connections
- Find the service showing "Expired"
- Click Reconnect
- Re-authorize in the OAuth flow
## "Action blocked by governance"
Cause: A governance rule is preventing the action.
Fix:
- Check Settings → Governance for matching rules
- Either modify the rule or approve the specific action manually
- If you're not an Admin/Owner, ask your workspace admin
## "Budget exceeded"
Cause: Your workspace or user budget has been reached.
Fix:
- Go to Settings → Billing → Budgets
- Increase the monthly budget or wait for the next cycle
- Consider switching to a model with lower per-token costs
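To see how much model choice affects spend, a quick back-of-the-envelope estimate helps. The per-million-token prices and monthly volume below are illustrative placeholders, not real provider rates:

```python
def monthly_cost(tokens_per_month: int, price_per_million: float) -> float:
    """Estimated monthly spend in dollars for a given token volume."""
    return tokens_per_month / 1_000_000 * price_per_million

# Comparing two hypothetical price points at 50M tokens/month:
print(f"premium model: ${monthly_cost(50_000_000, 15.00):.2f}")
print(f"budget model:  ${monthly_cost(50_000_000, 0.50):.2f}")
```

At a fixed token volume, cost scales linearly with the per-token price, so a cheaper model directly stretches the same budget further.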
## "Rate limited"
Cause: Too many API requests in a short period.
Fix:
- Wait for the duration given in the Retry-After header
- Reduce request frequency in your integration
- Consider upgrading your plan for higher limits
## Sidekick not responding
Cause: Several possible issues can produce this symptom.
Checklist:
- Check your LLM provider's status page
- Verify your API key is valid in Settings → Model Configuration
- Check that your token budget hasn't been exceeded
- Try a new conversation (the current context may be too large)
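To gauge whether context size is the problem, a rough heuristic of about four characters per token works for English prose. Exact counts depend on the model's tokenizer, and the 100,000-token limit below is a hypothetical example, not a documented product limit:

```python
def approx_tokens(text: str) -> int:
    """Very rough token estimate, assuming ~4 characters per token."""
    return max(1, len(text) // 4)

# Simulated long transcript: ~480k characters, ~120k estimated tokens.
transcript = "user: hello\nassistant: hi there\n" * 15_000
if approx_tokens(transcript) > 100_000:  # hypothetical context limit
    print("Context is likely too large; start a new conversation.")
```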
> Info: Most issues can be diagnosed from the activity feed in Observe → Activity. Check for error events around the time the issue occurred.