## Supported providers
LumenFlow supports the following LLM providers:
### OpenAI
| Model | Context (tokens) | Best for |
|---|---|---|
| GPT-4o | 128K | General tasks, coding, analysis |
| GPT-4o mini | 128K | Fast responses, simple tasks |
### Anthropic
| Model | Context (tokens) | Best for |
|---|---|---|
| Claude Opus 4 | 200K | Complex reasoning, long documents |
| Claude Sonnet 4 | 200K | Balanced speed and quality |
### Google
| Model | Context (tokens) | Best for |
|---|---|---|
| Gemini 2.5 Pro | 1M | Very long documents, multimodal |
| Gemini 2.5 Flash | 1M | Fast, cost-effective tasks |
## Choosing a provider
Weigh these factors when picking a model:
- Task complexity — advanced reasoning favors larger models
- Speed — smaller models respond faster
- Cost — check your provider's pricing page
- Context needs — long documents need large context windows
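The context-needs factor can be sketched as a small helper. The model names and context windows below mirror the tables above; the data structure and function are an illustrative example, not part of the LumenFlow API.

```python
# Illustrative sketch only: the table data comes from the provider tables
# above, but MODELS and models_for_context are hypothetical helpers, not
# LumenFlow functionality.

MODELS = [
    # (provider, model, context window in tokens)
    ("OpenAI", "GPT-4o", 128_000),
    ("OpenAI", "GPT-4o mini", 128_000),
    ("Anthropic", "Claude Opus 4", 200_000),
    ("Anthropic", "Claude Sonnet 4", 200_000),
    ("Google", "Gemini 2.5 Pro", 1_000_000),
    ("Google", "Gemini 2.5 Flash", 1_000_000),
]

def models_for_context(required_tokens: int) -> list[str]:
    """Return models whose context window fits the required token count."""
    return [model for _, model, window in MODELS if window >= required_tokens]

# A 500K-token document rules out everything but the Gemini models.
print(models_for_context(500_000))
```

The same shape extends naturally to the other factors (speed tier, cost per token) as extra tuple fields.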
## Provider-specific features
Some capabilities depend on the provider:
- Vision — image understanding (GPT-4o, Claude, Gemini)
- Tool use — function calling for connections (all providers)
- Streaming — real-time response display (all providers)
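A capability check before dispatching a request can be sketched as a lookup table. The matrix below restates the feature list above; the dictionary and `supports` function are hypothetical illustrations, not LumenFlow APIs.

```python
# Hypothetical capability matrix restating the feature list above;
# not a LumenFlow data structure.
CAPABILITIES = {
    "vision": {"OpenAI", "Anthropic", "Google"},
    "tool_use": {"OpenAI", "Anthropic", "Google"},
    "streaming": {"OpenAI", "Anthropic", "Google"},
}

def supports(provider: str, feature: str) -> bool:
    """Check whether a provider offers a given capability."""
    return provider in CAPABILITIES.get(feature, set())
```

Guarding on a check like this lets a workflow fail fast (or fall back) when a provider lacks a required feature.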
> **Info:** LumenFlow normalizes the API interface across providers. Switching providers doesn't require any changes to your conversations or workflows.