# Models & Providers
Kognar Platform connects to multiple AI providers, giving you access to dozens of language models — all from one interface.
## Selecting a model
Click the model selector in the chat header to browse available models. You can filter by category:
| Filter | Use case |
|---|---|
| All | Show every available model |
| General | Everyday tasks and general-purpose use |
| Code | Programming, debugging, and code review |
| Image | Image generation and editing |
| Research | Web-connected and analysis-focused models |
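Conceptually, the filters above act as a simple predicate over the model list. The sketch below illustrates that behavior; the model entries, field names, and category tags are hypothetical, not Kognar's actual API:

```python
# Hypothetical model list; names and fields are illustrative only.
MODELS = [
    {"name": "GPT-4o", "provider": "OpenAI", "category": "General"},
    {"name": "Kognar Engine", "provider": "Kognar", "category": "Code"},
    {"name": "Gemini 2.5 Flash", "provider": "Google", "category": "Research"},
]

def filter_models(models, category):
    """Return every model for 'All'; otherwise match the category tag."""
    if category == "All":
        return list(models)
    return [m for m in models if m["category"] == category]
```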
Each model entry shows:
- Name and provider
- Context window size (the maximum number of tokens it can process)
- Tool support — whether the model can use agents and tools
You can only change the model before sending the first message in a conversation. Start a new conversation to use a different model.
## Default model
Set your preferred default model in Settings > General > Default Models. This model will be pre-selected every time you start a new conversation.
You can set separate defaults for:
- Chat model — used for text conversations
- Image generation model — used when generating images
- Image editing model — used when editing existing images
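In effect, the three defaults form a small mapping from task type to model, consulted each time a new conversation starts. The keys and model names below are illustrative placeholders, not Kognar's actual settings schema:

```python
# Illustrative per-task defaults (hypothetical keys and model names).
DEFAULT_MODELS = {
    "chat": "example-chat-model",
    "image_generation": "example-image-gen-model",
    "image_editing": "example-image-edit-model",
}

def default_for(task, defaults=DEFAULT_MODELS):
    """Return the model pre-selected for a new conversation of this task type."""
    return defaults[task]
```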
## Supported providers
Models are sourced from multiple providers, organized into groups in the selector:
- OpenAI — GPT-4o, o1, o3, and more
- Google — Gemini 2.5 Pro, Flash, and variants
- Anthropic — Claude Opus, Sonnet, Haiku
- AWS Bedrock — Amazon Nova, Meta Llama, Mistral
- Kognar — Kognar Engine (specialized for coding tasks)
- Other providers connected via your backend configuration
## Model classification badges
Each model in the input area is tagged with a classification badge:
| Badge | Meaning |
|---|---|
| smart | High-capability model, best for complex tasks |
| fast | Optimized for speed and lower latency |
| thinking | Extended reasoning mode, slower but deeper |
Hover over the badge to see a description.
## Context window management
Each model has a maximum context window. As your conversation grows, the context meter in the header fills up. When you approach the limit:
- Kognar estimates the token count before sending
- If the message would exceed the limit, a Context Overflow dialog appears
- You can choose to compress the conversation or cancel
Compression reduces the conversation size using AI summarization while preserving the key information. If the conversation is still too large after one compression, a second, more aggressive compression is applied automatically.
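The overflow flow above can be sketched as follows. The token estimator and both compression steps are stand-ins (real token counting and AI summarization are model-specific), and the sketch compresses directly rather than prompting through the dialog:

```python
def estimate_tokens(text):
    # Rough stand-in: ~4 characters per token is a common heuristic.
    return max(1, len(text) // 4)

def fit_to_context(conversation, limit, compress, compress_aggressive):
    """Apply up to two rounds of compression before giving up.

    `compress` and `compress_aggressive` are placeholder callables standing
    in for AI summarization passes.
    """
    if estimate_tokens(conversation) <= limit:
        return conversation  # fits as-is; no dialog would appear
    # First pass: standard summarization.
    conversation = compress(conversation)
    if estimate_tokens(conversation) <= limit:
        return conversation
    # Second pass: more aggressive compression, applied automatically.
    return compress_aggressive(conversation)
```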
## Image models
When you ask Kognar to generate or edit an image for the first time in a conversation, a model selection dialog appears. Your choice is remembered for that conversation.
Supported image capabilities:
- Generation — Create images from text descriptions
- Editing — Modify or transform existing images
Generated images can be saved from the right-click menu via Save to Downloads or Save As....