Which Model Should I Use?
| Mode | What works | Recommendation |
|---|---|---|
| Chat Mode | Any model, including local | Ollama or Gemini Flash |
| Agent Mode | Cloud models only | Claude Opus 4.5 |
Cloud Providers
Connect to powerful AI models using your API keys. Your keys stay on your machine; requests go directly to the provider.
Gemini (Free)
Gemini Flash is fast and free. Google gives you 20 requests per minute at no cost.

Get your API key:

- Go to aistudio.google.com
- Click Get API key in the sidebar
- Click Create API key and copy it

Add to BrowserOS:

- Go to chrome://browseros/settings
- Click USE on the Gemini card
- Set Model ID to gemini-2.5-flash-preview-05-20
- Paste your API key
- Check Supports Images, set Context Window to 1000000
- Click Save
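Optionally, you can verify the key works before pasting it into BrowserOS. A minimal Python sketch, assuming the requests package is installed (the key below is a placeholder; the model ID is the one from the steps above):

```python
import requests

API_KEY = "YOUR_GEMINI_API_KEY"  # placeholder: paste your real key here
MODEL = "gemini-2.5-flash-preview-05-20"

# Gemini's REST API takes the key as a query parameter.
resp = requests.post(
    f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent",
    params={"key": API_KEY},
    json={"contents": [{"parts": [{"text": "Say hello in one word."}]}]},
    timeout=30,
)
resp.raise_for_status()  # a 400/403 here usually means a bad or restricted key
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```

If this prints a greeting, the key and model ID are good to go.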

Claude (Best for Agents)
Claude Opus 4.5 gives the best results for Agent Mode.

Get your API key:

- Go to console.anthropic.com
- Click API keys in the sidebar
- Click Create Key and copy it

Add to BrowserOS:

- Go to chrome://browseros/settings
- Click USE on the Anthropic card
- Set Model ID to claude-opus-4-5-20250514
- Paste your API key
- Check Supports Images, set Context Window to 200000
- Click Save
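As with Gemini, you can optionally test the key outside BrowserOS first. A minimal Python sketch, assuming the requests package (placeholder key; model ID from the steps above):

```python
import requests

API_KEY = "YOUR_ANTHROPIC_API_KEY"  # placeholder: paste your real key here

resp = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": API_KEY,
        "anthropic-version": "2023-06-01",  # required header for the Messages API
        "content-type": "application/json",
    },
    json={
        "model": "claude-opus-4-5-20250514",
        "max_tokens": 64,
        "messages": [{"role": "user", "content": "Say hello in one word."}],
    },
    timeout=30,
)
resp.raise_for_status()  # 401 means the key is wrong or disabled
print(resp.json()["content"][0]["text"])
```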

OpenAI
GPT-4.1 is solid for both chat and agent tasks.

Get your API key:

- Go to platform.openai.com
- Click the settings icon → API keys
- Click Create new secret key and copy it

Add to BrowserOS:

- Go to chrome://browseros/settings
- Click USE on the OpenAI card
- Set Model ID to gpt-4.1
- Paste your API key
- Check Supports Images, set Context Window to 128000
- Click Save
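The same optional sanity check works against OpenAI's Chat Completions endpoint. A Python sketch assuming the requests package (placeholder key):

```python
import requests

API_KEY = "YOUR_OPENAI_API_KEY"  # placeholder: paste your real key here

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4.1",
        "messages": [{"role": "user", "content": "Say hello in one word."}],
    },
    timeout=30,
)
resp.raise_for_status()  # 401 means a bad key; 429 means you're rate-limited
print(resp.json()["choices"][0]["message"]["content"])
```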

OpenRouter
Access 500+ models through one API.

Get your API key:

- Go to openrouter.ai and sign up
- Copy your API key from the homepage
- Browse the model list and copy the ID of the model you want (e.g. anthropic/claude-opus-4.5)

Add to BrowserOS:

- Go to chrome://browseros/settings
- Click USE on the OpenRouter card
- Paste the model ID and your API key
- Set Context Window based on the model
- Click Save
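OpenRouter exposes an OpenAI-compatible endpoint, so the same check works with a different base URL and a provider/model ID. A Python sketch assuming the requests package (placeholder key; model ID from the example above):

```python
import requests

API_KEY = "YOUR_OPENROUTER_API_KEY"  # placeholder: paste your real key here

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "anthropic/claude-opus-4.5",  # the model ID you copied
        "messages": [{"role": "user", "content": "Say hello in one word."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```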

Local Models
Run AI completely offline. Local models are free, private, and your data never leaves your machine. Perfect for Chat Mode with sensitive data.
Ollama
The easiest way to run models locally.

Setup:

- Download from ollama.com
- Pull a model: ollama pull llama3.2
- Start Ollama: ollama serve

Add to BrowserOS:

- Go to chrome://browseros/settings
- Click USE on the Ollama card
- Set Model ID to llama3.2
- Click Save

Popular models include llama3.2, qwen3:8b, and mistral.
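To confirm Ollama is up before pointing BrowserOS at it, you can hit its local API directly. A minimal Python sketch, assuming the requests package and the llama3.2 model pulled above:

```python
import requests

# Ollama listens on localhost:11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Say hello in one word.", "stream": False},
    timeout=120,  # the first call can be slow while the model loads
)
resp.raise_for_status()  # a connection error means ollama serve isn't running
print(resp.json()["response"])
```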
LM Studio
Nice GUI if you don't want to use the terminal.

Setup:

- Download from lmstudio.ai
- Open LM Studio → Developer tab → load a model
- It runs a server at http://localhost:1234/v1/

Add to BrowserOS:

- Go to chrome://browseros/settings
- Click USE on the OpenAI Compatible card
- Set Base URL to http://localhost:1234/v1/
- Set Model ID to the model you loaded
- Set Context Window to match your LM Studio config
- Click Save
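LM Studio's local server speaks the OpenAI-compatible dialect, so you can sanity-check it the same way. A Python sketch assuming the requests package (the model name below is hypothetical; use whatever you loaded in LM Studio):

```python
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "your-loaded-model",  # hypothetical name; match the model you loaded
        "messages": [{"role": "user", "content": "Say hello in one word."}],
    },
    timeout=120,  # local generation can be slow on first load
)
resp.raise_for_status()  # connection refused means the server isn't running
print(resp.json()["choices"][0]["message"]["content"])
```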

Switching Between Models
Use the model switcher in the Assistant panel to change providers anytime. The default provider is highlighted.
