AI Command Bar
Describe what you want in plain English and get a ready-to-run shell command — without leaving your terminal. Press Ctrl+I at any time to open it.
Bring your own key: all API calls happen inside the Rust backend. Your API keys are never exposed to the renderer process or sent to any Zync server.
Quick Start
No provider configuration is required if you have Ollama running locally — it works out of the box with llama3.2.
AI Providers
Zync supports four AI providers. You can switch between them — and change the model — directly inside the command bar without opening Settings.
| Provider | Cost | Own API key required? | Default model | Get key |
|---|---|---|---|---|
| Ollama | Free | No | llama3.2 | ollama.com |
| Gemini | Free tier available | Yes | gemini-2.0-flash | aistudio.google.com |
| OpenAI | Paid | Yes | gpt-4o-mini | platform.openai.com |
| Claude | Paid | Yes | claude-haiku-4-5 | console.anthropic.com |
Setting up Ollama (free, no API key)
Ollama runs models locally — no internet required after the initial model download, and no API key needed.
- Install Ollama from ollama.com.
- Pull a model:
  ```
  ollama pull llama3.2
  ```
- Ollama starts automatically. Zync detects it on `http://localhost:11434`.
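To confirm Zync will be able to reach Ollama, you can probe the same endpoint yourself. The `/api/tags` route lists your installed models; this sketch just reports whether the server answers:

```shell
# Probe the default Ollama endpoint; prints a status line either way.
if curl -sf http://localhost:11434/api/tags > /dev/null; then
  echo "Ollama is running"
else
  echo "Ollama is not reachable on localhost:11434"
fi
```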
Missing a model? If the model you select isn't installed, Zync shows the `ollama pull` command needed to get started.
To use a remote Ollama instance, go to Settings → AI and set a custom base URL.
Setting up cloud providers
- Open Settings → AI.
- Select your provider from the dropdown.
- Paste your API key into the key field.
- Optionally override the default model.
- Close Settings — the next query uses the new provider immediately.
API keys are stored locally in your `settings.json` file. They are never sent to any Zync server.
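If a query fails after setup, it can help to verify the key outside Zync. As an example for OpenAI (assuming you have exported `OPENAI_API_KEY`; the other providers have analogous endpoints):

```shell
# Optional sanity check: a 200 status means the key is valid.
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models
```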
Dual-Mode Responses
You do not need to switch modes or use special prefixes. Zync automatically detects your intent from how you phrase your request:
| Mode | When triggered | Example query |
|---|---|---|
| Command | You want to do something in a terminal | list all docker containers |
| Answer | You want an explanation or have a question | what does chmod 755 mean? |
Safety Classification
Every command is automatically classified before you can run it. This prevents accidental execution of destructive operations.
| Level | Badge | Enter key behaviour | Examples |
|---|---|---|---|
| SAFE | Safe | Executes immediately | ls, cat, ps, df, grep |
| MODERATE | Moderate | Executes immediately | mkdir, chmod, apt install, git commit |
| DANGEROUS | Dangerous | Blocked — must click "Run anyway" | rm -rf, dd, mkfs, DROP TABLE |
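To make the three levels concrete, here is a toy, pattern-based sketch of the idea (this is NOT Zync's actual classifier, and the patterns are illustrative only):

```shell
# Toy sketch of pattern-based safety classification. NOT Zync's actual logic:
# match a command string against known destructive and state-changing prefixes.
classify() {
  case "$1" in
    "rm -rf"*|"dd "*|mkfs*|*"DROP TABLE"*) echo "DANGEROUS" ;;
    mkdir*|chmod*|"apt install"*|"git commit"*) echo "MODERATE" ;;
    *) echo "SAFE" ;;
  esac
}
classify "ls -la"          # prints SAFE
classify "rm -rf /tmp/x"   # prints DANGEROUS
```

A real classifier must also handle pipes, subshells, and aliases, which is why the badge is assigned by the backend rather than by simple prefix matching in the UI.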
Features
Editable commands
AI-generated commands appear in an editable `$` block before you run them. Change anything — an "edited" badge appears to indicate the command has been modified from the original suggestion. You stay in full control of what runs.
Save to Snippets
Click the Save button on any result to bookmark the command in the Snippets panel under the AI Generated category. Saved snippets persist across sessions. For scope rules, quick access, and management details, see Snippets.
Retry & Make Safer
- Retry — Re-submits the exact same query to the AI.
- Make Safer — Re-submits with a safety-first instruction appended. The AI will choose less destructive flags and alternatives (e.g. `--dry-run`, `-i` for interactive confirmation, avoiding `-rf`).
Query history
The last 50 queries are remembered per session. Use ↑ / ↓ in the input to cycle through them. A position badge (e.g. 2/8) shows where you are in your history. When the input is focused and empty, your recent queries appear as clickable chips for quick re-use.
Chat history
Previous Q&A pairs within the same session are preserved above the current response — your queries appear on the right, AI responses on the left. Scroll up to review earlier exchanges.
Contextual awareness
Every query is automatically enriched with information about your current session so the generated commands are accurate for your environment:
| Context | Source |
|---|---|
| OS | SSH sessions → Linux; local sessions → auto-detected |
| Shell | Inferred from OS (bash / zsh / powershell) |
| Working directory | Live CWD from the active SSH connection or local shell |
| Recent terminal output | Last 20 lines from the terminal buffer |
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| Ctrl+I | Open / close AI Command Bar |
| Enter | Submit query (input non-empty) or execute command (result shown, input empty) |
| Escape | Close the bar |
| ↑ / ↓ | Navigate query history (single-line input only) |
The Ctrl+I shortcut is configurable. Go to Settings → Shortcuts → AI Command Bar to change it.
Settings
Open Settings → AI to configure:
- Provider — Ollama, Gemini, OpenAI, or Claude
- API Key — per-provider, stored locally only
- Model — override the default model for any provider
- Ollama URL — set a custom base URL (e.g. a remote Ollama instance)
You can also switch provider and model directly inside the command bar using the dropdowns in the bottom-right corner, without opening Settings.
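Putting the options above together, the stored configuration might look roughly like this. This is an illustrative sketch only: the actual key names and layout of `settings.json` may differ.

```jsonc
// Illustrative sketch only; real key names in settings.json may differ.
{
  "ai": {
    "provider": "ollama",                     // ollama | gemini | openai | claude
    "model": "llama3.2",                      // optional override of the default
    "ollamaUrl": "http://localhost:11434",    // custom base URL for remote Ollama
    "apiKeys": {
      "openai": "sk-..."                      // per-provider, stored locally only
    }
  }
}
```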
Privacy & Security
- Keys stay on your machine. API keys are stored in your local `settings.json` file. They are never sent to any Zync server.
- All calls go directly to the provider. The Rust backend makes API calls directly to Ollama / Gemini / OpenAI / Anthropic. There is no Zync proxy or intermediary.
- Context is limited. Only the last 20 lines of terminal output (up to 500 characters) and basic session metadata are sent. Full terminal history is never transmitted.
- Ollama keeps everything local. Using Ollama means your queries never leave your machine at all.
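The context cap described above can be approximated with standard tools. This is a sketch of the assumed behavior, not Zync's source:

```shell
# Sketch (assumed behavior, not Zync's actual code): cap context at the last
# 20 lines and at most 500 characters of the terminal buffer.
printf 'line %s\n' $(seq 1 100) > /tmp/zync_buffer_demo.txt   # sample buffer
context=$(tail -n 20 /tmp/zync_buffer_demo.txt | tail -c 500)
printf '%s\n' "$context"
```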
Known Limitations
- Closing the bar while a response is streaming does not cancel the in-flight request. The request completes in the background and the result is discarded.
- Chat history is per-session only and is not persisted when the app restarts.
- API keys are stored in plain `settings.json`. OS keychain integration is planned for a future release.