New in 2.5.0

AI Command Bar

Describe what you want in plain English and get a ready-to-run shell command — without leaving your terminal. Press Ctrl+I at any time to open it.

All API calls happen inside the Rust backend. Your API keys are never exposed to the renderer process or sent to any Zync server.

Bring your own key

Zync does not provide AI access or pay for API usage on your behalf. For cloud providers (Gemini, OpenAI, Claude), you supply your own API key in Settings → AI. Ollama users need no key at all — it runs entirely on your machine.

Quick Start

1. Open any terminal session (local or SSH).
2. Press Ctrl+I to open the AI Command Bar.
3. Type your request in natural language — for example: show disk usage by folder.
4. Press Enter to submit. The response streams token-by-token in a clean, readable format.
5. Review the generated command, then press Enter again (or click Execute) to run it in your terminal.

No provider configuration is required if you have Ollama running locally — it works out of the box with llama3.2.

AI Providers

Zync supports four AI providers. You can switch between them — and change the model — directly inside the command bar without opening Settings.

| Provider | Cost | Own API key required? | Default model | Get key |
| --- | --- | --- | --- | --- |
| Ollama | Free | No | llama3.2 | ollama.com |
| Gemini | Free tier available | Yes | gemini-2.0-flash | aistudio.google.com |
| OpenAI | Paid | Yes | gpt-4o-mini | platform.openai.com |
| Claude | Paid | Yes | claude-haiku-4-5 | console.anthropic.com |

Setting up Ollama (free, no API key)

Ollama runs models locally — no internet required after the initial model download, and no API key needed.

  1. Install Ollama from ollama.com.
  2. Pull a model: ollama pull llama3.2
  3. Ollama starts automatically. Zync detects it on http://localhost:11434.

Missing a model?

If you select a model in Zync that you haven't pulled yet, Zync detects the error and shows the exact ollama pull command needed to fetch it.

To use a remote Ollama instance, go to Settings → AI and set a custom base URL.

Setting up cloud providers

  1. Open Settings → AI.
  2. Select your provider from the dropdown.
  3. Paste your API key into the key field.
  4. Optionally override the default model.
  5. Close Settings — the next query uses the new provider immediately.

API keys are stored locally in your settings.json file. They are never sent to any Zync server.

Dual-Mode Responses

You do not need to switch modes or use special prefixes. Zync automatically detects your intent from how you phrase your request:

| Mode | When triggered | Example query |
| --- | --- | --- |
| Command | You want to do something in a terminal | list all docker containers |
| Answer | You want an explanation or have a question | what does chmod 755 mean? |
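In Zync the intent detection is handled by the AI model itself rather than by client-side rules, but the distinction the table draws can be illustrated with a toy heuristic (the keyword list and logic below are purely an assumption for illustration):

```python
QUESTION_STARTERS = ("what", "why", "how", "explain", "when", "who", "does", "is")

def detect_mode(query: str) -> str:
    """Toy heuristic: question-style phrasing -> Answer, otherwise Command.
    Zync's real detection is done by the AI, not a keyword list."""
    q = query.strip().lower()
    if q.endswith("?") or q.split()[0] in QUESTION_STARTERS:
        return "Answer"
    return "Command"
```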

Safety Classification

Every command is automatically classified before you can run it. This prevents accidental execution of destructive operations.

| Level | Badge | Enter key behaviour | Examples |
| --- | --- | --- | --- |
| SAFE | Safe | Executes immediately | ls, cat, ps, df, grep |
| MODERATE | Moderate | Executes immediately | mkdir, chmod, apt install, git commit |
| DANGEROUS | Dangerous | Blocked — must click "Run anyway" | rm -rf, dd, mkfs, DROP TABLE |
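A pattern-based classifier along these lines could implement the three levels. The patterns below are a minimal sketch built from the table's examples; Zync's real rule set is not public:

```python
import re

# Illustrative patterns only, derived from the examples in the table above.
DANGEROUS_PATTERNS = [r"\brm\s+-[a-z]*r[a-z]*f\b", r"\brm\s+-[a-z]*f[a-z]*r\b",
                      r"\bdd\b", r"\bmkfs\b", r"\bdrop\s+table\b"]
MODERATE_PATTERNS = [r"\bmkdir\b", r"\bchmod\b", r"\bapt\s+install\b", r"\bgit\s+commit\b"]

def classify(command: str) -> str:
    """Return SAFE, MODERATE, or DANGEROUS for a shell command
    (sketch of the idea, not Zync's actual classifier)."""
    lowered = command.lower()
    if any(re.search(p, lowered) for p in DANGEROUS_PATTERNS):
        return "DANGEROUS"
    if any(re.search(p, lowered) for p in MODERATE_PATTERNS):
        return "MODERATE"
    return "SAFE"
```

A real classifier would also need to handle aliases, pipelines, and quoting, which is why the DANGEROUS level still requires an explicit "Run anyway" click rather than silently trusting the classification.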

Features

Editable commands

AI-generated commands appear in an editable $ block before you run them. Change anything — an edited badge appears to indicate the command has been modified from the original suggestion. You stay in full control of what runs.

Save to Snippets

Click the Save button on any result to bookmark the command in the Snippets panel under the AI Generated category. Saved snippets persist across sessions. For scope rules, quick access, and management details, see Snippets.

Retry & Make Safer

  • Retry — Re-submits the exact same query to the AI.
  • Make Safer — Re-submits with a safety-first instruction appended. The AI will choose less destructive flags and alternatives (e.g. --dry-run, -i for interactive confirmation, avoiding -rf).
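Mechanically, "Make Safer" amounts to re-sending the same query with an extra instruction appended. The wording below is an illustrative assumption, not Zync's actual prompt text:

```python
SAFETY_SUFFIX = (
    "\n\nRewrite the command to be as safe as possible: prefer --dry-run, "
    "use interactive flags such as -i, and avoid destructive options like -rf."
)

def make_safer_prompt(original_query: str) -> str:
    """Re-submit the same query with a safety-first instruction appended
    (suffix wording is an assumption for illustration)."""
    return original_query + SAFETY_SUFFIX
```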

Query history

The last 50 queries are remembered per session. Use ↑ / ↓ in the input to cycle through them. A position badge (e.g. 2/8) shows where you are in your history. When the input is focused and empty, your recent queries appear as clickable chips for quick re-use.

Chat history

Previous Q&A pairs within the same session are preserved above the current response — your queries appear on the right, AI responses on the left. Scroll up to review earlier exchanges.

Contextual awareness

Every query is automatically enriched with information about your current session so the generated commands are accurate for your environment:

| Context | Source |
| --- | --- |
| OS | SSH sessions → Linux; local sessions → auto-detected |
| Shell | Inferred from OS (bash / zsh / powershell) |
| Working directory | Live CWD from the active SSH connection or local shell |
| Recent terminal output | Last 20 lines from the terminal buffer |
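Assembling that context can be sketched as follows. The 20-line and 500-character limits come from this document's Privacy & Security section; the field names and the choice to keep the most recent characters when truncating are assumptions:

```python
def build_context(os_name: str, shell: str, cwd: str,
                  terminal_buffer: list[str]) -> dict:
    """Build the per-query context payload: session metadata plus the last
    20 lines of terminal output, capped at 500 characters (newest kept)."""
    recent = "\n".join(terminal_buffer[-20:])[-500:]
    return {"os": os_name, "shell": shell, "cwd": cwd, "recent_output": recent}
```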

Keyboard Shortcuts

| Shortcut | Action |
| --- | --- |
| Ctrl+I | Open / close AI Command Bar |
| Enter | Submit query (input non-empty) or execute command (result shown, input empty) |
| Escape | Close the bar |
| ↑ / ↓ | Navigate query history (single-line input only) |

The Ctrl+I shortcut is configurable. Go to Settings → Shortcuts → AI Command Bar to change it.

Settings

Open Settings → AI to configure:

  • Provider — Ollama, Gemini, OpenAI, or Claude
  • API Key — per-provider, stored locally only
  • Model — override the default model for any provider
  • Ollama URL — set a custom base URL (e.g. a remote Ollama instance)

You can also switch provider and model directly inside the command bar using the dropdowns in the bottom-right corner, without opening Settings.

Privacy & Security

  • Keys stay on your machine. API keys are stored in your local settings.json file. They are never sent to any Zync server.
  • All calls go directly to the provider. The Rust backend makes API calls directly to Ollama / Gemini / OpenAI / Anthropic. There is no Zync proxy or intermediary.
  • Context is limited. Only the last 20 lines of terminal output (up to 500 characters) and basic session metadata are sent. Full terminal history is never transmitted.
  • Ollama keeps everything local. Using Ollama means your queries never leave your machine at all.

Known Limitations

  • Closing the bar while a response is streaming does not cancel the in-flight request. The request completes in the background and the result is discarded.
  • Chat history is per-session only and is not persisted when the app restarts.
  • API keys are stored in plain settings.json. OS keychain integration is planned for a future release.