
feat: provider-aware 6-agent fallback chains with first-run subscription setup (#115)

* feat: add reusable OpenCode free-model selection flow

* docs: keep antigravity guide focused on antigravity setup

* feat: add Chutes provider with adaptive model selection

* fix: remove request-tier bias from Chutes model scoring

* feat: add provider-aware 6-agent fallback chains with 15s failover

* feat: add subscription prompts and provider-aware chains for anthropic/copilot/zai

* refactor: rebalance mixed provider defaults toward Kimi and GPT-5.3

* feat: add dynamic provider-aware model planning from live catalog

* docs: add 5 provider-combination config scenarios

---------

Co-authored-by: Ruben Beuker <rubenbeuker@MacBook-Pro-van-Ruben.local>
Daltonganger · 2 months ago · commit f5b1830db1

+ 13 - 2
README.md

@@ -1,7 +1,7 @@
 <div align="center">
   <img src="img/team.png" alt="Pantheon agents" width="420">
   <p><i>Six divine beings emerged from the dawn of code, each an immortal master of their craft, awaiting your command to forge order from chaos and build what was once thought impossible.</i></p>
-  <p><b>Multi Agent Suite</b> · Mix any models · Auto delegate tasks · Now with native Antigravity support</p>
+  <p><b>Multi Agent Suite</b> · Mix any models · Auto delegate tasks · Antigravity + Chutes ready</p>
 </div>
 
 ---
@@ -14,6 +14,12 @@
 bunx oh-my-opencode-slim@latest install
 ```
 
+The installer can refresh and use OpenCode free models directly:
+
+```bash
+bunx oh-my-opencode-slim@latest install --no-tui --kimi=yes --openai=yes --antigravity=yes --chutes=yes --opencode-free=yes --opencode-free-model=auto --tmux=no --skills=yes
+```
+
 Then authenticate:
 
 ```bash
@@ -22,7 +28,12 @@ opencode auth login
 
 Run `ping all agents` to verify everything works.
 
-> **💡 Models are fully customizable.** Edit `~/.config/opencode/oh-my-opencode-slim.json` to assign any model to any agent. Supports Kimi, OpenAI, and Antigravity (Google) providers.
+OpenCode free-model mode uses `opencode models --refresh --verbose`, filters to free `opencode/*` models, and applies coding-first selection:
+- OpenCode-only mode can use multiple OpenCode free models across agents.
+- Hybrid mode can combine OpenCode free models with OpenAI, Kimi, and/or Antigravity.
+- In hybrid mode, `designer` stays on the external provider mapping.
+- Chutes mode auto-selects primary/support models with daily-cap awareness (300/2000/5000 requests per day).
+
 > **💡 Models are fully customizable.** Edit `~/.config/opencode/oh-my-opencode-slim.json` (or `.jsonc` for comments support) to assign any model to any agent.
 
 ### For LLM Agents
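
The customization tip above can be illustrated with a minimal config fragment. The field layout (`preset`, `presets`, per-agent `model`) follows the Antigravity example later in this commit; the specific model IDs here are placeholders, not recommended defaults:

```jsonc
{
  "preset": "custom",
  "presets": {
    "custom": {
      "orchestrator": { "model": "kimi-for-coding/k2p5" },
      "oracle": { "model": "openai/gpt-5.3-codex" },
      "explorer": { "model": "opencode/gpt-5-nano" }
    }
  }
}
```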

+ 8 - 10
docs/antigravity.md

@@ -23,7 +23,7 @@
 The installer automatically:
 - Adds `opencode-antigravity-auth@latest` plugin
 - Configures Google provider with all Antigravity and Gemini CLI models
-- Sets up agent mapping (Kimi/GPT for Orchestrator/Oracle, Antigravity for others)
+- Sets up Antigravity-focused agent mapping presets
 
 ## Models Available
 
@@ -86,7 +86,7 @@ When you install with `--antigravity=yes`, the preset depends on other providers
 
 ### antigravity-mixed-both (Kimi + OpenAI + Antigravity)
 - **Orchestrator**: Kimi k2p5
-- **Oracle**: GPT-5.2-codex
+- **Oracle**: OpenAI model
 - **Explorer/Librarian/Designer/Fixer**: Gemini 3 Flash (Antigravity)
 
 ### antigravity-mixed-kimi (Kimi + Antigravity)
@@ -96,7 +96,7 @@ When you install with `--antigravity=yes`, the preset depends on other providers
 
 ### antigravity-mixed-openai (OpenAI + Antigravity)
 - **Orchestrator**: Gemini 3 Flash (Antigravity)
-- **Oracle**: GPT-5.2-codex
+- **Oracle**: OpenAI model
 - **Explorer/Librarian/Designer/Fixer**: Gemini 3 Flash (Antigravity)
 
 ### antigravity (Pure Antigravity)
@@ -106,22 +106,20 @@ When you install with `--antigravity=yes`, the preset depends on other providers
 
 ## Manual Configuration
 
-If you prefer to configure manually, edit `~/.config/opencode/oh-my-opencode-slim.json`:
-Edit `~/.config/opencode/oh-my-opencode-slim.json` (or `.jsonc`) and add the Antigravity preset:
+If you prefer to configure manually, edit `~/.config/opencode/oh-my-opencode-slim.json` (or `.jsonc`) and add a pure Antigravity preset:
 
 ```json
 {
-  "preset": "antigravity-mixed-both",
+  "preset": "antigravity",
   "presets": {
-    "antigravity-mixed-both": {
+    "antigravity": {
       "orchestrator": {
-        "model": "kimi-for-coding/k2p5",
+        "model": "google/antigravity-gemini-3-flash",
         "skills": ["*"],
         "mcps": ["websearch"]
       },
       "oracle": {
-        "model": "openai/gpt-5.2-codex",
-        "variant": "high",
+        "model": "google/antigravity-gemini-3-pro",
         "skills": [],
         "mcps": []
       },

+ 31 - 13
docs/installation.md

@@ -24,19 +24,29 @@ bunx oh-my-opencode-slim@latest install
 Or use non-interactive mode:
 
 ```bash
-bunx oh-my-opencode-slim@latest install --no-tui --kimi=yes --openai=yes --antigravity=yes --tmux=no
+bunx oh-my-opencode-slim@latest install --no-tui --kimi=yes --openai=yes --antigravity=yes --chutes=yes --opencode-free=yes --opencode-free-model=auto --tmux=no --skills=yes
 ```
 
 ### Provider Options
 
 The installer supports multiple providers:
+- **OpenCode Free Models**: Live-refreshed free `opencode/*` models
 - **Kimi For Coding**: High-performance coding models
 - **OpenAI**: GPT-4 and GPT-3.5 models
 - **Antigravity (Google)**: Claude 4.5 and Gemini 3 models via Google's infrastructure
+- **Chutes**: Free daily-capped models (`chutes/*`) with dynamic role-aware selection
+
+When OpenCode free mode is enabled, the installer runs:
+
+```bash
+opencode models --refresh --verbose
+```
+
+It then filters to free `opencode/*` models only, picks a coding-first primary model, and picks a support model for search/implementation agents.
 
 Enable during installation:
 ```bash
-bunx oh-my-opencode-slim install --kimi=yes --openai=yes --antigravity=yes
+bunx oh-my-opencode-slim install --kimi=yes --openai=yes --antigravity=yes --chutes=yes --opencode-free=yes --opencode-free-model=auto
 ```
 
 ### After Installation
@@ -83,33 +93,41 @@ Ask these questions **one at a time**, waiting for responses:
 1. "Do you have access to **Kimi For Coding**?" *(Provides Kimi k1.5 models)*
 2. "Do you have access to **OpenAI** API?" *(Enables `openai/` models)*
 3. "Do you have access to **Antigravity (Google)**?" *(Enables `google/` models via Antigravity)*
+4. "Do you want to use **Chutes**?" *(Enables `chutes/` models with daily-cap aware selection)*
+5. "Do you want to use **OpenCode free models**?" *(Refreshes and selects from free `opencode/*` models)*
 
 Help the user understand the tradeoffs:
-- Kimi For Coding provides powerful k1.5 models for coding tasks.
+- OpenCode free mode discovers the latest free `opencode/*` models using `opencode models --refresh --verbose`.
+- OpenCode-only mode can assign more than one OpenCode model across agents.
+- Hybrid mode can combine OpenCode free models with OpenAI, Kimi, and/or Antigravity.
+- In hybrid mode, `designer` remains on the external provider mapping.
+- Chutes selection prioritizes stronger models for orchestrator/oracle and higher-cap models for support agents.
+- Kimi For Coding provides powerful coding models.
 - OpenAI enables `openai/` models.
-- Antigravity (Google) provides Claude 4.5 and Gemini 3 models.
-- If the user has **no providers**, the plugin still works using **OpenCode Zen** free models (`opencode/big-pickle`). They can switch to paid providers later by editing `~/.config/opencode/oh-my-opencode-slim.json`.
-- OpenAI is optional; it enables `openai/` models.
-- If the user has **no providers**, the plugin still works using **OpenCode Zen** free models (`opencode/big-pickle`). They can switch to paid providers later by editing `~/.config/opencode/oh-my-opencode-slim.json` (or `.jsonc`).
+- Antigravity (Google) provides Claude and Gemini models via Google infrastructure.
+- Chutes provides free daily-capped models and requires `CHUTES_API_KEY`.
 
 ### Step 3: Run the Installer
 
 Based on answers, run:
 
 ```bash
-bunx oh-my-opencode-slim@latest install --no-tui --kimi=<yes|no> --openai=<yes|no> --antigravity=<yes|no>
+bunx oh-my-opencode-slim@latest install --no-tui --kimi=<yes|no> --openai=<yes|no> --antigravity=<yes|no> --chutes=<yes|no> --opencode-free=<yes|no> --opencode-free-model=<id|auto> --tmux=<yes|no> --skills=<yes|no>
 ```
 
 **Examples:**
 ```bash
 # Kimi + OpenAI + Antigravity
-bunx oh-my-opencode-slim@latest install --no-tui --kimi=yes --openai=yes --antigravity=yes --tmux=no
+bunx oh-my-opencode-slim@latest install --no-tui --kimi=yes --openai=yes --antigravity=yes --chutes=yes --opencode-free=yes --opencode-free-model=auto --tmux=no --skills=yes
 
 # OpenAI only
-bunx oh-my-opencode-slim@latest install --no-tui --kimi=no --openai=yes --antigravity=no --tmux=no
+bunx oh-my-opencode-slim@latest install --no-tui --kimi=no --openai=yes --antigravity=no --chutes=no --opencode-free=no --tmux=no --skills=yes
+
+# OpenCode free models only (auto-select)
+bunx oh-my-opencode-slim@latest install --no-tui --kimi=no --openai=no --antigravity=no --chutes=no --opencode-free=yes --opencode-free-model=auto --tmux=no --skills=yes
 
-# No providers (Zen free models only)
-bunx oh-my-opencode-slim@latest install --no-tui --kimi=no --openai=no --antigravity=no --tmux=no
+# OpenCode free models + OpenAI (manual primary model)
+bunx oh-my-opencode-slim@latest install --no-tui --kimi=no --openai=yes --antigravity=no --chutes=no --opencode-free=yes --opencode-free-model=opencode/gpt-5-nano --tmux=no --skills=yes
 ```
 
 The installer automatically:
@@ -224,4 +242,4 @@ See the [Quick Reference](quick-reference.md#tmux-integration) for more details.
    ```bash
    npx skills remove simplify
    npx skills remove agent-browser
-   ```
+   ```
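
The filter-and-pick step described above (free `opencode/*` only, coding-first primary, separate support model) can be sketched as follows. The catalog shape and field names here are assumptions for illustration; the real installer parses `opencode models --refresh --verbose` output and applies richer scoring:

```typescript
// Sketch of the selection step, under assumed catalog shapes.
type CatalogModel = { id: string; free: boolean; toolcall: boolean };

function selectFreeModels(
  catalog: CatalogModel[],
): { primary?: string; support?: string } {
  // Keep only free opencode/* models.
  const free = catalog.filter((m) => m.free && m.id.startsWith('opencode/'));
  // Coding-first primary: prefer a tool-calling model.
  const primary = free.find((m) => m.toolcall) ?? free[0];
  // Support model for search/implementation agents: any other free model.
  const support = free.find((m) => m.id !== primary?.id) ?? primary;
  return { primary: primary?.id, support: support?.id };
}
```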

+ 128 - 0
docs/provider-combination-matrix.md

@@ -0,0 +1,128 @@
+# Provider Combination Test Matrix (2 to 6 Active)
+
+This matrix tests 5 combinations across the 8 provider toggles in this project:
+
+- `openai`
+- `anthropic`
+- `github-copilot`
+- `zai-coding-plan`
+- `kimi-for-coding`
+- `google` (Antigravity/Gemini)
+- `chutes`
+- `opencode` free (`useOpenCodeFreeModels`)
+
+## How this was determined
+
+These outputs were generated directly from `generateLiteConfig` in `src/cli/providers.ts` using fixed deterministic inputs:
+
+- `selectedOpenCodePrimaryModel = opencode/glm-4.7-free`
+- `selectedOpenCodeSecondaryModel = opencode/gpt-5-nano`
+- `selectedChutesPrimaryModel = chutes/kimi-k2.5`
+- `selectedChutesSecondaryModel = chutes/minimax-m2.1`
+
+This represents the config output shape written by the installer when those selected models are available.
+
+## Scenario S1 - 2 providers
+
+Active providers: OpenAI + OpenCode Free
+
+- Preset: `openai`
+- Agents:
+  - `orchestrator`: `openai/gpt-5.3-codex`
+  - `oracle`: `openai/gpt-5.3-codex` (`high`)
+  - `designer`: `openai/gpt-5.1-codex-mini` (`medium`)
+  - `explorer`: `opencode/gpt-5-nano`
+  - `librarian`: `opencode/gpt-5-nano`
+  - `fixer`: `opencode/gpt-5-nano`
+- Fallback chains:
+  - `orchestrator`: `openai/gpt-5.3-codex -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `oracle`: `openai/gpt-5.3-codex -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `designer`: `openai/gpt-5.1-codex-mini -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `explorer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> opencode/big-pickle`
+  - `librarian`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> opencode/big-pickle`
+  - `fixer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> opencode/big-pickle`
+
+## Scenario S2 - 3 providers
+
+Active providers: OpenAI + Chutes + OpenCode Free
+
+- Preset: `openai`
+- Agents:
+  - `orchestrator`: `openai/gpt-5.3-codex`
+  - `oracle`: `openai/gpt-5.3-codex` (`high`)
+  - `designer`: `openai/gpt-5.1-codex-mini` (`medium`)
+  - `explorer`: `opencode/gpt-5-nano`
+  - `librarian`: `opencode/gpt-5-nano`
+  - `fixer`: `opencode/gpt-5-nano`
+- Fallback chains:
+  - `orchestrator`: `openai/gpt-5.3-codex -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `oracle`: `openai/gpt-5.3-codex -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `designer`: `openai/gpt-5.1-codex-mini -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `explorer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> chutes/minimax-m2.1 -> opencode/big-pickle`
+  - `librarian`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> chutes/minimax-m2.1 -> opencode/big-pickle`
+  - `fixer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> chutes/minimax-m2.1 -> opencode/big-pickle`
+
+## Scenario S3 - 4 providers
+
+Active providers: OpenAI + Copilot + ZAI Plan + OpenCode Free
+
+- Preset: `openai`
+- Agents:
+  - `orchestrator`: `openai/gpt-5.3-codex`
+  - `oracle`: `openai/gpt-5.3-codex` (`high`)
+  - `designer`: `openai/gpt-5.1-codex-mini` (`medium`)
+  - `explorer`: `opencode/gpt-5-nano`
+  - `librarian`: `opencode/gpt-5-nano`
+  - `fixer`: `opencode/gpt-5-nano`
+- Fallback chains:
+  - `orchestrator`: `openai/gpt-5.3-codex -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `oracle`: `openai/gpt-5.3-codex -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `designer`: `openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `explorer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> opencode/big-pickle`
+  - `librarian`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> opencode/big-pickle`
+  - `fixer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> opencode/big-pickle`
+
+## Scenario S4 - 5 providers
+
+Active providers: OpenAI + Gemini + Chutes + Copilot + OpenCode Free
+
+- Preset: `antigravity-mixed-openai`
+- Agents:
+  - `orchestrator`: `chutes/kimi-k2.5`
+  - `oracle`: `openai/gpt-5.3-codex` (`high`)
+  - `designer`: `chutes/kimi-k2.5` (`medium`)
+  - `explorer`: `opencode/gpt-5-nano`
+  - `librarian`: `opencode/gpt-5-nano`
+  - `fixer`: `opencode/gpt-5-nano`
+- Fallback chains:
+  - `orchestrator`: `chutes/kimi-k2.5 -> openai/gpt-5.3-codex -> github-copilot/grok-code-fast-1 -> google/antigravity-gemini-3-flash -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `oracle`: `openai/gpt-5.3-codex -> github-copilot/grok-code-fast-1 -> google/antigravity-gemini-3-pro -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `designer`: `chutes/kimi-k2.5 -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> google/antigravity-gemini-3-flash -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `explorer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> google/antigravity-gemini-3-flash -> chutes/minimax-m2.1 -> opencode/big-pickle`
+  - `librarian`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> google/antigravity-gemini-3-flash -> chutes/minimax-m2.1 -> opencode/big-pickle`
+  - `fixer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> google/antigravity-gemini-3-flash -> chutes/minimax-m2.1 -> opencode/big-pickle`
+
+## Scenario S5 - 6 providers
+
+Active providers: OpenAI + Anthropic + Copilot + ZAI Plan + Chutes + OpenCode Free
+
+- Preset: `openai`
+- Agents:
+  - `orchestrator`: `openai/gpt-5.3-codex`
+  - `oracle`: `openai/gpt-5.3-codex` (`high`)
+  - `designer`: `openai/gpt-5.1-codex-mini` (`medium`)
+  - `explorer`: `opencode/gpt-5-nano`
+  - `librarian`: `opencode/gpt-5-nano`
+  - `fixer`: `opencode/gpt-5-nano`
+- Fallback chains:
+  - `orchestrator`: `openai/gpt-5.3-codex -> anthropic/claude-opus-4-6 -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `oracle`: `openai/gpt-5.3-codex -> anthropic/claude-opus-4-6 -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `designer`: `openai/gpt-5.1-codex-mini -> anthropic/claude-sonnet-4-5 -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
+  - `explorer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> anthropic/claude-haiku-4-5 -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> chutes/minimax-m2.1 -> opencode/big-pickle`
+  - `librarian`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> anthropic/claude-sonnet-4-5 -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> chutes/minimax-m2.1 -> opencode/big-pickle`
+  - `fixer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> anthropic/claude-sonnet-4-5 -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> chutes/minimax-m2.1 -> opencode/big-pickle`
+
+## Notes
+
+- This matrix shows deterministic `generateLiteConfig` output for the selected combinations.
+- If the dynamic planner is used during full install (live model catalog), the generated `dynamic` preset may differ based on discovered models and capabilities.
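
The chains listed in the scenarios above share one shape: the agent's primary model first, then provider fallbacks in a fixed order, deduplicated, always terminating in `opencode/big-pickle`. A simplified sketch of that shaping (the real ordering logic lives in `generateLiteConfig`; this is an illustration, not its code):

```typescript
// Build a deduplicated fallback chain ending in the free terminal model.
function buildChain(primary: string, providerFallbacks: string[]): string[] {
  const chain: string[] = [];
  const seen = new Set<string>();
  for (const model of [primary, ...providerFallbacks, 'opencode/big-pickle']) {
    if (seen.has(model)) continue; // skip duplicates, keep first occurrence
    seen.add(model);
    chain.push(model);
  }
  return chain;
}
```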

+ 25 - 2
docs/quick-reference.md

@@ -17,6 +17,28 @@ Complete reference for oh-my-opencode-slim configuration and capabilities.
 
 Presets are pre-configured agent model mappings for different provider combinations. The installer generates these automatically based on your available providers, and you can switch between them instantly.
 
+### OpenCode Free Discovery
+
+The installer can discover the latest OpenCode free models by running:
+
+```bash
+opencode models --refresh --verbose
+```
+
+Selection rules:
+- Only free `opencode/*` models are considered.
+- A coding-first primary model is selected for orchestration/strategy workloads.
+- A support model is selected for research/implementation workloads.
+- OpenCode-only mode can assign multiple OpenCode models across agents.
+- Hybrid mode can combine OpenCode free models with OpenAI/Kimi/Antigravity; `designer` remains on the external provider mapping.
+
+Useful flags:
+
+```bash
+--opencode-free=yes|no
+--opencode-free-model=<id|auto>
+```
+
 ### Switching Presets
 
 **Method 1: Edit Config File**
@@ -66,13 +88,14 @@ Access Claude 4.5 and Gemini 3 models through Google's Antigravity infrastructur
 
 **Installation:**
 ```bash
-bunx oh-my-opencode-slim install --antigravity=yes
+bunx oh-my-opencode-slim install --antigravity=yes --opencode-free=yes --opencode-free-model=auto
 ```
 
 **Agent Mapping:**
 - Orchestrator: Kimi (if available)
 - Oracle: GPT (if available)
 - Explorer/Librarian/Designer/Fixer: Gemini 3 Flash via Antigravity
+- If OpenCode free mode is enabled, Explorer/Librarian/Fixer may use the selected free `opencode/*` support model, while `designer` stays on the external provider mapping
 
 **Authentication:**
 ```bash
@@ -476,4 +499,4 @@ The installer generates this file based on your providers. You can manually cust
 | `tmux.main_pane_size` | number | `60` | Main pane size as percentage (20-80) |
 | `disabled_mcps` | string[] | `[]` | MCP server IDs to disable globally (e.g., `"websearch"`) |
 
-> **Note:** Agent configuration should be defined within `presets`. The root-level `agents` field is deprecated.
+> **Note:** Agent configuration should be defined within `presets`. The root-level `agents` field is deprecated.

+ 91 - 1
src/background/background-manager.test.ts

@@ -11,6 +11,7 @@ function createMockContext(overrides?: {
       parts?: Array<{ type: string; text?: string }>;
     }>;
   };
+  promptImpl?: (args: any) => Promise<unknown>;
 }) {
   let callCount = 0;
   return {
@@ -30,7 +31,12 @@ function createMockContext(overrides?: {
         messages: mock(
           async () => overrides?.sessionMessagesResult ?? { data: [] },
         ),
-        prompt: mock(async () => ({})),
+        prompt: mock(async (args: any) => {
+          if (overrides?.promptImpl) {
+            return await overrides.promptImpl(args);
+          }
+          return {};
+        }),
       },
     },
     directory: '/test/directory',
@@ -475,6 +481,90 @@ describe('BackgroundTaskManager', () => {
   });
 
   describe('BackgroundTask logic', () => {
+    test('falls back to next model when first model prompt fails', async () => {
+      let promptCalls = 0;
+      const ctx = createMockContext({
+        promptImpl: async (args) => {
+          const isTaskPrompt =
+            typeof args.path?.id === 'string' &&
+            args.path.id.startsWith('test-session-');
+          const isParentNotification = !isTaskPrompt;
+          if (isParentNotification) return {};
+
+          promptCalls += 1;
+          const modelRef = args.body?.model;
+          if (
+            modelRef?.providerID === 'openai' &&
+            modelRef?.modelID === 'gpt-5.2-codex'
+          ) {
+            throw new Error('primary failed');
+          }
+          return {};
+        },
+      });
+
+      const manager = new BackgroundTaskManager(ctx, undefined, {
+        fallback: {
+          enabled: true,
+          timeoutMs: 15000,
+          chains: {
+            explorer: ['openai/gpt-5.2-codex', 'opencode/gpt-5-nano'],
+          },
+        },
+      });
+
+      const task = manager.launch({
+        agent: 'explorer',
+        prompt: 'test',
+        description: 'test',
+        parentSessionId: 'parent-123',
+      });
+
+      await Promise.resolve();
+      await Promise.resolve();
+      await new Promise((r) => setTimeout(r, 10));
+
+      expect(task.status).toBe('running');
+      expect(promptCalls).toBe(2);
+    });
+
+    test('fails task when all fallback models fail', async () => {
+      const ctx = createMockContext({
+        promptImpl: async (args) => {
+          const isTaskPrompt =
+            typeof args.path?.id === 'string' &&
+            args.path.id.startsWith('test-session-');
+          const isParentNotification = !isTaskPrompt;
+          if (isParentNotification) return {};
+          throw new Error('all models failing');
+        },
+      });
+
+      const manager = new BackgroundTaskManager(ctx, undefined, {
+        fallback: {
+          enabled: true,
+          timeoutMs: 15000,
+          chains: {
+            explorer: ['openai/gpt-5.2-codex', 'opencode/gpt-5-nano'],
+          },
+        },
+      });
+
+      const task = manager.launch({
+        agent: 'explorer',
+        prompt: 'test',
+        description: 'test',
+        parentSessionId: 'parent-123',
+      });
+
+      await Promise.resolve();
+      await Promise.resolve();
+      await new Promise((r) => setTimeout(r, 10));
+
+      expect(task.status).toBe('failed');
+      expect(task.error).toContain('All fallback models failed');
+    });
+
     test('extracts content from multiple types and messages', async () => {
       const ctx = createMockContext({
         sessionMessagesResult: {

+ 101 - 6
src/background/background-manager.ts

@@ -15,6 +15,7 @@
 
 import type { PluginInput } from '@opencode-ai/plugin';
 import type { BackgroundTaskConfig, PluginConfig } from '../config';
+import { FALLBACK_FAILOVER_TIMEOUT_MS } from '../config';
 import type { TmuxConfig } from '../config/schema';
 import { applyAgentVariant, resolveAgentVariant } from '../utils';
 import { log } from '../utils/logger';
@@ -32,6 +33,21 @@ type PromptBody = {
 
 type OpencodeClient = PluginInput['client'];
 
+function parseModelReference(model: string): {
+  providerID: string;
+  modelID: string;
+} | null {
+  const slashIndex = model.indexOf('/');
+  if (slashIndex <= 0 || slashIndex >= model.length - 1) {
+    return null;
+  }
+
+  return {
+    providerID: model.slice(0, slashIndex),
+    modelID: model.slice(slashIndex + 1),
+  };
+}
+
 /**
  * Represents a background task running in an isolated session.
  * Tasks are tracked from creation through completion or failure.
@@ -165,6 +181,40 @@ export class BackgroundTaskManager {
     }
   }
 
+  private resolveFallbackChain(agentName: string): string[] {
+    const fallback = this.config?.fallback;
+    const chains = fallback?.chains as
+      | Record<string, string[] | undefined>
+      | undefined;
+    const configuredChain = chains?.[agentName] ?? [];
+    const primary = this.config?.agents?.[agentName]?.model;
+
+    const chain: string[] = [];
+    const seen = new Set<string>();
+
+    for (const model of [primary, ...configuredChain]) {
+      if (!model || seen.has(model)) continue;
+      seen.add(model);
+      chain.push(model);
+    }
+
+    return chain;
+  }
+
+  private async promptWithTimeout(
+    args: Parameters<OpencodeClient['session']['prompt']>[0],
+    timeoutMs: number,
+  ): Promise<void> {
+    await Promise.race([
+      this.client.session.prompt(args),
+      new Promise<never>((_, reject) => {
+        setTimeout(() => {
+          reject(new Error(`Prompt timed out after ${timeoutMs}ms`));
+        }, timeoutMs);
+      }),
+    ]);
+  }
+
   /**
    * Start a task in the background (Phase B).
    */
@@ -205,17 +255,62 @@ export class BackgroundTaskManager {
       // Send prompt
       const promptQuery: Record<string, string> = { directory: this.directory };
       const resolvedVariant = resolveAgentVariant(this.config, task.agent);
-      const promptBody = applyAgentVariant(resolvedVariant, {
+      const basePromptBody = applyAgentVariant(resolvedVariant, {
         agent: task.agent,
         tools: { background_task: false, task: false },
         parts: [{ type: 'text' as const, text: task.prompt }],
       } as PromptBody) as unknown as PromptBody;
 
-      await this.client.session.prompt({
-        path: { id: session.data.id },
-        body: promptBody,
-        query: promptQuery,
-      });
+      const timeoutMs =
+        this.config?.fallback?.timeoutMs ?? FALLBACK_FAILOVER_TIMEOUT_MS;
+      const fallbackEnabled = this.config?.fallback?.enabled ?? true;
+      const chain = fallbackEnabled
+        ? this.resolveFallbackChain(task.agent)
+        : [];
+      const attemptModels = chain.length > 0 ? chain : [undefined];
+
+      const errors: string[] = [];
+      let succeeded = false;
+
+      for (const model of attemptModels) {
+        try {
+          const body: PromptBody = {
+            ...basePromptBody,
+            model: undefined,
+          };
+
+          if (model) {
+            const ref = parseModelReference(model);
+            if (!ref) {
+              throw new Error(`Invalid fallback model format: ${model}`);
+            }
+            body.model = ref;
+          }
+
+          await this.promptWithTimeout(
+            {
+              path: { id: session.data.id },
+              body,
+              query: promptQuery,
+            },
+            timeoutMs,
+          );
+
+          succeeded = true;
+          break;
+        } catch (error) {
+          const msg = error instanceof Error ? error.message : String(error);
+          if (model) {
+            errors.push(`${model}: ${msg}`);
+          } else {
+            errors.push(`default-model: ${msg}`);
+          }
+        }
+      }
+
+      if (!succeeded) {
+        throw new Error(`All fallback models failed. ${errors.join(' | ')}`);
+      }
 
       log(`[background-manager] task started: ${task.id}`, {
         sessionId: session.data.id,
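
The `parseModelReference` helper added in this file splits a `provider/model` string at the first slash and rejects empty segments. A standalone sketch with the same semantics, for reference:

```typescript
// Same semantics as the parseModelReference helper in the diff above:
// split "provider/model" at the first slash; reject an empty provider
// or model segment by returning null.
function parseModelRef(
  model: string,
): { providerID: string; modelID: string } | null {
  const slashIndex = model.indexOf('/');
  if (slashIndex <= 0 || slashIndex >= model.length - 1) return null;
  return {
    providerID: model.slice(0, slashIndex),
    modelID: model.slice(slashIndex + 1),
  };
}
```

Note that only the first slash splits, so a model ID may itself contain slashes.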

+ 69 - 0
src/cli/chutes-selection.test.ts

@@ -0,0 +1,69 @@
+/// <reference types="bun-types" />
+
+import { describe, expect, test } from 'bun:test';
+import {
+  pickBestCodingChutesModel,
+  pickSupportChutesModel,
+} from './chutes-selection';
+import type { OpenCodeFreeModel } from './types';
+
+function model(input: Partial<OpenCodeFreeModel>): OpenCodeFreeModel {
+  return {
+    providerID: 'chutes',
+    model: input.model ?? 'chutes/unknown',
+    name: input.name ?? input.model ?? 'unknown',
+    status: input.status ?? 'active',
+    contextLimit: input.contextLimit ?? 128000,
+    outputLimit: input.outputLimit ?? 16000,
+    reasoning: input.reasoning ?? false,
+    toolcall: input.toolcall ?? false,
+    attachment: input.attachment ?? false,
+    dailyRequestLimit: input.dailyRequestLimit,
+  };
+}
+
+describe('chutes-selection', () => {
+  test('prefers reasoning model for primary role', () => {
+    const models = [
+      model({
+        model: 'chutes/minimax-m2.1',
+        reasoning: true,
+        toolcall: true,
+        contextLimit: 512000,
+        outputLimit: 64000,
+        dailyRequestLimit: 300,
+      }),
+      model({
+        model: 'chutes/gpt-oss-20b-mini',
+        reasoning: false,
+        toolcall: true,
+        dailyRequestLimit: 5000,
+      }),
+    ];
+
+    expect(pickBestCodingChutesModel(models)?.model).toBe(
+      'chutes/minimax-m2.1',
+    );
+  });
+
+  test('prefers high-cap fast model for support role', () => {
+    const models = [
+      model({
+        model: 'chutes/kimi-k2.5',
+        reasoning: true,
+        toolcall: true,
+        dailyRequestLimit: 300,
+      }),
+      model({
+        model: 'chutes/qwen3-coder-30b-mini',
+        reasoning: true,
+        toolcall: true,
+        dailyRequestLimit: 5000,
+      }),
+    ];
+
+    expect(pickSupportChutesModel(models, 'chutes/kimi-k2.5')?.model).toBe(
+      'chutes/qwen3-coder-30b-mini',
+    );
+  });
+});

+ 64 - 0
src/cli/chutes-selection.ts

@@ -0,0 +1,64 @@
+import {
+  pickBestModel,
+  pickPrimaryAndSupport,
+  type ScoreFunction,
+} from './model-selection';
+import type { OpenCodeFreeModel } from './types';
+
+function speedBonus(modelName: string): number {
+  const lower = modelName.toLowerCase();
+  let score = 0;
+  if (lower.includes('nano')) score += 60;
+  if (lower.includes('flash')) score += 45;
+  if (lower.includes('mini')) score += 30;
+  if (lower.includes('lite')) score += 20;
+  if (lower.includes('small')) score += 15;
+  return score;
+}
+
+const scoreChutesPrimaryForCoding: ScoreFunction<OpenCodeFreeModel> = (
+  model,
+) => {
+  return (
+    (model.reasoning ? 120 : 0) +
+    (model.toolcall ? 80 : 0) +
+    (model.attachment ? 20 : 0) +
+    Math.min(model.contextLimit, 1_000_000) / 9_000 +
+    Math.min(model.outputLimit, 300_000) / 10_000 +
+    (model.status === 'active' ? 10 : 0)
+  );
+};
+
+const scoreChutesSupportForCoding: ScoreFunction<OpenCodeFreeModel> = (
+  model,
+) => {
+  return (
+    (model.toolcall ? 90 : 0) +
+    (model.reasoning ? 35 : 0) +
+    speedBonus(model.model) +
+    Math.min(model.contextLimit, 400_000) / 20_000 +
+    (model.status === 'active' ? 8 : 0)
+  );
+};
+
+export function pickBestCodingChutesModel(
+  models: OpenCodeFreeModel[],
+): OpenCodeFreeModel | null {
+  return pickBestModel(models, scoreChutesPrimaryForCoding);
+}
+
+export function pickSupportChutesModel(
+  models: OpenCodeFreeModel[],
+  primaryModel?: string,
+): OpenCodeFreeModel | null {
+  const { support } = pickPrimaryAndSupport(
+    models,
+    {
+      primary: scoreChutesPrimaryForCoding,
+      support: scoreChutesSupportForCoding,
+    },
+    primaryModel,
+  );
+
+  return support;
+}
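
The support-role scorer above leans on name hints: substrings like `nano` and `mini` that usually indicate faster, cheaper variants add to a model's score. A standalone copy of the `speedBonus` heuristic, showing how the bonuses stack:

```typescript
// Standalone copy of the speedBonus heuristic from the diff above.
// Each matching substring adds its bonus; a name can match several.
function speedBonus(modelName: string): number {
  const lower = modelName.toLowerCase();
  let score = 0;
  if (lower.includes('nano')) score += 60;
  if (lower.includes('flash')) score += 45;
  if (lower.includes('mini')) score += 30;
  if (lower.includes('lite')) score += 20;
  if (lower.includes('small')) score += 15;
  return score;
}
```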

+ 42 - 0
src/cli/config-io.test.ts

@@ -11,6 +11,7 @@ import {
 import { tmpdir } from 'node:os';
 import { join } from 'node:path';
 import {
+  addChutesProvider,
   addPluginToOpenCodeConfig,
   detectCurrentConfig,
   disableDefaultAgents,
@@ -126,6 +127,7 @@ describe('config-io', () => {
     const result = writeLiteConfig({
       hasKimi: true,
       hasOpenAI: false,
+      hasAntigravity: false,
       hasOpencodeZen: false,
       hasTmux: true,
       installSkills: false,
@@ -175,6 +177,9 @@ describe('config-io', () => {
         presets: {
           openai: {
             orchestrator: { model: 'openai/gpt-4' },
+            oracle: { model: 'anthropic/claude-opus-4-6' },
+            explorer: { model: 'github-copilot/grok-code-fast-1' },
+            librarian: { model: 'zai-coding-plan/glm-4.7' },
           },
         },
         tmux: { enabled: true },
@@ -185,6 +190,43 @@ describe('config-io', () => {
     expect(detected.isInstalled).toBe(true);
     expect(detected.hasKimi).toBe(true);
     expect(detected.hasOpenAI).toBe(true);
+    expect(detected.hasAnthropic).toBe(true);
+    expect(detected.hasCopilot).toBe(true);
+    expect(detected.hasZaiPlan).toBe(true);
     expect(detected.hasTmux).toBe(true);
   });
+
+  test('addChutesProvider configures chutes provider and detection', () => {
+    const configPath = join(tmpDir, 'opencode', 'opencode.json');
+    const litePath = join(tmpDir, 'opencode', 'oh-my-opencode-slim.json');
+    paths.ensureConfigDir();
+
+    writeFileSync(
+      configPath,
+      JSON.stringify({ plugin: ['oh-my-opencode-slim'] }),
+    );
+    writeFileSync(
+      litePath,
+      JSON.stringify({
+        preset: 'chutes',
+        presets: {
+          chutes: {
+            orchestrator: { model: 'chutes/kimi-k2.5' },
+          },
+        },
+      }),
+    );
+
+    const result = addChutesProvider();
+    expect(result.success).toBe(true);
+
+    const saved = JSON.parse(readFileSync(configPath, 'utf-8'));
+    expect(saved.provider.chutes).toBeDefined();
+    expect(saved.provider.chutes.options.baseURL).toBe(
+      'https://llm.chutes.ai/v1',
+    );
+
+    const detected = detectCurrentConfig();
+    expect(detected.hasChutes).toBe(true);
+  });
 });

+ 49 - 0
src/cli/config-io.ts

@@ -340,6 +340,41 @@ export function addGoogleProvider(): ConfigMergeResult {
   }
 }
 
+export function addChutesProvider(): ConfigMergeResult {
+  const configPath = getExistingConfigPath();
+  try {
+    const { config: parsedConfig, error } = parseConfig(configPath);
+    if (error) {
+      return {
+        success: false,
+        configPath,
+        error: `Failed to parse config: ${error}`,
+      };
+    }
+    const config = parsedConfig ?? {};
+    const providers = (config.provider ?? {}) as Record<string, unknown>;
+
+    providers.chutes = {
+      npm: '@ai-sdk/openai-compatible',
+      name: 'Chutes',
+      options: {
+        baseURL: 'https://llm.chutes.ai/v1',
+        apiKey: '{env:CHUTES_API_KEY}',
+      },
+    };
+    config.provider = providers;
+
+    writeConfig(configPath, config);
+    return { success: true, configPath };
+  } catch (err) {
+    return {
+      success: false,
+      configPath,
+      error: `Failed to add chutes provider: ${err}`,
+    };
+  }
+}
+
 export function detectAntigravityConfig(): boolean {
   const { config } = parseConfig(getExistingConfigPath());
   if (!config) return false;
@@ -356,7 +391,11 @@ export function detectCurrentConfig(): DetectedConfig {
     isInstalled: false,
     hasKimi: false,
     hasOpenAI: false,
+    hasAnthropic: false,
+    hasCopilot: false,
+    hasZaiPlan: false,
     hasAntigravity: false,
+    hasChutes: false,
     hasOpencodeZen: false,
     hasTmux: false,
   };
@@ -373,6 +412,10 @@ export function detectCurrentConfig(): DetectedConfig {
   // Check for providers
   const providers = config.provider as Record<string, unknown> | undefined;
   result.hasKimi = !!providers?.kimi;
+  result.hasAnthropic = !!providers?.anthropic;
+  result.hasCopilot = !!providers?.['github-copilot'];
+  result.hasZaiPlan = !!providers?.['zai-coding-plan'];
+  result.hasChutes = !!providers?.chutes;
   if (providers?.google) result.hasAntigravity = true;
 
   // Try to detect from lite config
@@ -390,10 +433,16 @@ export function detectCurrentConfig(): DetectedConfig {
         .map((a) => a?.model)
         .filter(Boolean);
       result.hasOpenAI = models.some((m) => m?.startsWith('openai/'));
+      result.hasAnthropic = models.some((m) => m?.startsWith('anthropic/'));
+      result.hasCopilot = models.some((m) => m?.startsWith('github-copilot/'));
+      result.hasZaiPlan = models.some((m) => m?.startsWith('zai-coding-plan/'));
       result.hasOpencodeZen = models.some((m) => m?.startsWith('opencode/'));
       if (models.some((m) => m?.startsWith('google/'))) {
         result.hasAntigravity = true;
       }
+      if (models.some((m) => m?.startsWith('chutes/'))) {
+        result.hasChutes = true;
+      }
     }
 
     if (configObj.tmux && typeof configObj.tmux === 'object') {

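For reference, the `provider.chutes` entry that `addChutesProvider` writes into `opencode.json` looks like this (values taken directly from the function above):

```json
{
  "provider": {
    "chutes": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Chutes",
      "options": {
        "baseURL": "https://llm.chutes.ai/v1",
        "apiKey": "{env:CHUTES_API_KEY}"
      }
    }
  }
}
```

`detectCurrentConfig` then reports `hasChutes: true` whenever this `provider.chutes` key is present.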
+ 5 - 0
src/cli/config-manager.ts

@@ -1,4 +1,9 @@
+export * from './chutes-selection';
 export * from './config-io';
+export * from './dynamic-model-selection';
+export * from './model-selection';
+export * from './opencode-models';
+export * from './opencode-selection';
 export * from './paths';
 export * from './providers';
 export * from './system';

+ 91 - 0
src/cli/dynamic-model-selection.test.ts

@@ -0,0 +1,91 @@
+/// <reference types="bun-types" />
+
+import { describe, expect, test } from 'bun:test';
+import { buildDynamicModelPlan } from './dynamic-model-selection';
+import type { DiscoveredModel, InstallConfig } from './types';
+
+function m(
+  input: Partial<DiscoveredModel> & { model: string },
+): DiscoveredModel {
+  const [providerID] = input.model.split('/');
+  return {
+    providerID: providerID ?? 'openai',
+    model: input.model,
+    name: input.name ?? input.model,
+    status: input.status ?? 'active',
+    contextLimit: input.contextLimit ?? 200000,
+    outputLimit: input.outputLimit ?? 32000,
+    reasoning: input.reasoning ?? true,
+    toolcall: input.toolcall ?? true,
+    attachment: input.attachment ?? false,
+    dailyRequestLimit: input.dailyRequestLimit,
+    costInput: input.costInput,
+    costOutput: input.costOutput,
+  };
+}
+
+function baseInstallConfig(): InstallConfig {
+  return {
+    hasKimi: false,
+    hasOpenAI: true,
+    hasAnthropic: false,
+    hasCopilot: true,
+    hasZaiPlan: true,
+    hasAntigravity: false,
+    hasChutes: true,
+    hasOpencodeZen: true,
+    useOpenCodeFreeModels: true,
+    selectedOpenCodePrimaryModel: 'opencode/glm-4.7-free',
+    selectedOpenCodeSecondaryModel: 'opencode/gpt-5-nano',
+    selectedChutesPrimaryModel: 'chutes/kimi-k2.5',
+    selectedChutesSecondaryModel: 'chutes/minimax-m2.1',
+    hasTmux: false,
+    installSkills: false,
+    installCustomSkills: false,
+  };
+}
+
+describe('dynamic-model-selection', () => {
+  test('builds assignments and chains for all six agents', () => {
+    const plan = buildDynamicModelPlan(
+      [
+        m({ model: 'openai/gpt-5.3-codex', reasoning: true, toolcall: true }),
+        m({
+          model: 'openai/gpt-5.1-codex-mini',
+          reasoning: true,
+          toolcall: true,
+        }),
+        m({
+          model: 'github-copilot/grok-code-fast-1',
+          reasoning: true,
+          toolcall: true,
+        }),
+        m({
+          model: 'zai-coding-plan/glm-4.7',
+          reasoning: true,
+          toolcall: true,
+        }),
+        m({ model: 'chutes/kimi-k2.5', reasoning: true, toolcall: true }),
+        m({ model: 'chutes/minimax-m2.1', reasoning: true, toolcall: true }),
+      ],
+      baseInstallConfig(),
+    );
+
+    expect(plan).not.toBeNull();
+    const agents = plan?.agents ?? {};
+    const chains = plan?.chains ?? {};
+
+    expect(Object.keys(agents).sort()).toEqual([
+      'designer',
+      'explorer',
+      'fixer',
+      'librarian',
+      'oracle',
+      'orchestrator',
+    ]);
+    expect(chains.oracle).toContain('openai/gpt-5.3-codex');
+    expect(chains.orchestrator).toContain('chutes/kimi-k2.5');
+    expect(chains.explorer).toContain('opencode/gpt-5-nano');
+    expect(chains.fixer[chains.fixer.length - 1]).toBe('opencode/big-pickle');
+  });
+});

+ 222 - 0
src/cli/dynamic-model-selection.ts

@@ -0,0 +1,222 @@
+import type { DiscoveredModel, DynamicModelPlan, InstallConfig } from './types';
+
+const AGENTS = [
+  'orchestrator',
+  'oracle',
+  'designer',
+  'explorer',
+  'librarian',
+  'fixer',
+] as const;
+
+type AgentName = (typeof AGENTS)[number];
+
+const ROLE_VARIANT: Record<AgentName, string | undefined> = {
+  orchestrator: undefined,
+  oracle: 'high',
+  designer: 'medium',
+  explorer: 'low',
+  librarian: 'low',
+  fixer: 'low',
+};
+
+function getEnabledProviders(config: InstallConfig): string[] {
+  const providers: string[] = [];
+  if (config.hasOpenAI) providers.push('openai');
+  if (config.hasAnthropic) providers.push('anthropic');
+  if (config.hasCopilot) providers.push('github-copilot');
+  if (config.hasZaiPlan) providers.push('zai-coding-plan');
+  if (config.hasKimi) providers.push('kimi-for-coding');
+  if (config.hasAntigravity) providers.push('google');
+  if (config.hasChutes) providers.push('chutes');
+  if (config.useOpenCodeFreeModels) providers.push('opencode');
+  return providers;
+}
+
+function tokenScore(name: string, re: RegExp, points: number): number {
+  return re.test(name) ? points : 0;
+}
+
+function statusScore(status: DiscoveredModel['status']): number {
+  if (status === 'active') return 20;
+  if (status === 'beta') return 8;
+  if (status === 'alpha') return -5;
+  return -40;
+}
+
+function baseScore(model: DiscoveredModel): number {
+  const lowered = `${model.model} ${model.name}`.toLowerCase();
+  const context = Math.min(model.contextLimit, 1_000_000) / 50_000;
+  const output = Math.min(model.outputLimit, 300_000) / 30_000;
+  const deep = tokenScore(
+    lowered,
+    /(opus|pro|thinking|reason|r1|gpt-5|k2\.5)/i,
+    12,
+  );
+  const fast = tokenScore(
+    lowered,
+    /(nano|flash|mini|lite|fast|turbo|haiku|small)/i,
+    12,
+  );
+  const code = tokenScore(lowered, /(codex|coder|code|dev|program)/i, 12);
+  const versionBoost =
+    tokenScore(lowered, /gpt-5\.3/i, 12) +
+    tokenScore(lowered, /gpt-5\.2/i, 8) +
+    tokenScore(lowered, /k2\.5/i, 6);
+
+  return (
+    statusScore(model.status) +
+    context +
+    output +
+    deep +
+    fast +
+    code +
+    versionBoost +
+    (model.toolcall ? 25 : 0)
+  );
+}
+
+function roleScore(agent: AgentName, model: DiscoveredModel): number {
+  const lowered = `${model.model} ${model.name}`.toLowerCase();
+  const reasoning = model.reasoning ? 1 : 0;
+  const toolcall = model.toolcall ? 1 : 0;
+  const attachment = model.attachment ? 1 : 0;
+  const context = Math.min(model.contextLimit, 1_000_000) / 60_000;
+  const output = Math.min(model.outputLimit, 300_000) / 40_000;
+  const deep = tokenScore(
+    lowered,
+    /(opus|pro|thinking|reason|r1|gpt-5|k2\.5)/i,
+    1,
+  );
+  const fast = tokenScore(
+    lowered,
+    /(nano|flash|mini|lite|fast|turbo|haiku|small)/i,
+    1,
+  );
+  const code = tokenScore(lowered, /(codex|coder|code|dev|program)/i, 1);
+
+  if (
+    (agent === 'orchestrator' ||
+      agent === 'explorer' ||
+      agent === 'librarian' ||
+      agent === 'fixer') &&
+    !model.toolcall
+  ) {
+    return -10_000;
+  }
+
+  if (model.status === 'deprecated') {
+    return -5_000;
+  }
+
+  const score = baseScore(model);
+
+  if (agent === 'orchestrator') {
+    return (
+      score + reasoning * 40 + toolcall * 25 + deep * 10 + code * 8 + context
+    );
+  }
+  if (agent === 'oracle') {
+    return score + reasoning * 55 + deep * 18 + context * 1.2 + toolcall * 10;
+  }
+  if (agent === 'designer') {
+    return (
+      score +
+      attachment * 25 +
+      reasoning * 18 +
+      toolcall * 15 +
+      context * 0.8 +
+      output
+    );
+  }
+  if (agent === 'explorer') {
+    return score + fast * 35 + toolcall * 28 + reasoning * 8 + context * 0.7;
+  }
+  if (agent === 'librarian') {
+    return score + context * 30 + toolcall * 22 + reasoning * 15 + output * 10;
+  }
+
+  return (
+    score + code * 28 + toolcall * 24 + fast * 18 + reasoning * 14 + output * 8
+  );
+}
+
+function rankModels(
+  models: DiscoveredModel[],
+  agent: AgentName,
+): DiscoveredModel[] {
+  return [...models].sort((a, b) => roleScore(agent, b) - roleScore(agent, a));
+}
+
+function dedupe(models: Array<string | undefined>): string[] {
+  const seen = new Set<string>();
+  const result: string[] = [];
+  for (const model of models) {
+    if (!model || seen.has(model)) continue;
+    seen.add(model);
+    result.push(model);
+  }
+  return result;
+}
+
+export function buildDynamicModelPlan(
+  catalog: DiscoveredModel[],
+  config: InstallConfig,
+): DynamicModelPlan | null {
+  const enabledProviders = new Set(getEnabledProviders(config));
+  const providerCandidates = catalog.filter((m) =>
+    enabledProviders.has(m.providerID),
+  );
+
+  if (providerCandidates.length === 0) {
+    return null;
+  }
+
+  const agents: Record<string, { model: string; variant?: string }> = {};
+  const chains: Record<string, string[]> = {};
+
+  for (const agent of AGENTS) {
+    const ranked = rankModels(providerCandidates, agent);
+    const primary = ranked[0];
+    if (!primary) continue;
+
+    const providerOrder = dedupe(ranked.map((m) => m.providerID));
+    const perProviderBest = providerOrder
+      .map(
+        (providerID) => ranked.find((m) => m.providerID === providerID)?.model,
+      )
+      .filter((m): m is string => Boolean(m));
+
+    const selectedOpencode =
+      agent === 'explorer' || agent === 'librarian' || agent === 'fixer'
+        ? (config.selectedOpenCodeSecondaryModel ??
+          config.selectedOpenCodePrimaryModel)
+        : config.selectedOpenCodePrimaryModel;
+
+    const selectedChutes =
+      agent === 'explorer' || agent === 'librarian' || agent === 'fixer'
+        ? (config.selectedChutesSecondaryModel ??
+          config.selectedChutesPrimaryModel)
+        : config.selectedChutesPrimaryModel;
+
+    const chain = dedupe([
+      primary.model,
+      ...perProviderBest,
+      selectedChutes,
+      selectedOpencode,
+      'opencode/big-pickle',
+    ]).slice(0, 7);
+
+    agents[agent] = {
+      model: chain[0] ?? primary.model,
+      variant: ROLE_VARIANT[agent],
+    };
+    chains[agent] = chain;
+  }
+
+  if (Object.keys(agents).length === 0) {
+    return null;
+  }
+
+  return { agents, chains };
+}

+ 19 - 1
src/cli/index.ts

@@ -14,12 +14,24 @@ function parseArgs(args: string[]): InstallArgs {
       result.kimi = arg.split('=')[1] as BooleanArg;
     } else if (arg.startsWith('--openai=')) {
       result.openai = arg.split('=')[1] as BooleanArg;
+    } else if (arg.startsWith('--anthropic=')) {
+      result.anthropic = arg.split('=')[1] as BooleanArg;
+    } else if (arg.startsWith('--copilot=')) {
+      result.copilot = arg.split('=')[1] as BooleanArg;
+    } else if (arg.startsWith('--zai-plan=')) {
+      result.zaiPlan = arg.split('=')[1] as BooleanArg;
     } else if (arg.startsWith('--antigravity=')) {
       result.antigravity = arg.split('=')[1] as BooleanArg;
+    } else if (arg.startsWith('--chutes=')) {
+      result.chutes = arg.split('=')[1] as BooleanArg;
     } else if (arg.startsWith('--tmux=')) {
       result.tmux = arg.split('=')[1] as BooleanArg;
     } else if (arg.startsWith('--skills=')) {
       result.skills = arg.split('=')[1] as BooleanArg;
+    } else if (arg.startsWith('--opencode-free=')) {
+      result.opencodeFree = arg.split('=')[1] as BooleanArg;
+    } else if (arg.startsWith('--opencode-free-model=')) {
+      result.opencodeFreeModel = arg.split('=')[1];
     } else if (arg === '-h' || arg === '--help') {
       printHelp();
       process.exit(0);
@@ -38,7 +50,13 @@ Usage: bunx oh-my-opencode-slim install [OPTIONS]
 Options:
   --kimi=yes|no          Kimi API access (yes/no)
   --openai=yes|no        OpenAI API access (yes/no)
+  --anthropic=yes|no     Anthropic access (yes/no)
+  --copilot=yes|no       GitHub Copilot access (yes/no)
+  --zai-plan=yes|no      ZAI Coding Plan access (yes/no)
   --antigravity=yes|no   Antigravity/Google models (yes/no)
+  --chutes=yes|no        Chutes models (yes/no)
+  --opencode-free=yes|no Use OpenCode free models (opencode/*)
+  --opencode-free-model=ID  Preferred OpenCode model id or "auto"
   --tmux=yes|no          Enable tmux integration (yes/no)
   --skills=yes|no        Install recommended skills (yes/no)
   --no-tui               Non-interactive mode (requires all flags)
@@ -46,7 +64,7 @@ Options:
 
 Examples:
   bunx oh-my-opencode-slim install
-  bunx oh-my-opencode-slim install --no-tui --kimi=yes --openai=yes --antigravity=yes --tmux=no --skills=yes
+  bunx oh-my-opencode-slim install --no-tui --kimi=yes --openai=yes --anthropic=yes --copilot=no --zai-plan=no --antigravity=yes --chutes=no --opencode-free=yes --opencode-free-model=auto --tmux=no --skills=yes
 `);
 }
 

+ 477 - 21
src/cli/install.ts

@@ -1,13 +1,22 @@
 import * as readline from 'node:readline/promises';
 import {
   addAntigravityPlugin,
+  addChutesProvider,
   addGoogleProvider,
   addPluginToOpenCodeConfig,
+  buildDynamicModelPlan,
   detectCurrentConfig,
   disableDefaultAgents,
+  discoverModelCatalog,
+  discoverOpenCodeFreeModels,
+  discoverProviderFreeModels,
   generateLiteConfig,
   getOpenCodeVersion,
   isOpenCodeInstalled,
+  pickBestCodingChutesModel,
+  pickBestCodingOpenCodeModel,
+  pickSupportChutesModel,
+  pickSupportOpenCodeModel,
   writeLiteConfig,
 } from './config-manager';
 import { CUSTOM_SKILLS, installCustomSkill } from './custom-skills';
@@ -18,6 +27,7 @@ import type {
   DetectedConfig,
   InstallArgs,
   InstallConfig,
+  OpenCodeFreeModel,
 } from './types';
 
 // Colors
@@ -113,9 +123,41 @@ function formatConfigSummary(config: InstallConfig): string {
     `  ${config.hasOpenAI ? SYMBOLS.check : `${DIM}○${RESET}`} OpenAI`,
   );
   lines.push(
+    `  ${config.hasAnthropic ? SYMBOLS.check : `${DIM}○${RESET}`} Anthropic`,
+  );
+  lines.push(
+    `  ${config.hasCopilot ? SYMBOLS.check : `${DIM}○${RESET}`} GitHub Copilot`,
+  );
+  lines.push(
+    `  ${config.hasZaiPlan ? SYMBOLS.check : `${DIM}○${RESET}`} ZAI Coding Plan`,
+  );
+  lines.push(
     `  ${config.hasAntigravity ? SYMBOLS.check : `${DIM}○${RESET}`} Antigravity (Google)`,
   );
-  lines.push(`  ${SYMBOLS.check} Opencode Zen (Big Pickle)`); // Always enabled
+  lines.push(
+    `  ${config.hasChutes ? SYMBOLS.check : `${DIM}○${RESET}`} Chutes`,
+  );
+  lines.push(`  ${SYMBOLS.check} Opencode Zen`);
+  if (config.useOpenCodeFreeModels && config.selectedOpenCodePrimaryModel) {
+    lines.push(
+      `  ${SYMBOLS.check} OpenCode Free Primary: ${BLUE}${config.selectedOpenCodePrimaryModel}${RESET}`,
+    );
+  }
+  if (config.useOpenCodeFreeModels && config.selectedOpenCodeSecondaryModel) {
+    lines.push(
+      `  ${SYMBOLS.check} OpenCode Free Support: ${BLUE}${config.selectedOpenCodeSecondaryModel}${RESET}`,
+    );
+  }
+  if (config.hasChutes && config.selectedChutesPrimaryModel) {
+    lines.push(
+      `  ${SYMBOLS.check} Chutes Primary: ${BLUE}${config.selectedChutesPrimaryModel}${RESET}`,
+    );
+  }
+  if (config.hasChutes && config.selectedChutesSecondaryModel) {
+    lines.push(
+      `  ${SYMBOLS.check} Chutes Support: ${BLUE}${config.selectedChutesSecondaryModel}${RESET}`,
+    );
+  }
   lines.push(
     `  ${config.hasTmux ? SYMBOLS.check : `${DIM}○${RESET}`} Tmux Integration`,
   );
@@ -157,14 +199,62 @@ function argsToConfig(args: InstallArgs): InstallConfig {
   return {
     hasKimi: args.kimi === 'yes',
     hasOpenAI: args.openai === 'yes',
+    hasAnthropic: args.anthropic === 'yes',
+    hasCopilot: args.copilot === 'yes',
+    hasZaiPlan: args.zaiPlan === 'yes',
     hasAntigravity: args.antigravity === 'yes',
+    hasChutes: args.chutes === 'yes',
     hasOpencodeZen: true, // Always enabled - free models available to all users
+    useOpenCodeFreeModels: args.opencodeFree === 'yes',
+    preferredOpenCodeModel:
+      args.opencodeFreeModel && args.opencodeFreeModel !== 'auto'
+        ? args.opencodeFreeModel
+        : undefined,
     hasTmux: args.tmux === 'yes',
     installSkills: args.skills === 'yes',
     installCustomSkills: args.skills === 'yes', // Install custom skills when skills=yes
   };
 }
 
+async function askModelSelection(
+  rl: readline.Interface,
+  models: OpenCodeFreeModel[],
+  defaultModel: string,
+  prompt: string,
+): Promise<string> {
+  const defaultIndex = Math.max(
+    0,
+    models.findIndex((model) => model.model === defaultModel),
+  );
+
+  for (const [index, model] of models.entries()) {
+    const marker =
+      model.model === defaultModel ? `${BOLD}(recommended)${RESET}` : '';
+    console.log(
+      `  ${DIM}${index + 1}.${RESET} ${BLUE}${model.model}${RESET} ${DIM}${model.name}${RESET} ${marker}`,
+    );
+  }
+
+  const answer = (
+    await rl.question(
+      `${BLUE}${prompt}${RESET} ${DIM}[default: ${defaultIndex + 1}]${RESET}: `,
+    )
+  )
+    .trim()
+    .toLowerCase();
+
+  if (!answer) return defaultModel;
+
+  const asNumber = Number.parseInt(answer, 10);
+  if (Number.isFinite(asNumber)) {
+    const chosen = models[asNumber - 1];
+    if (chosen) return chosen.model;
+  }
+
+  const byId = models.find((model) => model.model.toLowerCase() === answer);
+  return byId?.model ?? defaultModel;
+}
+
 async function askYesNo(
   rl: readline.Interface,
   prompt: string,
@@ -191,10 +281,69 @@ async function runInteractiveMode(
   // TODO: tmux has a bug, disabled for now
   // const tmuxInstalled = await isTmuxInstalled()
   // const totalQuestions = tmuxInstalled ? 3 : 2
-  const totalQuestions = 3;
+  const totalQuestions = 8;
 
   try {
     console.log(`${BOLD}Question 1/${totalQuestions}:${RESET}`);
+    const useOpenCodeFree = await askYesNo(
+      rl,
+      'Use only OpenCode free models (opencode/*) with live refresh?',
+      'yes',
+    );
+    console.log();
+
+    let availableOpenCodeFreeModels: OpenCodeFreeModel[] | undefined;
+    let selectedOpenCodePrimaryModel: string | undefined;
+    let selectedOpenCodeSecondaryModel: string | undefined;
+    let availableChutesFreeModels: OpenCodeFreeModel[] | undefined;
+    let selectedChutesPrimaryModel: string | undefined;
+    let selectedChutesSecondaryModel: string | undefined;
+
+    if (useOpenCodeFree === 'yes') {
+      printInfo('Refreshing models with: opencode models --refresh --verbose');
+      const discovery = await discoverOpenCodeFreeModels();
+
+      if (discovery.models.length === 0) {
+        printWarning(
+          discovery.error ??
+            'No OpenCode free models found. Continuing without OpenCode free-model assignment.',
+        );
+      } else {
+        availableOpenCodeFreeModels = discovery.models;
+
+        const recommendedPrimary =
+          pickBestCodingOpenCodeModel(discovery.models)?.model ??
+          discovery.models[0]?.model;
+
+        if (recommendedPrimary) {
+          console.log(`${BOLD}OpenCode Free Models:${RESET}`);
+          selectedOpenCodePrimaryModel = await askModelSelection(
+            rl,
+            discovery.models,
+            recommendedPrimary,
+            'Choose primary model for orchestrator/oracle',
+          );
+        }
+
+        if (selectedOpenCodePrimaryModel) {
+          const recommendedSecondary =
+            pickSupportOpenCodeModel(
+              discovery.models,
+              selectedOpenCodePrimaryModel,
+            )?.model ?? selectedOpenCodePrimaryModel;
+          selectedOpenCodeSecondaryModel = await askModelSelection(
+            rl,
+            discovery.models,
+            recommendedSecondary,
+            'Choose support model for explorer/librarian/fixer',
+          );
+        }
+
+        console.log();
+      }
+    }
+
+    console.log(`${BOLD}Question 2/${totalQuestions}:${RESET}`);
     const kimi = await askYesNo(
       rl,
       'Do you want to use Kimi For Coding?',
@@ -202,7 +351,7 @@ async function runInteractiveMode(
     );
     console.log();
 
-    console.log(`${BOLD}Question 2/${totalQuestions}:${RESET}`);
+    console.log(`${BOLD}Question 3/${totalQuestions}:${RESET}`);
     const openai = await askYesNo(
       rl,
       'Do you have access to OpenAI API?',
@@ -210,7 +359,31 @@ async function runInteractiveMode(
     );
     console.log();
 
-    console.log(`${BOLD}Question 3/${totalQuestions}:${RESET}`);
+    console.log(`${BOLD}Question 4/${totalQuestions}:${RESET}`);
+    const anthropic = await askYesNo(
+      rl,
+      'Do you have access to Anthropic models?',
+      detected.hasAnthropic ? 'yes' : 'no',
+    );
+    console.log();
+
+    console.log(`${BOLD}Question 5/${totalQuestions}:${RESET}`);
+    const copilot = await askYesNo(
+      rl,
+      'Do you have access to GitHub Copilot models?',
+      detected.hasCopilot ? 'yes' : 'no',
+    );
+    console.log();
+
+    console.log(`${BOLD}Question 6/${totalQuestions}:${RESET}`);
+    const zaiPlan = await askYesNo(
+      rl,
+      'Do you have access to ZAI Coding Plan models?',
+      detected.hasZaiPlan ? 'yes' : 'no',
+    );
+    console.log();
+
+    console.log(`${BOLD}Question 7/${totalQuestions}:${RESET}`);
     const antigravity = await askYesNo(
       rl,
       'Enable Antigravity authentication for Google models?',
@@ -218,6 +391,58 @@ async function runInteractiveMode(
     );
     console.log();
 
+    console.log(`${BOLD}Question 8/${totalQuestions}:${RESET}`);
+    const chutes = await askYesNo(
+      rl,
+      'Enable Chutes provider with free daily capped models?',
+      detected.hasChutes ? 'yes' : 'no',
+    );
+    console.log();
+
+    if (chutes === 'yes') {
+      printInfo(
+        'Refreshing Chutes model list with: opencode models --refresh --verbose',
+      );
+      const discovery = await discoverProviderFreeModels('chutes');
+
+      if (discovery.models.length === 0) {
+        printWarning(
+          discovery.error ??
+            'No free Chutes models found. Continuing without Chutes dynamic assignment.',
+        );
+      } else {
+        availableChutesFreeModels = discovery.models;
+
+        const recommendedPrimary =
+          pickBestCodingChutesModel(discovery.models)?.model ??
+          discovery.models[0]?.model;
+
+        if (recommendedPrimary) {
+          console.log(`${BOLD}Chutes Free Models:${RESET}`);
+          selectedChutesPrimaryModel = await askModelSelection(
+            rl,
+            discovery.models,
+            recommendedPrimary,
+            'Choose Chutes primary model for orchestrator/oracle/designer',
+          );
+        }
+
+        if (selectedChutesPrimaryModel) {
+          const recommendedSecondary =
+            pickSupportChutesModel(discovery.models, selectedChutesPrimaryModel)
+              ?.model ?? selectedChutesPrimaryModel;
+          selectedChutesSecondaryModel = await askModelSelection(
+            rl,
+            discovery.models,
+            recommendedSecondary,
+            'Choose Chutes support model for explorer/librarian/fixer',
+          );
+        }
+
+        console.log();
+      }
+    }
+
     // TODO: tmux has a bug, disabled for now
     // let tmux: BooleanArg = "no"
     // if (tmuxInstalled) {
@@ -253,8 +478,20 @@ async function runInteractiveMode(
     return {
       hasKimi: kimi === 'yes',
       hasOpenAI: openai === 'yes',
+      hasAnthropic: anthropic === 'yes',
+      hasCopilot: copilot === 'yes',
+      hasZaiPlan: zaiPlan === 'yes',
       hasAntigravity: antigravity === 'yes',
+      hasChutes: chutes === 'yes',
       hasOpencodeZen: true,
+      useOpenCodeFreeModels:
+        useOpenCodeFree === 'yes' && selectedOpenCodePrimaryModel !== undefined,
+      selectedOpenCodePrimaryModel,
+      selectedOpenCodeSecondaryModel,
+      availableOpenCodeFreeModels,
+      selectedChutesPrimaryModel,
+      selectedChutesSecondaryModel,
+      availableChutesFreeModels,
       hasTmux: false,
       installSkills: skills === 'yes',
       installCustomSkills: customSkills === 'yes',
@@ -265,16 +502,33 @@ async function runInteractiveMode(
 }
 
 async function runInstall(config: InstallConfig): Promise<number> {
+  const resolvedConfig: InstallConfig = {
+    ...config,
+  };
+
   const detected = detectCurrentConfig();
   const isUpdate = detected.isInstalled;
 
   printHeader(isUpdate);
 
+  const hasAnyEnabledProvider =
+    resolvedConfig.hasKimi ||
+    resolvedConfig.hasOpenAI ||
+    resolvedConfig.hasAnthropic ||
+    resolvedConfig.hasCopilot ||
+    resolvedConfig.hasZaiPlan ||
+    resolvedConfig.hasAntigravity ||
+    resolvedConfig.hasChutes ||
+    resolvedConfig.useOpenCodeFreeModels;
+
   // Calculate total steps dynamically
   let totalSteps = 4; // Base: check opencode, add plugin, disable default agents, write lite config
-  if (config.hasAntigravity) totalSteps += 2; // antigravity plugin + google provider
-  if (config.installSkills) totalSteps += 1; // skills installation
-  if (config.installCustomSkills) totalSteps += 1; // custom skills installation
+  if (resolvedConfig.useOpenCodeFreeModels) totalSteps += 1;
+  if (resolvedConfig.hasAntigravity) totalSteps += 2; // antigravity plugin + google provider
+  if (resolvedConfig.hasChutes) totalSteps += 1; // chutes provider
+  if (hasAnyEnabledProvider) totalSteps += 1; // dynamic model resolution
+  if (resolvedConfig.installSkills) totalSteps += 1; // skills installation
+  if (resolvedConfig.installCustomSkills) totalSteps += 1; // custom skills installation
 
   let step = 1;
 
@@ -282,12 +536,139 @@ async function runInstall(config: InstallConfig): Promise<number> {
   const { ok } = await checkOpenCodeInstalled();
   if (!ok) return 1;
 
+  if (
+    resolvedConfig.useOpenCodeFreeModels &&
+    (resolvedConfig.availableOpenCodeFreeModels?.length ?? 0) === 0
+  ) {
+    printStep(
+      step++,
+      totalSteps,
+      'Refreshing OpenCode free models (opencode/*)...',
+    );
+    const discovery = await discoverOpenCodeFreeModels();
+    if (discovery.models.length === 0) {
+      printWarning(
+        discovery.error ??
+          'No OpenCode free models found. Continuing without dynamic OpenCode assignment.',
+      );
+      resolvedConfig.useOpenCodeFreeModels = false;
+    } else {
+      resolvedConfig.availableOpenCodeFreeModels = discovery.models;
+
+      const selectedPrimary =
+        resolvedConfig.preferredOpenCodeModel &&
+        discovery.models.some(
+          (model) => model.model === resolvedConfig.preferredOpenCodeModel,
+        )
+          ? resolvedConfig.preferredOpenCodeModel
+          : (resolvedConfig.selectedOpenCodePrimaryModel ??
+            pickBestCodingOpenCodeModel(discovery.models)?.model);
+
+      resolvedConfig.selectedOpenCodePrimaryModel =
+        selectedPrimary ?? discovery.models[0]?.model;
+      resolvedConfig.selectedOpenCodeSecondaryModel =
+        resolvedConfig.selectedOpenCodeSecondaryModel ??
+        pickSupportOpenCodeModel(
+          discovery.models,
+          resolvedConfig.selectedOpenCodePrimaryModel,
+        )?.model ??
+        resolvedConfig.selectedOpenCodePrimaryModel;
+
+      printSuccess(
+        `OpenCode free models ready (${discovery.models.length} models found)`,
+      );
+    }
+  } else if (
+    resolvedConfig.useOpenCodeFreeModels &&
+    (resolvedConfig.availableOpenCodeFreeModels?.length ?? 0) > 0
+  ) {
+    const availableModels = resolvedConfig.availableOpenCodeFreeModels ?? [];
+    resolvedConfig.selectedOpenCodePrimaryModel =
+      resolvedConfig.selectedOpenCodePrimaryModel ??
+      pickBestCodingOpenCodeModel(availableModels)?.model;
+    resolvedConfig.selectedOpenCodeSecondaryModel =
+      resolvedConfig.selectedOpenCodeSecondaryModel ??
+      pickSupportOpenCodeModel(
+        availableModels,
+        resolvedConfig.selectedOpenCodePrimaryModel,
+      )?.model ??
+      resolvedConfig.selectedOpenCodePrimaryModel;
+
+    printStep(
+      step++,
+      totalSteps,
+      'Using previously refreshed OpenCode free model list...',
+    );
+    printSuccess(
+      `OpenCode free models ready (${availableModels.length} models found)`,
+    );
+  }
+
+  if (
+    resolvedConfig.hasChutes &&
+    (resolvedConfig.availableChutesFreeModels?.length ?? 0) === 0
+  ) {
+    printStep(
+      step++,
+      totalSteps,
+      'Refreshing Chutes free models (chutes/*)...',
+    );
+    const discovery = await discoverProviderFreeModels('chutes');
+    if (discovery.models.length === 0) {
+      printWarning(
+        discovery.error ??
+          'No free Chutes models found. Continuing with fallback Chutes mapping.',
+      );
+    } else {
+      resolvedConfig.availableChutesFreeModels = discovery.models;
+      resolvedConfig.selectedChutesPrimaryModel =
+        resolvedConfig.selectedChutesPrimaryModel ??
+        pickBestCodingChutesModel(discovery.models)?.model ??
+        discovery.models[0]?.model;
+      resolvedConfig.selectedChutesSecondaryModel =
+        resolvedConfig.selectedChutesSecondaryModel ??
+        pickSupportChutesModel(
+          discovery.models,
+          resolvedConfig.selectedChutesPrimaryModel,
+        )?.model ??
+        resolvedConfig.selectedChutesPrimaryModel;
+
+      printSuccess(
+        `Chutes models ready (${discovery.models.length} models found)`,
+      );
+    }
+  } else if (
+    resolvedConfig.hasChutes &&
+    (resolvedConfig.availableChutesFreeModels?.length ?? 0) > 0
+  ) {
+    const availableChutes = resolvedConfig.availableChutesFreeModels ?? [];
+    resolvedConfig.selectedChutesPrimaryModel =
+      resolvedConfig.selectedChutesPrimaryModel ??
+      pickBestCodingChutesModel(availableChutes)?.model;
+    resolvedConfig.selectedChutesSecondaryModel =
+      resolvedConfig.selectedChutesSecondaryModel ??
+      pickSupportChutesModel(
+        availableChutes,
+        resolvedConfig.selectedChutesPrimaryModel,
+      )?.model ??
+      resolvedConfig.selectedChutesPrimaryModel;
+
+    printStep(
+      step++,
+      totalSteps,
+      'Using previously refreshed Chutes free model list...',
+    );
+    printSuccess(
+      `Chutes models ready (${availableChutes.length} models found)`,
+    );
+  }
+
   printStep(step++, totalSteps, 'Adding oh-my-opencode-slim plugin...');
   const pluginResult = await addPluginToOpenCodeConfig();
   if (!handleStepResult(pluginResult, 'Plugin added')) return 1;
 
   // Add Antigravity support if requested
-  if (config.hasAntigravity) {
+  if (resolvedConfig.hasAntigravity) {
     printStep(step++, totalSteps, 'Adding Antigravity plugin...');
     const antigravityPluginResult = addAntigravityPlugin();
     if (!handleStepResult(antigravityPluginResult, 'Antigravity plugin added'))
@@ -299,16 +680,49 @@ async function runInstall(config: InstallConfig): Promise<number> {
       return 1;
   }
 
+  if (resolvedConfig.hasChutes) {
+    printStep(step++, totalSteps, 'Configuring Chutes Provider...');
+    const chutesProviderResult = addChutesProvider();
+    if (!handleStepResult(chutesProviderResult, 'Chutes Provider configured'))
+      return 1;
+  }
+
+  if (hasAnyEnabledProvider) {
+    printStep(step++, totalSteps, 'Resolving dynamic model assignments...');
+    const catalogDiscovery = await discoverModelCatalog();
+    if (catalogDiscovery.models.length === 0) {
+      printWarning(
+        catalogDiscovery.error ??
+          'Unable to discover model catalog. Falling back to static mappings.',
+      );
+    } else {
+      const dynamicPlan = buildDynamicModelPlan(
+        catalogDiscovery.models,
+        resolvedConfig,
+      );
+      if (!dynamicPlan) {
+        printWarning(
+          'Dynamic planner found no suitable models. Using static mappings.',
+        );
+      } else {
+        resolvedConfig.dynamicModelPlan = dynamicPlan;
+        printSuccess(
+          `Dynamic assignments ready (${Object.keys(dynamicPlan.agents).length} agents)`,
+        );
+      }
+    }
+  }
+
   printStep(step++, totalSteps, 'Disabling OpenCode default agents...');
   const agentResult = disableDefaultAgents();
   if (!handleStepResult(agentResult, 'Default agents disabled')) return 1;
 
   printStep(step++, totalSteps, 'Writing oh-my-opencode-slim configuration...');
-  const liteResult = writeLiteConfig(config);
+  const liteResult = writeLiteConfig(resolvedConfig);
   if (!handleStepResult(liteResult, 'Config written')) return 1;
 
   // Install skills if requested
-  if (config.installSkills) {
+  if (resolvedConfig.installSkills) {
     printStep(step++, totalSteps, 'Installing recommended skills...');
     let skillsInstalled = 0;
     for (const skill of RECOMMENDED_SKILLS) {
@@ -326,7 +740,7 @@ async function runInstall(config: InstallConfig): Promise<number> {
   }
 
   // Install custom skills if requested
-  if (config.installCustomSkills) {
+  if (resolvedConfig.installCustomSkills) {
     printStep(step++, totalSteps, 'Installing custom skills...');
     let customSkillsInstalled = 0;
     for (const skill of CUSTOM_SKILLS) {
@@ -345,12 +759,20 @@ async function runInstall(config: InstallConfig): Promise<number> {
 
   // Summary
   console.log();
-  console.log(formatConfigSummary(config));
+  console.log(formatConfigSummary(resolvedConfig));
   console.log();
 
-  printAgentModels(config);
-
-  if (!config.hasKimi && !config.hasOpenAI && !config.hasAntigravity) {
+  printAgentModels(resolvedConfig);
+
+  if (
+    !resolvedConfig.hasKimi &&
+    !resolvedConfig.hasOpenAI &&
+    !resolvedConfig.hasAnthropic &&
+    !resolvedConfig.hasCopilot &&
+    !resolvedConfig.hasZaiPlan &&
+    !resolvedConfig.hasAntigravity &&
+    !resolvedConfig.hasChutes
+  ) {
     printWarning(
       'No providers configured. Zen Big Pickle models will be used as fallback.',
     );
@@ -365,17 +787,41 @@ async function runInstall(config: InstallConfig): Promise<number> {
 
   let nextStep = 1;
 
-  if (config.hasKimi || config.hasOpenAI || config.hasAntigravity) {
+  if (
+    resolvedConfig.hasKimi ||
+    resolvedConfig.hasOpenAI ||
+    resolvedConfig.hasAnthropic ||
+    resolvedConfig.hasCopilot ||
+    resolvedConfig.hasZaiPlan ||
+    resolvedConfig.hasAntigravity ||
+    resolvedConfig.hasChutes
+  ) {
     console.log(`  ${nextStep++}. Authenticate with your providers:`);
     console.log(`     ${BLUE}$ opencode auth login${RESET}`);
-    if (config.hasKimi) {
+    if (resolvedConfig.hasKimi) {
       console.log();
       console.log(`     Then select ${BOLD}Kimi For Coding${RESET} provider.`);
     }
-    if (config.hasAntigravity) {
+    if (resolvedConfig.hasAntigravity) {
       console.log();
       console.log(`     Then select ${BOLD}google${RESET} provider.`);
     }
+    if (resolvedConfig.hasAnthropic) {
+      console.log();
+      console.log(`     Then select ${BOLD}anthropic${RESET} provider.`);
+    }
+    if (resolvedConfig.hasCopilot) {
+      console.log();
+      console.log(`     Then select ${BOLD}github-copilot${RESET} provider.`);
+    }
+    if (resolvedConfig.hasZaiPlan) {
+      console.log();
+      console.log(`     Then select ${BOLD}zai-coding-plan${RESET} provider.`);
+    }
+    if (resolvedConfig.hasChutes) {
+      console.log();
+      console.log(`     Then set ${BOLD}CHUTES_API_KEY${RESET} in your shell.`);
+    }
     console.log();
   }
 
@@ -396,7 +842,16 @@ async function runInstall(config: InstallConfig): Promise<number> {
 export async function install(args: InstallArgs): Promise<number> {
   // Non-interactive mode: all args must be provided
   if (!args.tui) {
-    const requiredArgs = ['kimi', 'openai', 'tmux'] as const;
+    const requiredArgs = [
+      'kimi',
+      'openai',
+      'anthropic',
+      'copilot',
+      'zaiPlan',
+      'antigravity',
+      'chutes',
+      'tmux',
+    ] as const;
     const errors = requiredArgs.filter((key) => {
       const value = args[key];
       return value === undefined || !['yes', 'no'].includes(value);
@@ -406,11 +861,12 @@ export async function install(args: InstallArgs): Promise<number> {
       printHeader(false);
       printError('Missing or invalid arguments:');
       for (const key of errors) {
-        console.log(`  ${SYMBOLS.bullet} --${key}=<yes|no>`);
+        const flagName = key === 'zaiPlan' ? 'zai-plan' : key;
+        console.log(`  ${SYMBOLS.bullet} --${flagName}=<yes|no>`);
       }
       console.log();
       printInfo(
-        'Usage: bunx oh-my-opencode-slim install --no-tui --kimi=<yes|no> --openai=<yes|no> --tmux=<yes|no>',
+        'Usage: bunx oh-my-opencode-slim install --no-tui --kimi=<yes|no> --openai=<yes|no> --anthropic=<yes|no> --copilot=<yes|no> --zai-plan=<yes|no> --antigravity=<yes|no> --chutes=<yes|no> --tmux=<yes|no>',
       );
       console.log();
       return 1;

+ 65 - 0
src/cli/model-selection.test.ts

@@ -0,0 +1,65 @@
+/// <reference types="bun-types" />
+
+import { describe, expect, test } from 'bun:test';
+import {
+  type ModelSelectionCandidate,
+  pickBestModel,
+  pickPrimaryAndSupport,
+  rankModels,
+} from './model-selection';
+
+interface Candidate extends ModelSelectionCandidate {
+  speed: number;
+  quality: number;
+}
+
+describe('model-selection', () => {
+  test('pickBestModel returns null for empty list', () => {
+    const best = pickBestModel([], () => 1);
+    expect(best).toBeNull();
+  });
+
+  test('rankModels applies deterministic tie-break by model id', () => {
+    const models: Candidate[] = [
+      { model: 'provider/zeta', speed: 1, quality: 1 },
+      { model: 'provider/alpha', speed: 1, quality: 1 },
+    ];
+
+    const ranked = rankModels(models, () => 10);
+    expect(ranked[0]?.candidate.model).toBe('provider/alpha');
+    expect(ranked[1]?.candidate.model).toBe('provider/zeta');
+  });
+
+  test('pickPrimaryAndSupport avoids duplicate support when possible', () => {
+    const models: Candidate[] = [
+      { model: 'provider/main', speed: 20, quality: 90 },
+      { model: 'provider/helper', speed: 95, quality: 60 },
+    ];
+
+    const picked = pickPrimaryAndSupport(
+      models,
+      {
+        primary: (model) => model.quality,
+        support: (model) => model.speed,
+      },
+      'provider/main',
+    );
+
+    expect(picked.primary?.model).toBe('provider/main');
+    expect(picked.support?.model).toBe('provider/helper');
+  });
+
+  test('pickPrimaryAndSupport falls back to same model when only one exists', () => {
+    const models: Candidate[] = [
+      { model: 'provider/solo', speed: 10, quality: 10 },
+    ];
+
+    const picked = pickPrimaryAndSupport(models, {
+      primary: (model) => model.quality,
+      support: (model) => model.speed,
+    });
+
+    expect(picked.primary?.model).toBe('provider/solo');
+    expect(picked.support?.model).toBe('provider/solo');
+  });
+});

+ 87 - 0
src/cli/model-selection.ts

@@ -0,0 +1,87 @@
+export interface ModelSelectionCandidate {
+  model: string;
+  status?: 'alpha' | 'beta' | 'deprecated' | 'active';
+  contextLimit?: number;
+  outputLimit?: number;
+  reasoning?: boolean;
+  toolcall?: boolean;
+  attachment?: boolean;
+  tags?: string[];
+  meta?: Record<string, unknown>;
+}
+
+export interface RankedModel<T extends ModelSelectionCandidate> {
+  candidate: T;
+  score: number;
+}
+
+export interface SelectionOptions<T extends ModelSelectionCandidate> {
+  excludeModels?: string[];
+  tieBreaker?: (left: T, right: T) => number;
+}
+
+export type ScoreFunction<T extends ModelSelectionCandidate> = (
+  candidate: T,
+) => number;
+
+export interface RoleScoring<T extends ModelSelectionCandidate> {
+  primary: ScoreFunction<T>;
+  support: ScoreFunction<T>;
+}
+
+function defaultTieBreaker<T extends ModelSelectionCandidate>(
+  left: T,
+  right: T,
+): number {
+  return left.model.localeCompare(right.model);
+}
+
+export function rankModels<T extends ModelSelectionCandidate>(
+  models: T[],
+  scoreFn: ScoreFunction<T>,
+  options: SelectionOptions<T> = {},
+): RankedModel<T>[] {
+  const excluded = new Set(options.excludeModels ?? []);
+  const tieBreaker = options.tieBreaker ?? defaultTieBreaker;
+
+  return models
+    .filter((model) => !excluded.has(model.model))
+    .map((candidate) => ({
+      candidate,
+      score: scoreFn(candidate),
+    }))
+    .sort((left, right) => {
+      if (left.score !== right.score) return right.score - left.score;
+      return tieBreaker(left.candidate, right.candidate);
+    });
+}
+
+export function pickBestModel<T extends ModelSelectionCandidate>(
+  models: T[],
+  scoreFn: ScoreFunction<T>,
+  options: SelectionOptions<T> = {},
+): T | null {
+  return rankModels(models, scoreFn, options)[0]?.candidate ?? null;
+}
+
+export function pickPrimaryAndSupport<T extends ModelSelectionCandidate>(
+  models: T[],
+  scoring: RoleScoring<T>,
+  preferredPrimaryModel?: string,
+): { primary: T | null; support: T | null } {
+  if (models.length === 0) return { primary: null, support: null };
+
+  const preferredPrimary = preferredPrimaryModel
+    ? models.find((candidate) => candidate.model === preferredPrimaryModel)
+    : undefined;
+  const primary = preferredPrimary ?? pickBestModel(models, scoring.primary);
+
+  if (!primary) return { primary: null, support: null };
+
+  const support =
+    pickBestModel(models, scoring.support, {
+      excludeModels: [primary.model],
+    }) ?? pickBestModel(models, scoring.support);
+
+  return { primary, support };
+}

+ 43 - 0
src/cli/opencode-models.test.ts

@@ -0,0 +1,43 @@
+/// <reference types="bun-types" />
+
+import { describe, expect, test } from 'bun:test';
+import { parseOpenCodeModelsVerboseOutput } from './opencode-models';
+
+const SAMPLE_OUTPUT = `
+opencode/gpt-5-nano
+{
+  "id": "gpt-5-nano",
+  "providerID": "opencode",
+  "name": "GPT 5 Nano",
+  "status": "active",
+  "cost": { "input": 0, "output": 0, "cache": { "read": 0, "write": 0 } },
+  "limit": { "context": 400000, "output": 128000 },
+  "capabilities": { "reasoning": true, "toolcall": true, "attachment": true }
+}
+chutes/minimax-m2.1-5000
+{
+  "id": "minimax-m2.1-5000",
+  "providerID": "chutes",
+  "name": "MiniMax M2.1 5000 req/day",
+  "status": "active",
+  "cost": { "input": 0, "output": 0, "cache": { "read": 0, "write": 0 } },
+  "limit": { "context": 500000, "output": 64000 },
+  "capabilities": { "reasoning": true, "toolcall": true, "attachment": false }
+}
+`;
+
+describe('opencode-models parser', () => {
+  test('filters by provider and keeps free models', () => {
+    const models = parseOpenCodeModelsVerboseOutput(SAMPLE_OUTPUT, 'opencode');
+    expect(models.length).toBe(1);
+    expect(models[0]?.model).toBe('opencode/gpt-5-nano');
+    expect(models[0]?.providerID).toBe('opencode');
+  });
+
+  test('extracts chutes daily request limit from model metadata', () => {
+    const models = parseOpenCodeModelsVerboseOutput(SAMPLE_OUTPUT, 'chutes');
+    expect(models.length).toBe(1);
+    expect(models[0]?.model).toBe('chutes/minimax-m2.1-5000');
+    expect(models[0]?.dailyRequestLimit).toBe(5000);
+  });
+});

+ 241 - 0
src/cli/opencode-models.ts

@@ -0,0 +1,241 @@
+import type { DiscoveredModel, OpenCodeFreeModel } from './types';
+
+interface OpenCodeModelVerboseRecord {
+  id: string;
+  providerID: string;
+  name?: string;
+  status?: 'alpha' | 'beta' | 'deprecated' | 'active';
+  cost?: {
+    input?: number;
+    output?: number;
+    cache?: {
+      read?: number;
+      write?: number;
+    };
+  };
+  limit?: {
+    context?: number;
+    output?: number;
+  };
+  capabilities?: {
+    reasoning?: boolean;
+    toolcall?: boolean;
+    attachment?: boolean;
+  };
+  quota?: {
+    requestsPerDay?: number;
+  };
+  meta?: {
+    requestsPerDay?: number;
+    dailyLimit?: number;
+  };
+}
+
+function isFreeModel(record: OpenCodeModelVerboseRecord): boolean {
+  const inputCost = record.cost?.input ?? 0;
+  const outputCost = record.cost?.output ?? 0;
+  const cacheReadCost = record.cost?.cache?.read ?? 0;
+  const cacheWriteCost = record.cost?.cache?.write ?? 0;
+
+  return (
+    inputCost === 0 &&
+    outputCost === 0 &&
+    cacheReadCost === 0 &&
+    cacheWriteCost === 0
+  );
+}
+
+function parseDailyRequestLimit(
+  record: OpenCodeModelVerboseRecord,
+): number | undefined {
+  const explicitLimit =
+    record.quota?.requestsPerDay ??
+    record.meta?.requestsPerDay ??
+    record.meta?.dailyLimit;
+
+  if (typeof explicitLimit === 'number' && Number.isFinite(explicitLimit)) {
+    return explicitLimit;
+  }
+
+  // Fall back to known daily-request tiers (300/2000/5000) embedded in the
+  // model id or display name, e.g. "minimax-m2.1-5000".
+  const source = `${record.id} ${record.name ?? ''}`.toLowerCase();
+  const match = source.match(
+    /\b(300|2000|5000)\b(?:\s*(?:req|requests|rpd|\/day))?/,
+  );
+  if (!match) return undefined;
+  const parsed = Number.parseInt(match[1] ?? '', 10);
+  return Number.isFinite(parsed) ? parsed : undefined;
+}
+
+function normalizeDiscoveredModel(
+  record: OpenCodeModelVerboseRecord,
+  providerFilter?: string,
+): DiscoveredModel | null {
+  if (providerFilter && record.providerID !== providerFilter) return null;
+
+  const fullModel = `${record.providerID}/${record.id}`;
+
+  return {
+    providerID: record.providerID,
+    model: fullModel,
+    name: record.name ?? record.id,
+    status: record.status ?? 'active',
+    contextLimit: record.limit?.context ?? 0,
+    outputLimit: record.limit?.output ?? 0,
+    reasoning: record.capabilities?.reasoning === true,
+    toolcall: record.capabilities?.toolcall === true,
+    attachment: record.capabilities?.attachment === true,
+    dailyRequestLimit: parseDailyRequestLimit(record),
+    costInput: record.cost?.input,
+    costOutput: record.cost?.output,
+  };
+}
+
+export function parseOpenCodeModelsVerboseOutput(
+  output: string,
+  providerFilter?: string,
+  freeOnly = true,
+): DiscoveredModel[] {
+  const lines = output.split(/\r?\n/);
+  const models: DiscoveredModel[] = [];
+
+  for (let index = 0; index < lines.length; index++) {
+    const line = lines[index]?.trim();
+    if (!line || !line.includes('/')) continue;
+
+    const isModelHeader = /^[a-z0-9-]+\/[a-z0-9._-]+$/i.test(line);
+    if (!isModelHeader) continue;
+
+    let jsonStart = -1;
+    for (let search = index + 1; search < lines.length; search++) {
+      if (lines[search]?.trim().startsWith('{')) {
+        jsonStart = search;
+        break;
+      }
+
+      if (/^[a-z0-9-]+\/[a-z0-9._-]+$/i.test(lines[search]?.trim() ?? '')) {
+        break;
+      }
+    }
+
+    if (jsonStart === -1) continue;
+
+    let braceDepth = 0;
+    const jsonLines: string[] = [];
+    let jsonEnd = -1;
+
+    for (let cursor = jsonStart; cursor < lines.length; cursor++) {
+      const current = lines[cursor] ?? '';
+      jsonLines.push(current);
+
+      for (const char of current) {
+        if (char === '{') braceDepth++;
+        if (char === '}') braceDepth--;
+      }
+
+      if (braceDepth === 0) {
+        jsonEnd = cursor;
+        break;
+      }
+    }
+
+    if (jsonEnd === -1) continue;
+
+    try {
+      const parsed = JSON.parse(
+        jsonLines.join('\n'),
+      ) as OpenCodeModelVerboseRecord;
+      const normalized = normalizeDiscoveredModel(parsed, providerFilter);
+      // Skipped models must still advance past the parsed JSON block below,
+      // so avoid `continue` here (it would bypass `index = jsonEnd`).
+      if (normalized && (!freeOnly || isFreeModel(parsed))) {
+        models.push(normalized);
+      }
+    } catch {
+      // Ignore malformed blocks and continue parsing the next model.
+    }
+
+    index = jsonEnd;
+  }
+
+  return models;
+}
+
+async function runOpenCodeModelsCommand(): Promise<
+  { ok: true; stdout: string } | { ok: false; error: string }
+> {
+  try {
+    const proc = Bun.spawn(['opencode', 'models', '--refresh', '--verbose'], {
+      stdout: 'pipe',
+      stderr: 'pipe',
+    });
+
+    const stdout = await new Response(proc.stdout).text();
+    const stderr = await new Response(proc.stderr).text();
+    await proc.exited;
+
+    if (proc.exitCode !== 0) {
+      return {
+        ok: false,
+        error: stderr.trim() || 'Failed to fetch OpenCode models.',
+      };
+    }
+
+    return { ok: true, stdout };
+  } catch {
+    return {
+      ok: false,
+      error: 'Unable to run `opencode models --refresh --verbose`.',
+    };
+  }
+}
+
+async function discoverFreeModelsByProvider(providerID?: string): Promise<{
+  models: OpenCodeFreeModel[];
+  error?: string;
+}> {
+  const result = await runOpenCodeModelsCommand();
+  if (!result.ok) return { models: [], error: result.error };
+
+  return {
+    models: parseOpenCodeModelsVerboseOutput(
+      result.stdout,
+      providerID,
+      true,
+    ) as OpenCodeFreeModel[],
+  };
+}
+
+export async function discoverModelCatalog(): Promise<{
+  models: DiscoveredModel[];
+  error?: string;
+}> {
+  const result = await runOpenCodeModelsCommand();
+  if (!result.ok) return { models: [], error: result.error };
+
+  return {
+    models: parseOpenCodeModelsVerboseOutput(result.stdout, undefined, false),
+  };
+}
+
+export async function discoverOpenCodeFreeModels(): Promise<{
+  models: OpenCodeFreeModel[];
+  error?: string;
+}> {
+  return discoverFreeModelsByProvider('opencode');
+}
+
+export async function discoverProviderFreeModels(providerID: string): Promise<{
+  models: OpenCodeFreeModel[];
+  error?: string;
+}> {
+  return discoverFreeModelsByProvider(providerID);
+}

+ 63 - 0
src/cli/opencode-selection.ts

@@ -0,0 +1,63 @@
+import {
+  pickBestModel,
+  pickPrimaryAndSupport,
+  type ScoreFunction,
+} from './model-selection';
+import type { OpenCodeFreeModel } from './types';
+
+// Primary coding model: weight reasoning and tool-calling heavily, with
+// smaller bonuses for large context/output windows and active status.
+const scoreOpenCodePrimaryForCoding: ScoreFunction<OpenCodeFreeModel> = (
+  model,
+) => {
+  return (
+    (model.reasoning ? 100 : 0) +
+    (model.toolcall ? 80 : 0) +
+    (model.attachment ? 20 : 0) +
+    Math.min(model.contextLimit, 1_000_000) / 10_000 +
+    Math.min(model.outputLimit, 300_000) / 10_000 +
+    (model.status === 'active' ? 10 : 0)
+  );
+};
+
+// Name-based heuristic: "nano"/"flash"/"mini" variants tend to be faster,
+// which suits support roles over raw capability.
+function speedBonus(modelName: string): number {
+  const lower = modelName.toLowerCase();
+  let score = 0;
+  if (lower.includes('nano')) score += 60;
+  if (lower.includes('flash')) score += 45;
+  if (lower.includes('mini')) score += 25;
+  if (lower.includes('preview')) score += 10;
+  return score;
+}
+
+const scoreOpenCodeSupportForCoding: ScoreFunction<OpenCodeFreeModel> = (
+  model,
+) => {
+  return (
+    (model.toolcall ? 90 : 0) +
+    (model.reasoning ? 50 : 0) +
+    speedBonus(model.model) +
+    Math.min(model.contextLimit, 400_000) / 20_000 +
+    (model.status === 'active' ? 5 : 0)
+  );
+};
+
+export function pickBestCodingOpenCodeModel(
+  models: OpenCodeFreeModel[],
+): OpenCodeFreeModel | null {
+  return pickBestModel(models, scoreOpenCodePrimaryForCoding);
+}
+
+export function pickSupportOpenCodeModel(
+  models: OpenCodeFreeModel[],
+  primaryModel?: string,
+): OpenCodeFreeModel | null {
+  const { support } = pickPrimaryAndSupport(
+    models,
+    {
+      primary: scoreOpenCodePrimaryForCoding,
+      support: scoreOpenCodeSupportForCoding,
+    },
+    primaryModel,
+  );
+
+  return support;
+}

+ 163 - 12
src/cli/providers.test.ts

@@ -48,7 +48,7 @@ describe('providers', () => {
     expect(agents.orchestrator.model).toBe('kimi-for-coding/k2p5');
     expect(agents.orchestrator.variant).toBeUndefined();
     // Oracle uses OpenAI when both kimi and openai are enabled
-    expect(agents.oracle.model).toBe('openai/gpt-5.2-codex');
+    expect(agents.oracle.model).toBe('openai/gpt-5.3-codex');
     expect(agents.oracle.variant).toBe('high');
     // Should NOT include other presets
     expect((config.presets as any).openai).toBeUndefined();
@@ -78,6 +78,112 @@ describe('providers', () => {
     expect((config.presets as any)['zen-free']).toBeUndefined();
   });
 
+  test('generateLiteConfig generates chutes preset when only chutes selected', () => {
+    const config = generateLiteConfig({
+      hasAntigravity: false,
+      hasKimi: false,
+      hasOpenAI: false,
+      hasChutes: true,
+      hasOpencodeZen: false,
+      hasTmux: false,
+      installSkills: false,
+      installCustomSkills: false,
+      selectedChutesPrimaryModel: 'chutes/kimi-k2.5',
+      selectedChutesSecondaryModel: 'chutes/minimax-m2.1',
+    });
+
+    expect(config.preset).toBe('chutes');
+    const agents = (config.presets as any).chutes;
+    expect(agents).toBeDefined();
+    expect(agents.orchestrator.model).toBe('chutes/kimi-k2.5');
+    expect(agents.oracle.model).toBe('chutes/kimi-k2.5');
+    expect(agents.designer.model).toBe('chutes/kimi-k2.5');
+    expect(agents.explorer.model).toBe('chutes/minimax-m2.1');
+    expect(agents.librarian.model).toBe('chutes/minimax-m2.1');
+    expect(agents.fixer.model).toBe('chutes/minimax-m2.1');
+  });
+
+  test('generateLiteConfig generates anthropic preset when only anthropic selected', () => {
+    const config = generateLiteConfig({
+      hasAntigravity: false,
+      hasKimi: false,
+      hasOpenAI: false,
+      hasAnthropic: true,
+      hasCopilot: false,
+      hasZaiPlan: false,
+      hasChutes: false,
+      hasOpencodeZen: false,
+      hasTmux: false,
+      installSkills: false,
+      installCustomSkills: false,
+    });
+
+    expect(config.preset).toBe('anthropic');
+    const agents = (config.presets as any).anthropic;
+    expect(agents.orchestrator.model).toBe('anthropic/claude-opus-4-6');
+    expect(agents.oracle.model).toBe('anthropic/claude-opus-4-6');
+    expect(agents.explorer.model).toBe('anthropic/claude-haiku-4-5');
+  });
+
+  test('generateLiteConfig prefers Chutes Kimi in mixed openai/antigravity when chutes is enabled', () => {
+    const config = generateLiteConfig({
+      hasAntigravity: true,
+      hasKimi: false,
+      hasOpenAI: true,
+      hasChutes: true,
+      hasOpencodeZen: true,
+      useOpenCodeFreeModels: true,
+      selectedOpenCodePrimaryModel: 'opencode/glm-4.7-free',
+      selectedOpenCodeSecondaryModel: 'opencode/gpt-5-nano',
+      selectedChutesPrimaryModel: 'chutes/kimi-k2.5',
+      selectedChutesSecondaryModel: 'chutes/minimax-m2.1',
+      hasTmux: false,
+      installSkills: false,
+      installCustomSkills: false,
+    });
+
+    expect(config.preset).toBe('antigravity-mixed-openai');
+    const agents = (config.presets as any)['antigravity-mixed-openai'];
+    expect(agents.orchestrator.model).toBe('chutes/kimi-k2.5');
+    expect(agents.oracle.model).toBe('openai/gpt-5.3-codex');
+    expect(agents.explorer.model).toBe('opencode/gpt-5-nano');
+  });
+
+  test('generateLiteConfig emits fallback chains for six agents', () => {
+    const config = generateLiteConfig({
+      hasAntigravity: true,
+      hasKimi: true,
+      hasOpenAI: true,
+      hasChutes: true,
+      hasOpencodeZen: true,
+      useOpenCodeFreeModels: true,
+      selectedOpenCodePrimaryModel: 'opencode/glm-4.7-free',
+      selectedOpenCodeSecondaryModel: 'opencode/gpt-5-nano',
+      selectedChutesPrimaryModel: 'chutes/kimi-k2.5',
+      selectedChutesSecondaryModel: 'chutes/minimax-m2.1',
+      hasTmux: false,
+      installSkills: false,
+      installCustomSkills: false,
+    });
+
+    expect((config.fallback as any).enabled).toBe(true);
+    expect((config.fallback as any).timeoutMs).toBe(15000);
+    const chains = (config.fallback as any).chains;
+    expect(Object.keys(chains).sort()).toEqual([
+      'designer',
+      'explorer',
+      'fixer',
+      'librarian',
+      'oracle',
+      'orchestrator',
+    ]);
+    expect(chains.orchestrator).toContain('openai/gpt-5.3-codex');
+    expect(chains.orchestrator).toContain('kimi-for-coding/k2p5');
+    expect(chains.orchestrator).toContain('google/antigravity-gemini-3-flash');
+    expect(chains.orchestrator).toContain('chutes/kimi-k2.5');
+    expect(chains.orchestrator).toContain('opencode/glm-4.7-free');
+  });
+
   test('generateLiteConfig generates zen-free preset when no providers selected', () => {
     const config = generateLiteConfig({
       hasAntigravity: false,
@@ -174,6 +280,53 @@ describe('providers', () => {
     expect(Array.isArray(agents.librarian.mcps)).toBe(true);
   });
 
+  test('generateLiteConfig applies OpenCode free model overrides in hybrid mode', () => {
+    const config = generateLiteConfig({
+      hasAntigravity: false,
+      hasKimi: false,
+      hasOpenAI: true,
+      hasOpencodeZen: true,
+      useOpenCodeFreeModels: true,
+      selectedOpenCodePrimaryModel: 'opencode/glm-4.7-free',
+      selectedOpenCodeSecondaryModel: 'opencode/gpt-5-nano',
+      hasTmux: false,
+      installSkills: false,
+      installCustomSkills: false,
+    });
+
+    const agents = (config.presets as any).openai;
+    expect(agents.orchestrator.model).toBe(
+      MODEL_MAPPINGS.openai.orchestrator.model,
+    );
+    expect(agents.oracle.model).toBe(MODEL_MAPPINGS.openai.oracle.model);
+    expect(agents.explorer.model).toBe('opencode/gpt-5-nano');
+    expect(agents.librarian.model).toBe('opencode/gpt-5-nano');
+    expect(agents.fixer.model).toBe('opencode/gpt-5-nano');
+  });
+
+  test('generateLiteConfig applies OpenCode free model overrides in OpenCode-only mode', () => {
+    const config = generateLiteConfig({
+      hasAntigravity: false,
+      hasKimi: false,
+      hasOpenAI: false,
+      hasOpencodeZen: true,
+      useOpenCodeFreeModels: true,
+      selectedOpenCodePrimaryModel: 'opencode/glm-4.7-free',
+      selectedOpenCodeSecondaryModel: 'opencode/gpt-5-nano',
+      hasTmux: false,
+      installSkills: false,
+      installCustomSkills: false,
+    });
+
+    const agents = (config.presets as any)['zen-free'];
+    expect(agents.orchestrator.model).toBe('opencode/glm-4.7-free');
+    expect(agents.oracle.model).toBe('opencode/glm-4.7-free');
+    expect(agents.designer.model).toBe('opencode/glm-4.7-free');
+    expect(agents.explorer.model).toBe('opencode/gpt-5-nano');
+    expect(agents.librarian.model).toBe('opencode/gpt-5-nano');
+    expect(agents.fixer.model).toBe('opencode/gpt-5-nano');
+  });
+
   test('generateLiteConfig zen-free includes correct mcps', () => {
     const config = generateLiteConfig({
       hasAntigravity: false,
@@ -214,17 +367,17 @@ describe('providers', () => {
       expect(agents.orchestrator.model).toBe('kimi-for-coding/k2p5');
 
       // Oracle should use OpenAI
-      expect(agents.oracle.model).toBe('openai/gpt-5.2-codex');
+      expect(agents.oracle.model).toBe('openai/gpt-5.3-codex');
       expect(agents.oracle.variant).toBe('high');
 
-      // Others should use Antigravity Flash
+      // Explorer/Librarian/Designer use Antigravity Flash; Fixer prefers OpenAI
       expect(agents.explorer.model).toBe('google/antigravity-gemini-3-flash');
       expect(agents.explorer.variant).toBe('low');
       expect(agents.librarian.model).toBe('google/antigravity-gemini-3-flash');
       expect(agents.librarian.variant).toBe('low');
       expect(agents.designer.model).toBe('google/antigravity-gemini-3-flash');
       expect(agents.designer.variant).toBe('medium');
-      expect(agents.fixer.model).toBe('google/antigravity-gemini-3-flash');
+      expect(agents.fixer.model).toBe('openai/gpt-5.3-codex');
       expect(agents.fixer.variant).toBe('low');
     });
 
@@ -277,14 +430,14 @@ describe('providers', () => {
       );
 
       // Oracle should use OpenAI
-      expect(agents.oracle.model).toBe('openai/gpt-5.2-codex');
+      expect(agents.oracle.model).toBe('openai/gpt-5.3-codex');
       expect(agents.oracle.variant).toBe('high');
 
-      // Others should use Antigravity Flash
+      // Explorer/Librarian/Designer use Antigravity Flash; Fixer prefers OpenAI
       expect(agents.explorer.model).toBe('google/antigravity-gemini-3-flash');
       expect(agents.librarian.model).toBe('google/antigravity-gemini-3-flash');
       expect(agents.designer.model).toBe('google/antigravity-gemini-3-flash');
-      expect(agents.fixer.model).toBe('google/antigravity-gemini-3-flash');
+      expect(agents.fixer.model).toBe('openai/gpt-5.3-codex');
     });
 
     test('generateLiteConfig generates pure antigravity preset when only Antigravity', () => {
@@ -338,11 +491,11 @@ describe('providers', () => {
         installCustomSkills: false,
       });
 
-      expect((preset.oracle as any).model).toBe('openai/gpt-5.2-codex');
+      expect((preset.oracle as any).model).toBe('openai/gpt-5.3-codex');
       expect((preset.oracle as any).variant).toBe('high');
     });
 
-    test('generateAntigravityMixedPreset always uses Antigravity for explorer/librarian/designer/fixer', () => {
+    test('generateAntigravityMixedPreset uses OpenAI fixer and Antigravity support defaults', () => {
       const preset = generateAntigravityMixedPreset({
         hasKimi: true,
         hasOpenAI: true,
@@ -362,9 +515,7 @@ describe('providers', () => {
       expect((preset.designer as any).model).toBe(
         'google/antigravity-gemini-3-flash',
       );
-      expect((preset.fixer as any).model).toBe(
-        'google/antigravity-gemini-3-flash',
-      );
+      expect((preset.fixer as any).model).toBe('openai/gpt-5.3-codex');
     });
   });
 });
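
The mixed-preset tests above encode a simple preference ladder for the fixer agent. As an illustrative sketch (not the package's exported API), the decision the diff implements in `generateAntigravityMixedPreset` can be reduced to:

```typescript
// Sketch of the fixer preference ladder from the mixed-preset tests:
// OpenAI codex first, then the Chutes support model, then Antigravity Flash.
// The model IDs are taken from the MODEL_MAPPINGS in this commit.
interface ProviderFlags {
  hasOpenAI: boolean;
  hasChutes: boolean;
}

function pickFixerModel(
  flags: ProviderFlags,
  chutesSupport = 'chutes/minimax-m2.1',
): string {
  if (flags.hasOpenAI) return 'openai/gpt-5.3-codex';
  if (flags.hasChutes) return chutesSupport;
  return 'google/antigravity-gemini-3-flash';
}

console.log(pickFixerModel({ hasOpenAI: true, hasChutes: true }));
// → openai/gpt-5.3-codex
```

The same ladder explains why the expectations at lines 360 and 423 of the test file changed from Antigravity Flash to `openai/gpt-5.3-codex`: once OpenAI is present, it wins the fixer slot.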

+ 321 - 16
src/cli/providers.ts

@@ -2,6 +2,17 @@ import { DEFAULT_AGENT_MCPS } from '../config/agent-mcps';
 import { RECOMMENDED_SKILLS } from './skills';
 import type { InstallConfig } from './types';
 
+const AGENT_NAMES = [
+  'orchestrator',
+  'oracle',
+  'designer',
+  'explorer',
+  'librarian',
+  'fixer',
+] as const;
+
+type AgentName = (typeof AGENT_NAMES)[number];
+
 // Model mappings by provider priority
 export const MODEL_MAPPINGS = {
   kimi: {
@@ -13,13 +24,37 @@ export const MODEL_MAPPINGS = {
     fixer: { model: 'kimi-for-coding/k2p5', variant: 'low' },
   },
   openai: {
-    orchestrator: { model: 'openai/gpt-5.2-codex' },
-    oracle: { model: 'openai/gpt-5.2-codex', variant: 'high' },
+    orchestrator: { model: 'openai/gpt-5.3-codex' },
+    oracle: { model: 'openai/gpt-5.3-codex', variant: 'high' },
     librarian: { model: 'openai/gpt-5.1-codex-mini', variant: 'low' },
     explorer: { model: 'openai/gpt-5.1-codex-mini', variant: 'low' },
     designer: { model: 'openai/gpt-5.1-codex-mini', variant: 'medium' },
     fixer: { model: 'openai/gpt-5.1-codex-mini', variant: 'low' },
   },
+  anthropic: {
+    orchestrator: { model: 'anthropic/claude-opus-4-6' },
+    oracle: { model: 'anthropic/claude-opus-4-6', variant: 'high' },
+    librarian: { model: 'anthropic/claude-sonnet-4-5', variant: 'low' },
+    explorer: { model: 'anthropic/claude-haiku-4-5', variant: 'low' },
+    designer: { model: 'anthropic/claude-sonnet-4-5', variant: 'medium' },
+    fixer: { model: 'anthropic/claude-sonnet-4-5', variant: 'low' },
+  },
+  copilot: {
+    orchestrator: { model: 'github-copilot/grok-code-fast-1' },
+    oracle: { model: 'github-copilot/grok-code-fast-1', variant: 'high' },
+    librarian: { model: 'github-copilot/grok-code-fast-1', variant: 'low' },
+    explorer: { model: 'github-copilot/grok-code-fast-1', variant: 'low' },
+    designer: { model: 'github-copilot/grok-code-fast-1', variant: 'medium' },
+    fixer: { model: 'github-copilot/grok-code-fast-1', variant: 'low' },
+  },
+  'zai-plan': {
+    orchestrator: { model: 'zai-coding-plan/glm-4.7' },
+    oracle: { model: 'zai-coding-plan/glm-4.7', variant: 'high' },
+    librarian: { model: 'zai-coding-plan/glm-4.7', variant: 'low' },
+    explorer: { model: 'zai-coding-plan/glm-4.7', variant: 'low' },
+    designer: { model: 'zai-coding-plan/glm-4.7', variant: 'medium' },
+    fixer: { model: 'zai-coding-plan/glm-4.7', variant: 'low' },
+  },
   antigravity: {
     orchestrator: { model: 'google/antigravity-gemini-3-flash' },
     oracle: { model: 'google/antigravity-gemini-3-pro' },
@@ -37,6 +72,14 @@ export const MODEL_MAPPINGS = {
     },
     fixer: { model: 'google/antigravity-gemini-3-flash', variant: 'low' },
   },
+  chutes: {
+    orchestrator: { model: 'chutes/kimi-k2.5' },
+    oracle: { model: 'chutes/kimi-k2.5', variant: 'high' },
+    librarian: { model: 'chutes/minimax-m2.1', variant: 'low' },
+    explorer: { model: 'chutes/minimax-m2.1', variant: 'low' },
+    designer: { model: 'chutes/kimi-k2.5', variant: 'medium' },
+    fixer: { model: 'chutes/minimax-m2.1', variant: 'low' },
+  },
   'zen-free': {
     orchestrator: { model: 'opencode/big-pickle' },
     oracle: { model: 'opencode/big-pickle', variant: 'high' },
@@ -88,12 +131,22 @@ export function generateAntigravityMixedPreset(
     model: 'google/antigravity-gemini-3-flash',
   };
 
-  // Orchestrator: Kimi if hasKimi, else keep existing if exists, else antigravity
+  const chutesPrimary =
+    config.selectedChutesPrimaryModel ??
+    MODEL_MAPPINGS.chutes.orchestrator.model;
+  const chutesSupport =
+    config.selectedChutesSecondaryModel ?? MODEL_MAPPINGS.chutes.explorer.model;
+
+  // Orchestrator: Kimi if hasKimi, else Chutes Kimi if enabled, else antigravity
   if (config.hasKimi) {
     result.orchestrator = createAgentConfig(
       'orchestrator',
       MODEL_MAPPINGS.kimi.orchestrator,
     );
+  } else if (config.hasChutes) {
+    result.orchestrator = createAgentConfig('orchestrator', {
+      model: chutesPrimary,
+    });
   } else if (!result.orchestrator) {
     result.orchestrator = createAgentConfig(
       'orchestrator',
@@ -111,23 +164,50 @@ export function generateAntigravityMixedPreset(
     );
   }
 
-  // Explorer, Librarian, Designer, Fixer: Always use Antigravity Flash
+  // Explorer stays flash-first for speed.
   result.explorer = createAgentConfig('explorer', {
     ...antigravityFlash,
     variant: 'low',
   });
-  result.librarian = createAgentConfig('librarian', {
-    ...antigravityFlash,
-    variant: 'low',
-  });
-  result.designer = createAgentConfig('designer', {
-    ...antigravityFlash,
-    variant: 'medium',
-  });
-  result.fixer = createAgentConfig('fixer', {
-    ...antigravityFlash,
-    variant: 'low',
-  });
+
+  // Librarian/Designer prefer Kimi-K2.5 via Chutes when available.
+  if (config.hasChutes) {
+    result.librarian = createAgentConfig('librarian', {
+      model: chutesSupport,
+      variant: 'low',
+    });
+    result.designer = createAgentConfig('designer', {
+      model: chutesPrimary,
+      variant: 'medium',
+    });
+  } else {
+    result.librarian = createAgentConfig('librarian', {
+      ...antigravityFlash,
+      variant: 'low',
+    });
+    result.designer = createAgentConfig('designer', {
+      ...antigravityFlash,
+      variant: 'medium',
+    });
+  }
+
+  // Fixer prefers OpenAI codex when available.
+  if (config.hasOpenAI) {
+    result.fixer = createAgentConfig('fixer', {
+      ...MODEL_MAPPINGS.openai.oracle,
+      variant: 'low',
+    });
+  } else if (config.hasChutes) {
+    result.fixer = createAgentConfig('fixer', {
+      model: chutesSupport,
+      variant: 'low',
+    });
+  } else {
+    result.fixer = createAgentConfig('fixer', {
+      ...antigravityFlash,
+      variant: 'low',
+    });
+  }
 
   return result;
 }
@@ -144,7 +224,11 @@ export function generateLiteConfig(
   let activePreset:
     | 'kimi'
     | 'openai'
+    | 'anthropic'
+    | 'copilot'
+    | 'zai-plan'
     | 'antigravity'
+    | 'chutes'
     | 'antigravity-mixed-both'
     | 'antigravity-mixed-kimi'
     | 'antigravity-mixed-openai'
@@ -167,6 +251,14 @@ export function generateLiteConfig(
     activePreset = 'kimi';
   } else if (installConfig.hasOpenAI) {
     activePreset = 'openai';
+  } else if (installConfig.hasAnthropic) {
+    activePreset = 'anthropic';
+  } else if (installConfig.hasCopilot) {
+    activePreset = 'copilot';
+  } else if (installConfig.hasZaiPlan) {
+    activePreset = 'zai-plan';
+  } else if (installConfig.hasChutes) {
+    activePreset = 'chutes';
   }
 
   config.preset = activePreset;
@@ -200,6 +292,185 @@ export function generateLiteConfig(
     };
   };
 
+  if (installConfig.dynamicModelPlan) {
+    const dynamicPreset = Object.fromEntries(
+      Object.entries(installConfig.dynamicModelPlan.agents).map(
+        ([agentName, assignment]) => [
+          agentName,
+          createAgentConfig(
+            agentName,
+            assignment as { model: string; variant?: string },
+          ),
+        ],
+      ),
+    );
+
+    config.preset = 'dynamic';
+    (config.presets as Record<string, unknown>).dynamic = dynamicPreset;
+    config.fallback = {
+      enabled: true,
+      timeoutMs: 15000,
+      chains: installConfig.dynamicModelPlan.chains,
+    };
+
+    if (installConfig.hasTmux) {
+      config.tmux = {
+        enabled: true,
+        layout: 'main-vertical',
+        main_pane_size: 60,
+      };
+    }
+
+    return config;
+  }
+
+  const applyOpenCodeFreeAssignments = (
+    presetAgents: Record<string, unknown>,
+    hasExternalProviders: boolean,
+  ) => {
+    if (!installConfig.useOpenCodeFreeModels) return;
+
+    const primaryModel = installConfig.selectedOpenCodePrimaryModel;
+    const secondaryModel =
+      installConfig.selectedOpenCodeSecondaryModel ?? primaryModel;
+
+    if (!primaryModel || !secondaryModel) return;
+
+    const setAgent = (agentName: string, model: string) => {
+      presetAgents[agentName] = createAgentConfig(agentName, { model });
+    };
+
+    if (!hasExternalProviders) {
+      setAgent('orchestrator', primaryModel);
+      setAgent('oracle', primaryModel);
+      setAgent('designer', primaryModel);
+    }
+
+    setAgent('librarian', secondaryModel);
+    setAgent('explorer', secondaryModel);
+    setAgent('fixer', secondaryModel);
+  };
+
+  const applyChutesAssignments = (presetAgents: Record<string, unknown>) => {
+    if (!installConfig.hasChutes) return;
+
+    const hasExternalProviders =
+      installConfig.hasKimi ||
+      installConfig.hasOpenAI ||
+      installConfig.hasAntigravity;
+
+    if (hasExternalProviders && activePreset !== 'chutes') return;
+
+    const primaryModel = installConfig.selectedChutesPrimaryModel;
+    const secondaryModel =
+      installConfig.selectedChutesSecondaryModel ?? primaryModel;
+
+    if (!primaryModel || !secondaryModel) return;
+
+    const setAgent = (agentName: string, model: string) => {
+      presetAgents[agentName] = createAgentConfig(agentName, { model });
+    };
+
+    setAgent('orchestrator', primaryModel);
+    setAgent('oracle', primaryModel);
+    setAgent('designer', primaryModel);
+    setAgent('librarian', secondaryModel);
+    setAgent('explorer', secondaryModel);
+    setAgent('fixer', secondaryModel);
+  };
+
+  const dedupeModels = (models: Array<string | undefined>) => {
+    const seen = new Set<string>();
+    const result: string[] = [];
+
+    for (const model of models) {
+      if (!model || seen.has(model)) continue;
+      seen.add(model);
+      result.push(model);
+    }
+
+    return result;
+  };
+
+  const getOpenCodeFallbackForAgent = (agentName: AgentName) => {
+    if (!installConfig.useOpenCodeFreeModels) return undefined;
+    const isSupport =
+      agentName === 'explorer' ||
+      agentName === 'librarian' ||
+      agentName === 'fixer';
+    if (isSupport) {
+      return (
+        installConfig.selectedOpenCodeSecondaryModel ??
+        installConfig.selectedOpenCodePrimaryModel
+      );
+    }
+    return installConfig.selectedOpenCodePrimaryModel;
+  };
+
+  const getChutesFallbackForAgent = (agentName: AgentName) => {
+    if (!installConfig.hasChutes) return undefined;
+    const isSupport =
+      agentName === 'explorer' ||
+      agentName === 'librarian' ||
+      agentName === 'fixer';
+    if (isSupport) {
+      return (
+        installConfig.selectedChutesSecondaryModel ??
+        installConfig.selectedChutesPrimaryModel ??
+        MODEL_MAPPINGS.chutes[agentName].model
+      );
+    }
+    return (
+      installConfig.selectedChutesPrimaryModel ??
+      MODEL_MAPPINGS.chutes[agentName].model
+    );
+  };
+
+  const attachFallbackConfig = (presetAgents: Record<string, unknown>) => {
+    const chains: Partial<Record<AgentName, string[]>> = {};
+
+    for (const agentName of AGENT_NAMES) {
+      const currentModel = (
+        presetAgents[agentName] as { model?: string } | undefined
+      )?.model;
+
+      const chain = dedupeModels([
+        currentModel,
+        installConfig.hasOpenAI
+          ? MODEL_MAPPINGS.openai[agentName].model
+          : undefined,
+        installConfig.hasAnthropic
+          ? MODEL_MAPPINGS.anthropic[agentName].model
+          : undefined,
+        installConfig.hasCopilot
+          ? MODEL_MAPPINGS.copilot[agentName].model
+          : undefined,
+        installConfig.hasZaiPlan
+          ? MODEL_MAPPINGS['zai-plan'][agentName].model
+          : undefined,
+        installConfig.hasKimi
+          ? MODEL_MAPPINGS.kimi[agentName].model
+          : undefined,
+        installConfig.hasAntigravity
+          ? MODEL_MAPPINGS.antigravity[agentName].model
+          : undefined,
+        getChutesFallbackForAgent(agentName),
+        getOpenCodeFallbackForAgent(agentName),
+        MODEL_MAPPINGS['zen-free'][agentName].model,
+      ]);
+
+      if (chain.length > 0) {
+        chains[agentName] = chain;
+      }
+    }
+
+    config.fallback = {
+      enabled: true,
+      timeoutMs: 15000,
+      chains,
+    };
+  };
+
   const buildPreset = (mappingName: keyof typeof MODEL_MAPPINGS) => {
     const mapping = MODEL_MAPPINGS[mappingName];
     return Object.fromEntries(
@@ -229,10 +500,44 @@ export function generateLiteConfig(
     // Use dedicated mixed preset generator
     (config.presets as Record<string, unknown>)[activePreset] =
       generateAntigravityMixedPreset(installConfig);
+
+    applyOpenCodeFreeAssignments(
+      (config.presets as Record<string, Record<string, unknown>>)[activePreset],
+      installConfig.hasKimi ||
+        installConfig.hasOpenAI ||
+        installConfig.hasAnthropic ||
+        installConfig.hasCopilot ||
+        installConfig.hasZaiPlan ||
+        installConfig.hasAntigravity ||
+        installConfig.hasChutes === true,
+    );
+    applyChutesAssignments(
+      (config.presets as Record<string, Record<string, unknown>>)[activePreset],
+    );
+    attachFallbackConfig(
+      (config.presets as Record<string, Record<string, unknown>>)[activePreset],
+    );
   } else {
     // Use standard buildPreset for pure presets
     (config.presets as Record<string, unknown>)[activePreset] =
       buildPreset(activePreset);
+
+    applyOpenCodeFreeAssignments(
+      (config.presets as Record<string, Record<string, unknown>>)[activePreset],
+      installConfig.hasKimi ||
+        installConfig.hasOpenAI ||
+        installConfig.hasAnthropic ||
+        installConfig.hasCopilot ||
+        installConfig.hasZaiPlan ||
+        installConfig.hasAntigravity ||
+        installConfig.hasChutes === true,
+    );
+    applyChutesAssignments(
+      (config.presets as Record<string, Record<string, unknown>>)[activePreset],
+    );
+    attachFallbackConfig(
+      (config.presets as Record<string, Record<string, unknown>>)[activePreset],
+    );
   }
 
   if (installConfig.hasTmux) {
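
The chain assembly in `attachFallbackConfig` is easiest to see with a concrete run. `dedupeModels` (copied from the hunk above) keeps first occurrences in priority order, so each agent's chain starts with its current assignment, walks the enabled providers, and ends at the `zen-free` safety net:

```typescript
// dedupeModels as introduced in this commit: preserve order, drop
// undefined entries and duplicates.
const dedupeModels = (models: Array<string | undefined>): string[] => {
  const seen = new Set<string>();
  const result: string[] = [];

  for (const model of models) {
    if (!model || seen.has(model)) continue;
    seen.add(model);
    result.push(model);
  }

  return result;
};

// Example: oracle already assigned the OpenAI codex model, with Kimi also
// enabled. The OpenAI mapping duplicates the current model and is dropped.
const oracleChain = dedupeModels([
  'openai/gpt-5.3-codex', // current preset assignment
  'openai/gpt-5.3-codex', // MODEL_MAPPINGS.openai.oracle (duplicate)
  'kimi-for-coding/k2p5', // MODEL_MAPPINGS.kimi.oracle
  'opencode/big-pickle', // zen-free safety net, always last
]);

console.log(oracleChain);
// → [ 'openai/gpt-5.3-codex', 'kimi-for-coding/k2p5', 'opencode/big-pickle' ]
```

Because `zen-free` is unconditionally appended, every chain is non-empty, which is why the `chain.length > 0` guard before writing `chains[agentName]` effectively always passes.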

+ 90 - 0
src/cli/system.test.ts

@@ -1,6 +1,11 @@
 /// <reference types="bun-types" />
 
 import { describe, expect, mock, test } from 'bun:test';
+import { parseOpenCodeModelsVerboseOutput } from './opencode-models';
+import {
+  pickBestCodingOpenCodeModel,
+  pickSupportOpenCodeModel,
+} from './opencode-selection';
 import {
   fetchLatestVersion,
   getOpenCodeVersion,
@@ -63,4 +68,89 @@ describe('system', () => {
       expect(version).toBeNull();
     }
   });
+
+  test('parseOpenCodeModelsVerboseOutput extracts only opencode free models', () => {
+    const output = `opencode/glm-4.7-free
+{
+  "id": "glm-4.7-free",
+  "providerID": "opencode",
+  "name": "GLM-4.7 Free",
+  "status": "active",
+  "cost": { "input": 0, "output": 0, "cache": { "read": 0, "write": 0 } },
+  "limit": { "context": 204800, "output": 131072 },
+  "capabilities": { "reasoning": true, "toolcall": true, "attachment": false }
+}
+
+openai/gpt-5.3-codex
+{
+  "id": "gpt-5.3-codex",
+  "providerID": "openai",
+  "name": "GPT-5.3 Codex",
+  "status": "active",
+  "cost": { "input": 1, "output": 1, "cache": { "read": 0, "write": 0 } },
+  "limit": { "context": 400000, "output": 128000 },
+  "capabilities": { "reasoning": true, "toolcall": true, "attachment": true }
+}
+`;
+
+    const models = parseOpenCodeModelsVerboseOutput(output);
+    expect(models).toHaveLength(1);
+    expect(models[0]?.model).toBe('opencode/glm-4.7-free');
+  });
+
+  test('pickBestCodingOpenCodeModel prefers stronger coding profile', () => {
+    const models = [
+      {
+        model: 'opencode/gpt-5-nano',
+        name: 'GPT-5 Nano',
+        status: 'active' as const,
+        contextLimit: 400000,
+        outputLimit: 128000,
+        reasoning: true,
+        toolcall: true,
+        attachment: true,
+      },
+      {
+        model: 'opencode/trinity-large-preview-free',
+        name: 'Trinity Large Preview',
+        status: 'active' as const,
+        contextLimit: 131072,
+        outputLimit: 131072,
+        reasoning: false,
+        toolcall: true,
+        attachment: false,
+      },
+    ];
+
+    const best = pickBestCodingOpenCodeModel(models);
+    expect(best?.model).toBe('opencode/gpt-5-nano');
+  });
+
+  test('pickSupportOpenCodeModel picks helper model different from primary', () => {
+    const models = [
+      {
+        model: 'opencode/glm-4.7-free',
+        name: 'GLM-4.7 Free',
+        status: 'active' as const,
+        contextLimit: 204800,
+        outputLimit: 131072,
+        reasoning: true,
+        toolcall: true,
+        attachment: false,
+      },
+      {
+        model: 'opencode/gpt-5-nano',
+        name: 'GPT-5 Nano',
+        status: 'active' as const,
+        contextLimit: 400000,
+        outputLimit: 128000,
+        reasoning: true,
+        toolcall: true,
+        attachment: true,
+      },
+    ];
+
+    const support = pickSupportOpenCodeModel(models, 'opencode/glm-4.7-free');
+    expect(support?.model).toBe('opencode/gpt-5-nano');
+  });
 });
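
The implementation of `parseOpenCodeModelsVerboseOutput` is not part of this diff; a minimal parser consistent with the fixture in the test above — header lines of the form `provider/model` followed by a pretty-printed JSON block, entries separated by blank lines — might look like this (a sketch under those assumptions, not the shipped code):

```typescript
// Hypothetical parser matching the test fixture: keep only models whose
// providerID is "opencode" (the free tier), discard everything else.
interface ParsedFreeModel {
  model: string;
  providerID: string;
  name: string;
}

function parseVerbose(output: string): ParsedFreeModel[] {
  const models: ParsedFreeModel[] = [];

  // Entries are separated by blank lines; each entry is a header line
  // followed by a JSON object (which itself contains no blank lines).
  for (const block of output.split(/\n\s*\n/)) {
    const newline = block.indexOf('\n');
    if (newline === -1) continue;

    const header = block.slice(0, newline).trim();
    try {
      const meta = JSON.parse(block.slice(newline + 1));
      if (meta.providerID === 'opencode') {
        models.push({ model: header, providerID: meta.providerID, name: meta.name });
      }
    } catch {
      continue; // skip malformed blocks rather than failing the whole parse
    }
  }

  return models;
}

const sample = [
  'opencode/glm-4.7-free',
  '{ "id": "glm-4.7-free", "providerID": "opencode", "name": "GLM-4.7 Free" }',
  '',
  'openai/gpt-5.3-codex',
  '{ "providerID": "openai", "name": "GPT-5.3 Codex" }',
].join('\n');

const models = parseVerbose(sample);
console.log(models.length); // only the opencode entry survives
```

This mirrors the first test's expectation: the `openai/gpt-5.3-codex` entry is filtered out and exactly one `opencode/` model remains.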

+ 61 - 0
src/cli/types.ts

@@ -4,9 +4,53 @@ export interface InstallArgs {
   tui: boolean;
   kimi?: BooleanArg;
   openai?: BooleanArg;
+  anthropic?: BooleanArg;
+  copilot?: BooleanArg;
+  zaiPlan?: BooleanArg;
   antigravity?: BooleanArg;
+  chutes?: BooleanArg;
   tmux?: BooleanArg;
   skills?: BooleanArg;
+  opencodeFree?: BooleanArg;
+  opencodeFreeModel?: string;
+}
+
+export interface OpenCodeFreeModel {
+  providerID: string;
+  model: string;
+  name: string;
+  status: 'alpha' | 'beta' | 'deprecated' | 'active';
+  contextLimit: number;
+  outputLimit: number;
+  reasoning: boolean;
+  toolcall: boolean;
+  attachment: boolean;
+  dailyRequestLimit?: number;
+}
+
+export interface DiscoveredModel {
+  providerID: string;
+  model: string;
+  name: string;
+  status: 'alpha' | 'beta' | 'deprecated' | 'active';
+  contextLimit: number;
+  outputLimit: number;
+  reasoning: boolean;
+  toolcall: boolean;
+  attachment: boolean;
+  dailyRequestLimit?: number;
+  costInput?: number;
+  costOutput?: number;
+}
+
+export interface DynamicAgentAssignment {
+  model: string;
+  variant?: string;
+}
+
+export interface DynamicModelPlan {
+  agents: Record<string, DynamicAgentAssignment>;
+  chains: Record<string, string[]>;
 }
 
 export interface OpenCodeConfig {
@@ -19,8 +63,21 @@ export interface OpenCodeConfig {
 export interface InstallConfig {
   hasKimi: boolean;
   hasOpenAI: boolean;
+  hasAnthropic?: boolean;
+  hasCopilot?: boolean;
+  hasZaiPlan?: boolean;
   hasAntigravity: boolean;
+  hasChutes?: boolean;
   hasOpencodeZen: boolean;
+  useOpenCodeFreeModels?: boolean;
+  preferredOpenCodeModel?: string;
+  selectedOpenCodePrimaryModel?: string;
+  selectedOpenCodeSecondaryModel?: string;
+  availableOpenCodeFreeModels?: OpenCodeFreeModel[];
+  selectedChutesPrimaryModel?: string;
+  selectedChutesSecondaryModel?: string;
+  availableChutesFreeModels?: OpenCodeFreeModel[];
+  dynamicModelPlan?: DynamicModelPlan;
   hasTmux: boolean;
   installSkills: boolean;
   installCustomSkills: boolean;
@@ -36,7 +93,11 @@ export interface DetectedConfig {
   isInstalled: boolean;
   hasKimi: boolean;
   hasOpenAI: boolean;
+  hasAnthropic?: boolean;
+  hasCopilot?: boolean;
+  hasZaiPlan?: boolean;
   hasAntigravity: boolean;
+  hasChutes?: boolean;
   hasOpencodeZen: boolean;
   hasTmux: boolean;
 }
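
For orientation, a `DynamicModelPlan` value as consumed by the dynamic branch in `generateLiteConfig` might look like the following (a hypothetical example; model IDs are taken from the `MODEL_MAPPINGS` in this commit, and agent coverage may be partial in practice):

```json
{
  "agents": {
    "orchestrator": { "model": "kimi-for-coding/k2p5" },
    "oracle": { "model": "openai/gpt-5.3-codex", "variant": "high" }
  },
  "chains": {
    "oracle": [
      "openai/gpt-5.3-codex",
      "kimi-for-coding/k2p5",
      "opencode/big-pickle"
    ]
  }
}
```

When a plan like this is present, `generateLiteConfig` short-circuits: it sets `preset` to `dynamic`, writes these assignments as the preset, and uses the plan's `chains` verbatim for the 15s failover config.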

+ 1 - 0
src/config/constants.ts

@@ -37,6 +37,7 @@ export const POLL_INTERVAL_BACKGROUND_MS = 2000;
 // Timeouts
 export const DEFAULT_TIMEOUT_MS = 2 * 60 * 1000; // 2 minutes
 export const MAX_POLL_TIME_MS = 5 * 60 * 1000; // 5 minutes
+export const FALLBACK_FAILOVER_TIMEOUT_MS = 15_000;
 
 // Polling stability
 export const STABLE_POLLS_THRESHOLD = 3;

+ 58 - 0
src/config/loader.test.ts

@@ -256,6 +256,64 @@ describe('deepMerge behavior', () => {
     const config = loadPluginConfig(projectDir);
     expect(config.agents?.oracle?.model).toBe('user/model');
   });
+
+  test('merges fallback timeout and chains from user and project', () => {
+    const userOpencodeDir = path.join(userConfigDir, 'opencode');
+    fs.mkdirSync(userOpencodeDir, { recursive: true });
+    fs.writeFileSync(
+      path.join(userOpencodeDir, 'oh-my-opencode-slim.json'),
+      JSON.stringify({
+        fallback: {
+          timeoutMs: 15000,
+          chains: {
+            oracle: ['openai/gpt-5.2-codex', 'opencode/glm-4.7-free'],
+          },
+        },
+      }),
+    );
+
+    const projectDir = path.join(tempDir, 'project');
+    const projectConfigDir = path.join(projectDir, '.opencode');
+    fs.mkdirSync(projectConfigDir, { recursive: true });
+    fs.writeFileSync(
+      path.join(projectConfigDir, 'oh-my-opencode-slim.json'),
+      JSON.stringify({
+        fallback: {
+          chains: {
+            explorer: ['google/antigravity-gemini-3-flash'],
+          },
+        },
+      }),
+    );
+
+    const config = loadPluginConfig(projectDir);
+    expect(config.fallback?.timeoutMs).toBe(15000);
+    expect(config.fallback?.chains.oracle).toEqual([
+      'openai/gpt-5.2-codex',
+      'opencode/glm-4.7-free',
+    ]);
+    expect(config.fallback?.chains.explorer).toEqual([
+      'google/antigravity-gemini-3-flash',
+    ]);
+  });
+
+  test('rejects fallback chains with unsupported agent keys', () => {
+    const projectDir = path.join(tempDir, 'project');
+    const projectConfigDir = path.join(projectDir, '.opencode');
+    fs.mkdirSync(projectConfigDir, { recursive: true });
+    fs.writeFileSync(
+      path.join(projectConfigDir, 'oh-my-opencode-slim.json'),
+      JSON.stringify({
+        fallback: {
+          chains: {
+            writing: ['openai/gpt-5.2-codex'],
+          },
+        },
+      }),
+    );
+
+    expect(loadPluginConfig(projectDir)).toEqual({});
+  });
 });
 
 describe('preset resolution', () => {

+ 1 - 0
src/config/loader.ts

@@ -159,6 +159,7 @@ export function loadPluginConfig(directory: string): PluginConfig {
       ...projectConfig,
       agents: deepMerge(config.agents, projectConfig.agents),
       tmux: deepMerge(config.tmux, projectConfig.tmux),
+      fallback: deepMerge(config.fallback, projectConfig.fallback),
     };
   }
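
The one-line loader change relies on `deepMerge` recursively overlaying project values onto user values. A self-contained sketch of that semantics (the real `deepMerge` lives elsewhere in the repo and may differ in detail) reproduces the behavior the loader tests assert — the project's `chains.explorer` is added while the user's `timeoutMs` and `chains.oracle` survive:

```typescript
// Hedged sketch: plain objects merge key-by-key recursively; any other
// value (arrays, scalars) is replaced by the override when present.
type Obj = Record<string, unknown>;

function deepMerge(base: unknown, override: unknown): unknown {
  const isPlainObject = (v: unknown): v is Obj =>
    typeof v === 'object' && v !== null && !Array.isArray(v);

  if (isPlainObject(base) && isPlainObject(override)) {
    const out: Obj = { ...base };
    for (const [key, value] of Object.entries(override)) {
      out[key] = deepMerge(base[key], value);
    }
    return out;
  }
  return override === undefined ? base : override;
}

// User-level fallback config vs. project-level fallback config,
// matching the fixtures in loader.test.ts above.
const userFallback = {
  timeoutMs: 15000,
  chains: { oracle: ['openai/gpt-5.2-codex', 'opencode/glm-4.7-free'] },
};
const projectFallback = {
  chains: { explorer: ['google/antigravity-gemini-3-flash'] },
};

const merged = deepMerge(userFallback, projectFallback) as {
  timeoutMs: number;
  chains: Record<string, string[]>;
};

console.log(merged.timeoutMs); // user value survives
console.log(Object.keys(merged.chains).sort()); // both chains present
```

Note that chain arrays are replaced wholesale rather than concatenated, so a project config can fully override a single agent's chain without inheriting stale entries from the user config.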
 

+ 33 - 0
src/config/schema.ts

@@ -1,5 +1,29 @@
 import { z } from 'zod';
 
+const FALLBACK_AGENT_NAMES = [
+  'orchestrator',
+  'oracle',
+  'designer',
+  'explorer',
+  'librarian',
+  'fixer',
+] as const;
+
+const AgentModelChainSchema = z.array(z.string()).min(1);
+
+const FallbackChainsSchema = z
+  .object({
+    orchestrator: AgentModelChainSchema.optional(),
+    oracle: AgentModelChainSchema.optional(),
+    designer: AgentModelChainSchema.optional(),
+    explorer: AgentModelChainSchema.optional(),
+    librarian: AgentModelChainSchema.optional(),
+    fixer: AgentModelChainSchema.optional(),
+  })
+  .strict();
+
+export type FallbackAgentName = (typeof FALLBACK_AGENT_NAMES)[number];
+
 // Agent override configuration (distinct from SDK's AgentConfig)
 export const AgentOverrideConfigSchema = z.object({
   model: z.string().optional(),
@@ -46,6 +70,14 @@ export const BackgroundTaskConfigSchema = z.object({
 
 export type BackgroundTaskConfig = z.infer<typeof BackgroundTaskConfigSchema>;
 
+export const FailoverConfigSchema = z.object({
+  enabled: z.boolean().default(true),
+  timeoutMs: z.number().min(1000).max(120000).default(15000),
+  chains: FallbackChainsSchema.default({}),
+});
+
+export type FailoverConfig = z.infer<typeof FailoverConfigSchema>;
+
 // Main plugin config
 export const PluginConfigSchema = z.object({
   preset: z.string().optional(),
@@ -54,6 +86,7 @@ export const PluginConfigSchema = z.object({
   disabled_mcps: z.array(z.string()).optional(),
   tmux: TmuxConfigSchema.optional(),
   background: BackgroundTaskConfigSchema.optional(),
+  fallback: FailoverConfigSchema.optional(),
 });
 
 export type PluginConfig = z.infer<typeof PluginConfigSchema>;
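
Under the new schema, a valid `fallback` section in `oh-my-opencode-slim.json` looks like this (`timeoutMs` must be between 1000 and 120000; unknown agent keys are rejected by the `.strict()` chains object, which is what the "unsupported agent keys" loader test exercises):

```json
{
  "fallback": {
    "enabled": true,
    "timeoutMs": 15000,
    "chains": {
      "oracle": ["openai/gpt-5.3-codex", "opencode/big-pickle"],
      "explorer": ["google/antigravity-gemini-3-flash"]
    }
  }
}
```

All three fields carry defaults, so `"fallback": {}` is also valid and yields `enabled: true`, a 15s timeout, and empty chains.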