Simplify installer: OpenAI default, remove scoring/chutes/antigravity (#170)

* Phase 1: Delete scoring engine, dynamic model selection, chutes, antigravity, opencode-free files

Remove:
- src/cli/scoring-v2/ (entire directory)
- dynamic-model-selection.ts + tests
- chutes-selection.ts + tests
- external-rankings.ts
- opencode-models.ts + tests
- opencode-selection.ts
- model-selection.ts + tests
- precedence-resolver.ts + tests
- docs/antigravity.md
- docs/provider-combination-matrix.md

* Phase 2-7: Simplify providers, install, index, types, config-manager, config-io

- providers.ts: Only 4 providers (openai, kimi, copilot, zai-plan), always defaults to OpenAI
- install.ts: No interactive questions, generates OpenAI by default, links to provider-configurations.md
- index.ts: Removed preset flags, only --no-tui, --tmux, --skills, --force, --models-only, --help
- types.ts: Removed InstallConfig fields for chutes/opencode-free/antigravity/balanced-spend
- config-manager.ts: Removed re-exports of deleted modules
- config-io.ts: Removed addAntigravityPlugin, addGoogleProvider, addChutesProvider, detectAntigravityConfig
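The surviving flag surface is small enough to sketch. This is an illustrative parser only — the names, defaults, and `yes|no` handling are assumptions, not the project's actual `index.ts`:

```typescript
// Hypothetical sketch of the simplified CLI flag set described above.
// Defaults (tmux/skills on, TUI on) are assumptions for illustration.
type InstallFlags = {
  noTui: boolean;
  tmux: boolean;
  skills: boolean;
  force: boolean;
  help: boolean;
};

function parseInstallFlags(argv: string[]): InstallFlags {
  const has = (name: string) => argv.includes(name);
  // Flags like --tmux=no / --skills=yes carry an explicit yes|no value.
  const yesNo = (name: string, fallback: boolean) => {
    const arg = argv.find((a) => a.startsWith(`${name}=`));
    return arg ? arg.split("=")[1] === "yes" : fallback;
  };
  return {
    noTui: has("--no-tui"),
    tmux: yesNo("--tmux", true),
    skills: yesNo("--skills", true),
    force: has("--force"),
    help: has("--help") || has("-h"),
  };
}
```

With all provider questions gone, everything the installer needs fits in this one boolean bundle.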

* Phase 8-10: Create provider-configurations.md, update README and quick-reference

- Created docs/provider-configurations.md with config examples for all 4 providers
- Updated README.md: removed antigravity/chutes/opencode-free references, simplified install section
- Updated docs/quick-reference.md: removed antigravity/chutes/opencode-free sections, added link to provider-configurations.md

* Phase 11-12: Update tests for simplified codebase, all 330 tests passing

- system.test.ts: Removed parseOpenCodeModelsVerboseOutput tests (deleted module)
- providers.test.ts: Rewrote to test only 4 providers (openai, kimi, copilot, zai-plan)
- config-io.test.ts: Removed addChutesProvider test, updated writeLiteConfig test for new format

* chore: remove --models-only flag (dead code after refactor)

* docs: add recommended models with cerebras/gemini alternatives

* fix: remove stale modelsOnly references (greptile review)

* fix: actually remove models subcommand and modelsOnly (ci fix)

* docs: update ZAI to GLM-5, Gemini to 3.1 Pro (latest models)

* feat: update Copilot defaults - Opus for leadership, Gemini for designer, Grok for support

* fix: correct model IDs to match models.dev (dots not hyphens, preview suffixes)
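The last fix corrects IDs like `claude-opus-4-6` to the models.dev form `claude-opus-4.6`. A naive dot-normalizer (an assumption for illustration — not the repo's code, and it does not handle the preview-suffix part of the fix) could look like:

```typescript
// Illustrative only: rejoin digit-hyphen-digit version fragments with dots,
// e.g. "claude-opus-4-6" -> "claude-opus-4.6". Non-version hyphens
// ("grok-code-fast-1") are left alone because no digit precedes the hyphen.
function normalizeModelId(id: string): string {
  return id.replace(/(\d)-(\d)/g, "$1.$2");
}
```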
alvinreal · 1 month ago · commit fb8958ab85

README.md (+44 -24)

@@ -15,27 +15,17 @@
 bunx oh-my-opencode-slim@latest install
 ```
 
-The installer can refresh and use OpenCode free models directly:
+The installer generates an OpenAI configuration by default (using `gpt-5.4` and `gpt-5-codex`); no provider questions are asked.
 
-```bash
-bunx oh-my-opencode-slim@latest install --no-tui --kimi=yes --openai=yes --antigravity=yes --chutes=yes --opencode-free=yes --opencode-free-model=auto --tmux=no --skills=yes
-```
-
-Then authenticate:
+For non-interactive mode:
 
 ```bash
-opencode auth login
+bunx oh-my-opencode-slim@latest install --no-tui --tmux=no --skills=yes
 ```
 
-Run `ping all agents` to verify everything works.
-
-OpenCode free-model mode uses `opencode models --refresh --verbose`, filters to free `opencode/*` models, and applies coding-first selection:
-- OpenCode-only mode can use multiple OpenCode free models across agents.
-- Hybrid mode can combine OpenCode free models with OpenAI, Kimi, and/or Antigravity.
-- In hybrid mode, `designer` stays on the external provider mapping.
-- Chutes mode auto-selects primary/support models with daily-cap awareness (300/2000/5000).
+### For Alternative Providers
 
-> **💡 Models are fully customizable.** Edit `~/.config/opencode/oh-my-opencode-slim.json` (or `.jsonc` for comments support) to assign any model to any agent.
+The default configuration uses OpenAI. To use Kimi, GitHub Copilot, or ZAI Coding Plan, see **[Provider Configurations](docs/provider-configurations.md)** for step-by-step instructions and config examples.
 
 ### For LLM Agents
 
@@ -49,7 +39,7 @@ https://raw.githubusercontent.com/alvinunreal/oh-my-opencode-slim/refs/heads/mas
 **Detailed installation guide:** [docs/installation.md](docs/installation.md)
 
 **Additional guides:**
-- **[Antigravity Setup](docs/antigravity.md)** - Complete guide for Antigravity provider configuration  
+- **[Provider Configurations](docs/provider-configurations.md)** - Config examples for all supported providers
 - **[Tmux Integration](docs/tmux-integration.md)** - Real-time agent monitoring with tmux
 
 ---
@@ -80,7 +70,12 @@ https://raw.githubusercontent.com/alvinunreal/oh-my-opencode-slim/refs/heads/mas
   </tr>
   <tr>
     <td colspan="2">
-      <b>Recommended Models:</b> <code>kimi-for-coding/k2p5</code> <code>openai/gpt-5.2-codex</code>
+      <b>Default Model:</b> <code>openai/gpt-5.4</code>
+    </td>
+  </tr>
+  <tr>
+    <td colspan="2">
+      <b>Recommended Models:</b> <code>openai/gpt-5.4</code> <code>kimi-for-coding/k2p5</code>
     </td>
   </tr>
 </table>
@@ -111,7 +106,12 @@ https://raw.githubusercontent.com/alvinunreal/oh-my-opencode-slim/refs/heads/mas
   </tr>
   <tr>
     <td colspan="2">
-      <b>Recommended Models:</b> <code>cerebras/zai-glm-4.7</code> <code>google/gemini-3-flash</code> <code>openai/gpt-5.1-codex-mini</code>
+      <b>Default Model:</b> <code>openai/gpt-5-codex</code>
+    </td>
+  </tr>
+  <tr>
+    <td colspan="2">
+      <b>Recommended Models:</b> <code>cerebras/zai-glm-4.7</code> <code>google/gemini-3.1-pro-preview</code> <code>openai/gpt-5-codex</code>
     </td>
   </tr>
 </table>
@@ -142,7 +142,12 @@ https://raw.githubusercontent.com/alvinunreal/oh-my-opencode-slim/refs/heads/mas
   </tr>
   <tr>
     <td colspan="2">
-      <b>Recommended Models:</b> <code>openai/gpt-5.2-codex</code> <code>kimi-for-coding/k2p5</code>
+      <b>Default Model:</b> <code>openai/gpt-5.4</code>
+    </td>
+  </tr>
+  <tr>
+    <td colspan="2">
+      <b>Recommended Models:</b> <code>openai/gpt-5.4</code> <code>kimi-for-coding/k2p5</code>
     </td>
   </tr>
 </table>
@@ -173,7 +178,12 @@ https://raw.githubusercontent.com/alvinunreal/oh-my-opencode-slim/refs/heads/mas
   </tr>
   <tr>
     <td colspan="2">
-      <b>Recommended Models:</b> <code>google/gemini-3-flash</code> <code>openai/gpt-5.1-codex-mini</code>
+      <b>Default Model:</b> <code>openai/gpt-5-codex</code>
+    </td>
+  </tr>
+  <tr>
+    <td colspan="2">
+      <b>Recommended Models:</b> <code>google/gemini-3.1-pro-preview</code> <code>openai/gpt-5-codex</code>
     </td>
   </tr>
 </table>
@@ -204,7 +214,12 @@ https://raw.githubusercontent.com/alvinunreal/oh-my-opencode-slim/refs/heads/mas
   </tr>
   <tr>
     <td colspan="2">
-      <b>Recommended Models:</b> <code>google/gemini-3-flash</code>
+      <b>Default Model:</b> <code>kimi-for-coding/k2p5</code>
+    </td>
+  </tr>
+  <tr>
+    <td colspan="2">
+      <b>Recommended Models:</b> <code>google/gemini-3.1-pro-preview</code>
     </td>
   </tr>
 </table>
@@ -235,7 +250,12 @@ https://raw.githubusercontent.com/alvinunreal/oh-my-opencode-slim/refs/heads/mas
   </tr>
   <tr>
     <td colspan="2">
-      <b>Recommended Models:</b> <code>cerebras/zai-glm-4.7</code> <code>google/gemini-3-flash</code> <code>openai/gpt-5.1-codex-mini</code>
+      <b>Default Model:</b> <code>openai/gpt-5-codex</code>
+    </td>
+  </tr>
+  <tr>
+    <td colspan="2">
+      <b>Recommended Models:</b> <code>cerebras/zai-glm-4.7</code> <code>google/gemini-3.1-pro-preview</code> <code>openai/gpt-5-codex</code>
     </td>
   </tr>
 </table>
@@ -244,10 +264,10 @@ https://raw.githubusercontent.com/alvinunreal/oh-my-opencode-slim/refs/heads/mas
 
 ## 📚 Documentation
 
-- **[Quick Reference](docs/quick-reference.md)** - Presets, Skills, MCPs, Tools, Configuration
+- **[Quick Reference](docs/quick-reference.md)** - Skills, MCPs, Tools, Configuration
+- **[Provider Configurations](docs/provider-configurations.md)** - Config examples for OpenAI, Kimi, Copilot, ZAI
 - **[Installation Guide](docs/installation.md)** - Detailed installation and troubleshooting
 - **[Cartography Skill](docs/cartography.md)** - Custom skill for repository mapping + codemap generation
-- **[Antigravity Setup](docs/antigravity.md)** - Complete guide for Antigravity provider configuration
 - **[Tmux Integration](docs/tmux-integration.md)** - Real-time agent monitoring with tmux
 
 ---

docs/antigravity.md (+0 -202)

@@ -1,202 +0,0 @@
-# Antigravity Setup Guide
-
-## Quick Setup
-
-1. Install with Antigravity support:
-   ```bash
-   bunx oh-my-opencode-slim install --antigravity=yes
-   ```
-
-2. Authenticate:
-   ```bash
-   opencode auth login
-   # Select "google" provider
-   ```
-
-3. Start using:
-   ```bash
-   opencode
-   ```
-
-## How It Works
-
-The installer automatically:
-- Adds `opencode-antigravity-auth@latest` plugin
-- Configures Google provider with all Antigravity and Gemini CLI models
-- Sets up Antigravity-focused agent mapping presets
-
-## Models Available
-
-### Antigravity Models (via Google Infrastructure)
-
-1. **antigravity-gemini-3.1-pro**
-   - Name: Gemini 3.1 Pro (Antigravity)
-   - Context: 1M tokens, Output: 65K tokens
-   - Variants: low, high thinking levels
-   - Best for: Complex reasoning, high-quality outputs
-
-2. **antigravity-gemini-3-flash**
-   - Name: Gemini 3 Flash (Antigravity)
-   - Context: 1M tokens, Output: 65K tokens
-   - Variants: minimal, low, medium, high thinking levels
-   - Best for: Fast responses, efficient agent tasks
-
-3. **antigravity-claude-sonnet-4-5**
-   - Name: Claude Sonnet 4.5 (Antigravity)
-   - Context: 200K tokens, Output: 64K tokens
-   - Best for: Balanced performance
-
-4. **antigravity-claude-sonnet-4-5-thinking**
-   - Name: Claude Sonnet 4.5 Thinking (Antigravity)
-   - Context: 200K tokens, Output: 64K tokens
-   - Variants: low (8K budget), max (32K budget)
-   - Best for: Deep reasoning tasks
-
-5. **antigravity-claude-opus-4-5-thinking**
-   - Name: Claude Opus 4.5 Thinking (Antigravity)
-   - Context: 200K tokens, Output: 64K tokens
-   - Variants: low (8K budget), max (32K budget)
-   - Best for: Most complex reasoning
-
-### Gemini CLI Models (Fallback)
-
-6. **gemini-2.5-flash**
-   - Name: Gemini 2.5 Flash (Gemini CLI)
-   - Context: 1M tokens, Output: 65K tokens
-   - Requires: Gemini CLI authentication
-
-7. **gemini-2.5-pro**
-   - Name: Gemini 2.5 Pro (Gemini CLI)
-   - Context: 1M tokens, Output: 65K tokens
-   - Requires: Gemini CLI authentication
-
-8. **gemini-3-flash-preview**
-   - Name: Gemini 3 Flash Preview (Gemini CLI)
-   - Context: 1M tokens, Output: 65K tokens
-   - Requires: Gemini CLI authentication
-
-9. **gemini-3.1-pro-preview**
-   - Name: Gemini 3.1 Pro Preview (Gemini CLI)
-   - Context: 1M tokens, Output: 65K tokens
-   - Requires: Gemini CLI authentication
-
-## Agent Configuration
-
-When you install with `--antigravity=yes`, the preset depends on other providers:
-
-### antigravity-mixed-both (Kimi + OpenAI + Antigravity)
-- **Orchestrator**: Kimi k2p5
-- **Oracle**: OpenAI model
-- **Explorer/Librarian/Designer/Fixer**: Gemini 3 Flash (Antigravity)
-
-### antigravity-mixed-kimi (Kimi + Antigravity)
-- **Orchestrator**: Kimi k2p5
-- **Oracle**: Gemini 3.1 Pro (Antigravity)
-- **Explorer/Librarian/Designer/Fixer**: Gemini 3 Flash (Antigravity)
-
-### antigravity-mixed-openai (OpenAI + Antigravity)
-- **Orchestrator**: Gemini 3 Flash (Antigravity)
-- **Oracle**: OpenAI model
-- **Explorer/Librarian/Designer/Fixer**: Gemini 3 Flash (Antigravity)
-
-### antigravity (Pure Antigravity)
-- **Orchestrator**: Gemini 3 Flash (Antigravity)
-- **Oracle**: Gemini 3.1 Pro (Antigravity)
-- **Explorer/Librarian/Designer/Fixer**: Gemini 3 Flash (Antigravity)
-
-## Manual Configuration
-
-If you prefer to configure manually, edit `~/.config/opencode/oh-my-opencode-slim.json` (or `.jsonc`) and add a pure Antigravity preset:
-
-```json
-{
-  "preset": "antigravity",
-  "presets": {
-    "antigravity": {
-      "orchestrator": {
-        "model": "google/antigravity-gemini-3-flash",
-        "skills": ["*"],
-        "mcps": ["websearch"]
-      },
-      "oracle": {
-        "model": "google/antigravity-gemini-3.1-pro",
-        "skills": [],
-        "mcps": []
-      },
-      "explorer": {
-        "model": "google/antigravity-gemini-3-flash",
-        "variant": "low",
-        "skills": [],
-        "mcps": []
-      },
-      "librarian": {
-        "model": "google/antigravity-gemini-3-flash",
-        "variant": "low",
-        "skills": [],
-        "mcps": ["websearch", "context7", "grep_app"]
-      },
-      "designer": {
-        "model": "google/antigravity-gemini-3-flash",
-        "variant": "medium",
-        "skills": ["agent-browser"],
-        "mcps": []
-      },
-      "fixer": {
-        "model": "google/antigravity-gemini-3-flash",
-        "variant": "low",
-        "skills": [],
-        "mcps": []
-      }
-    }
-  }
-}
-```
-
-## Troubleshooting
-
-### Authentication Failed
-```bash
-# Ensure Antigravity service is running
-# Check service status
-curl http://127.0.0.1:8317/health
-
-# Re-authenticate
-opencode auth login
-```
-
-### Models Not Available
-```bash
-# Verify plugin is installed
-cat ~/.config/opencode/opencode.json | grep antigravity
-
-# Reinstall plugin
-bunx oh-my-opencode-slim install --antigravity=yes --no-tui --kimi=no --openai=no --tmux=no --skills=no
-```
-
-### Wrong Model Selected
-```bash
-# Check current preset
-echo $OH_MY_OPENCODE_SLIM_PRESET
-
-# Change preset
-export OH_MY_OPENCODE_SLIM_PRESET=antigravity
-opencode
-```
-
-### Service Connection Issues
-```bash
-# Check if Antigravity service is running on correct port
-lsof -i :8317
-
-# Restart the service
-# (Follow your Antigravity/LLM-Mux restart procedure)
-# Or edit ~/.config/opencode/oh-my-opencode-slim.json (or .jsonc)
-# Change the "preset" field and restart OpenCode
-```
-
-## Notes
-
-- **Terms of Service**: Using Antigravity may violate Google's ToS. Use at your own risk.
-- **Performance**: Antigravity models typically have lower latency than direct API calls
-- **Fallback**: Gemini CLI models require separate authentication but work as fallback
-- **Customization**: You can mix and match any models across agents by editing the config

docs/provider-combination-matrix.md (+0 -180)

@@ -1,180 +0,0 @@
-# Provider Combination Test Matrix (2 to 6 Active)
-
-This matrix tests 5 combinations across the 8 provider toggles in this project:
-
-- `openai`
-- `anthropic`
-- `github-copilot`
-- `zai-coding-plan`
-- `kimi-for-coding`
-- `google` (Antigravity/Gemini)
-- `chutes`
-- `opencode` free (`useOpenCodeFreeModels`)
-
-## How this was determined
-
-I generated outputs directly from `generateLiteConfig` in `src/cli/providers.ts` using fixed deterministic inputs:
-
-- `selectedOpenCodePrimaryModel = opencode/glm-4.7-free`
-- `selectedOpenCodeSecondaryModel = opencode/gpt-5-nano`
-- `selectedChutesPrimaryModel = chutes/kimi-k2.5`
-- `selectedChutesSecondaryModel = chutes/minimax-m2.1`
-
-This represents the config output shape written by the installer when those selected models are available.
-
-## Scenario S1 - 2 providers
-
-Active providers: OpenAI + OpenCode Free
-
-- Preset: `openai`
-- Agents:
-  - `orchestrator`: `openai/gpt-5.3-codex`
-  - `oracle`: `openai/gpt-5.3-codex` (`high`)
-  - `designer`: `openai/gpt-5.1-codex-mini` (`medium`)
-  - `explorer`: `opencode/gpt-5-nano`
-  - `librarian`: `opencode/gpt-5-nano`
-  - `fixer`: `opencode/gpt-5-nano`
-- Fallback chains:
-  - `orchestrator`: `openai/gpt-5.3-codex -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `oracle`: `openai/gpt-5.3-codex -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `designer`: `openai/gpt-5.1-codex-mini -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `explorer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> opencode/big-pickle`
-  - `librarian`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> opencode/big-pickle`
-  - `fixer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> opencode/big-pickle`
-
-## Scenario S2 - 3 providers
-
-Active providers: OpenAI + Chutes + OpenCode Free
-
-- Preset: `openai`
-- Agents:
-  - `orchestrator`: `openai/gpt-5.3-codex`
-  - `oracle`: `openai/gpt-5.3-codex` (`high`)
-  - `designer`: `openai/gpt-5.1-codex-mini` (`medium`)
-  - `explorer`: `opencode/gpt-5-nano`
-  - `librarian`: `opencode/gpt-5-nano`
-  - `fixer`: `opencode/gpt-5-nano`
-- Fallback chains:
-  - `orchestrator`: `openai/gpt-5.3-codex -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `oracle`: `openai/gpt-5.3-codex -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `designer`: `openai/gpt-5.1-codex-mini -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `explorer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> chutes/minimax-m2.1 -> opencode/big-pickle`
-  - `librarian`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> chutes/minimax-m2.1 -> opencode/big-pickle`
-  - `fixer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> chutes/minimax-m2.1 -> opencode/big-pickle`
-
-## Scenario S3 - 4 providers
-
-Active providers: OpenAI + Copilot + ZAI Plan + OpenCode Free
-
-- Preset: `openai`
-- Agents:
-  - `orchestrator`: `openai/gpt-5.3-codex`
-  - `oracle`: `openai/gpt-5.3-codex` (`high`)
-  - `designer`: `openai/gpt-5.1-codex-mini` (`medium`)
-  - `explorer`: `opencode/gpt-5-nano`
-  - `librarian`: `opencode/gpt-5-nano`
-  - `fixer`: `opencode/gpt-5-nano`
-- Fallback chains:
-  - `orchestrator`: `openai/gpt-5.3-codex -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `oracle`: `openai/gpt-5.3-codex -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `designer`: `openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `explorer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> opencode/big-pickle`
-  - `librarian`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> opencode/big-pickle`
-  - `fixer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> opencode/big-pickle`
-
-## Scenario S4 - 5 providers
-
-Active providers: OpenAI + Gemini + Chutes + Copilot + OpenCode Free
-
-- Preset: `antigravity-mixed-openai`
-- Agents:
-  - `orchestrator`: `chutes/kimi-k2.5`
-  - `oracle`: `google/antigravity-gemini-3.1-pro` (`high`)
-  - `designer`: `chutes/kimi-k2.5` (`medium`)
-  - `explorer`: `opencode/gpt-5-nano`
-  - `librarian`: `opencode/gpt-5-nano`
-  - `fixer`: `opencode/gpt-5-nano`
-- Fallback chains:
-  - `orchestrator`: `chutes/kimi-k2.5 -> openai/gpt-5.3-codex -> github-copilot/grok-code-fast-1 -> google/antigravity-gemini-3-flash -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `oracle`: `google/antigravity-gemini-3.1-pro -> openai/gpt-5.3-codex -> github-copilot/grok-code-fast-1 -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `designer`: `chutes/kimi-k2.5 -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> google/antigravity-gemini-3-flash -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `explorer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> google/antigravity-gemini-3-flash -> chutes/minimax-m2.1 -> opencode/big-pickle`
-  - `librarian`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> google/antigravity-gemini-3-flash -> chutes/minimax-m2.1 -> opencode/big-pickle`
-  - `fixer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> github-copilot/grok-code-fast-1 -> google/antigravity-gemini-3-flash -> chutes/minimax-m2.1 -> opencode/big-pickle`
-
-## Scenario S5 - 6 providers
-
-Active providers: OpenAI + Anthropic + Copilot + ZAI Plan + Chutes + OpenCode Free
-
-- Preset: `openai`
-- Agents:
-  - `orchestrator`: `openai/gpt-5.3-codex`
-  - `oracle`: `openai/gpt-5.3-codex` (`high`)
-  - `designer`: `openai/gpt-5.1-codex-mini` (`medium`)
-  - `explorer`: `opencode/gpt-5-nano`
-  - `librarian`: `opencode/gpt-5-nano`
-  - `fixer`: `opencode/gpt-5-nano`
-- Fallback chains:
-  - `orchestrator`: `openai/gpt-5.3-codex -> anthropic/claude-opus-4-6 -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `oracle`: `openai/gpt-5.3-codex -> anthropic/claude-opus-4-6 -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `designer`: `openai/gpt-5.1-codex-mini -> anthropic/claude-sonnet-4-5 -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> chutes/kimi-k2.5 -> opencode/glm-4.7-free -> opencode/big-pickle`
-  - `explorer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> anthropic/claude-haiku-4-5 -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> chutes/minimax-m2.1 -> opencode/big-pickle`
-  - `librarian`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> anthropic/claude-sonnet-4-5 -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> chutes/minimax-m2.1 -> opencode/big-pickle`
-  - `fixer`: `opencode/gpt-5-nano -> openai/gpt-5.1-codex-mini -> anthropic/claude-sonnet-4-5 -> github-copilot/grok-code-fast-1 -> zai-coding-plan/glm-4.7 -> chutes/minimax-m2.1 -> opencode/big-pickle`
-
-## Dynamic scoring rerun (new compositions + 3 random)
-
-This rerun validates `buildDynamicModelPlan` using `scoringEngineVersion` in three modes:
-
-- `v1`
-- `v2-shadow` (applies V1 results, compares V2)
-- `v2`
-
-The exact assertions are captured in `src/cli/dynamic-model-selection-matrix.test.ts`.
-
-### C1 (curated)
-
-Active providers: OpenAI + Anthropic + Chutes + OpenCode Free
-
-- V1 agents: `oracle=openai/gpt-5.3-codex`, `orchestrator=openai/gpt-5.3-codex`, `fixer=openai/gpt-5.1-codex-mini`, `designer=chutes/kimi-k2.5`, `librarian=anthropic/claude-opus-4-6`, `explorer=anthropic/claude-haiku-4-5`
-- V2 agents: `oracle=openai/gpt-5.3-codex`, `orchestrator=openai/gpt-5.3-codex`, `fixer=openai/gpt-5.1-codex-mini`, `designer=anthropic/claude-opus-4-6`, `librarian=chutes/kimi-k2.5`, `explorer=anthropic/claude-haiku-4-5`
-
-### C2 (curated)
-
-Active providers: OpenAI + Copilot + ZAI Plan + Gemini + OpenCode Free
-
-- V1 agents: `oracle=google/antigravity-gemini-3.1-pro`, `orchestrator=openai/gpt-5.3-codex`, `fixer=openai/gpt-5.1-codex-mini`, `designer=google/antigravity-gemini-3.1-pro`, `librarian=zai-coding-plan/glm-4.7`, `explorer=github-copilot/grok-code-fast-1`
-- V2 agents: same as V1 for this composition
-
-### C3 (curated)
-
-Active providers: Kimi + Gemini + Chutes + OpenCode Free
-
-- V1 agents: `oracle=google/antigravity-gemini-3.1-pro`, `orchestrator=google/antigravity-gemini-3.1-pro`, `fixer=chutes/minimax-m2.1`, `designer=kimi-for-coding/k2p5`, `librarian=google/antigravity-gemini-3.1-pro`, `explorer=google/antigravity-gemini-3-flash`
-- V2 agents: same except `fixer=chutes/kimi-k2.5`
-
-### R1 (random)
-
-Active providers: Anthropic + Copilot + OpenCode Free
-
-- V1 agents: `oracle=anthropic/claude-opus-4-6`, `orchestrator=github-copilot/grok-code-fast-1`, `fixer=github-copilot/grok-code-fast-1`, `designer=anthropic/claude-opus-4-6`, `librarian=github-copilot/grok-code-fast-1`, `explorer=anthropic/claude-haiku-4-5`
-- V2 agents: same as V1 for this composition
-
-### R2 (random)
-
-Active providers: OpenAI + Kimi + ZAI Plan + Chutes + OpenCode Free
-
-- V1 agents: `oracle=openai/gpt-5.3-codex`, `orchestrator=openai/gpt-5.3-codex`, `fixer=chutes/minimax-m2.1`, `designer=zai-coding-plan/glm-4.7`, `librarian=kimi-for-coding/k2p5`, `explorer=chutes/minimax-m2.1`
-- V2 agents: `oracle=openai/gpt-5.3-codex`, `orchestrator=openai/gpt-5.3-codex`, `fixer=chutes/kimi-k2.5`, `designer=kimi-for-coding/k2p5`, `librarian=zai-coding-plan/glm-4.7`, `explorer=chutes/minimax-m2.1`
-
-### R3 (random)
-
-Active providers: Gemini + Anthropic + Chutes + OpenCode Free
-
-- V1 agents: `oracle=google/antigravity-gemini-3.1-pro`, `orchestrator=google/antigravity-gemini-3.1-pro`, `fixer=chutes/minimax-m2.1`, `designer=anthropic/claude-opus-4-6`, `librarian=google/antigravity-gemini-3.1-pro`, `explorer=google/antigravity-gemini-3-flash`
-- V2 agents: `oracle=google/antigravity-gemini-3.1-pro`, `orchestrator=google/antigravity-gemini-3.1-pro`, `fixer=anthropic/claude-opus-4-6`, `designer=chutes/kimi-k2.5`, `librarian=google/antigravity-gemini-3.1-pro`, `explorer=google/antigravity-gemini-3-flash`
-
-## Notes
-
-- This matrix shows deterministic `generateLiteConfig` output for the selected combinations.
-- If the dynamic planner is used during full install (live model catalog), the generated `dynamic` preset may differ based on discovered models and capabilities.

docs/provider-configurations.md (+135 -0)

@@ -0,0 +1,135 @@
+# Provider Configurations
+
+oh-my-opencode-slim uses **OpenAI** as the default provider. This document shows how to configure alternative providers by editing your plugin config file.
+
+## Config File Location
+
+Edit `~/.config/opencode/oh-my-opencode-slim.json` (or `.jsonc` for comments support).
+
+## Default: OpenAI
+
+The installer generates this configuration automatically:
+
+```json
+{
+  "preset": "openai",
+  "presets": {
+    "openai": {
+      "orchestrator": { "model": "openai/gpt-5.4", "variant": "high", "skills": ["*"], "mcps": ["websearch"] },
+      "oracle": { "model": "openai/gpt-5.4", "variant": "high", "skills": [], "mcps": [] },
+      "librarian": { "model": "openai/gpt-5-codex", "variant": "low", "skills": [], "mcps": ["websearch", "context7", "grep_app"] },
+      "explorer": { "model": "openai/gpt-5-codex", "variant": "low", "skills": [], "mcps": [] },
+      "designer": { "model": "openai/gpt-5-codex", "variant": "medium", "skills": ["agent-browser"], "mcps": [] },
+      "fixer": { "model": "openai/gpt-5-codex", "variant": "low", "skills": [], "mcps": [] }
+    }
+  }
+}
+```
+
+## Kimi For Coding
+
+To use Kimi, add a `kimi` preset and set it as active:
+
+```json
+{
+  "preset": "kimi",
+  "presets": {
+    "kimi": {
+      "orchestrator": { "model": "kimi-for-coding/k2p5", "variant": "high", "skills": ["*"], "mcps": ["websearch"] },
+      "oracle": { "model": "kimi-for-coding/k2p5", "variant": "high", "skills": [], "mcps": [] },
+      "librarian": { "model": "kimi-for-coding/k2p5", "variant": "low", "skills": [], "mcps": ["websearch", "context7", "grep_app"] },
+      "explorer": { "model": "kimi-for-coding/k2p5", "variant": "low", "skills": [], "mcps": [] },
+      "designer": { "model": "kimi-for-coding/k2p5", "variant": "medium", "skills": ["agent-browser"], "mcps": [] },
+      "fixer": { "model": "kimi-for-coding/k2p5", "variant": "low", "skills": [], "mcps": [] }
+    }
+  }
+}
+```
+
+Then authenticate:
+```bash
+opencode auth login
+# Select "Kimi For Coding" provider
+```
+
+## GitHub Copilot
+
+To use GitHub Copilot with Grok Code Fast:
+
+```json
+{
+  "preset": "copilot",
+  "presets": {
+    "copilot": {
+      "orchestrator": { "model": "github-copilot/claude-opus-4.6", "variant": "high", "skills": ["*"], "mcps": ["websearch"] },
+      "oracle": { "model": "github-copilot/claude-opus-4.6", "variant": "high", "skills": [], "mcps": [] },
+      "librarian": { "model": "github-copilot/grok-code-fast-1", "variant": "low", "skills": [], "mcps": ["websearch", "context7", "grep_app"] },
+      "explorer": { "model": "github-copilot/grok-code-fast-1", "variant": "low", "skills": [], "mcps": [] },
+      "designer": { "model": "github-copilot/gemini-3.1-pro-preview", "variant": "medium", "skills": ["agent-browser"], "mcps": [] },
+      "fixer": { "model": "github-copilot/claude-sonnet-4.6", "variant": "low", "skills": [], "mcps": [] }
+    }
+  }
+}
+```
+
+Then authenticate:
+```bash
+opencode auth login
+# Select "github-copilot" provider
+```
+
+## ZAI Coding Plan
+
+To use ZAI Coding Plan with GLM 5:
+
+```json
+{
+  "preset": "zai-plan",
+  "presets": {
+    "zai-plan": {
+      "orchestrator": { "model": "zai-coding-plan/glm-5", "variant": "high", "skills": ["*"], "mcps": ["websearch"] },
+      "oracle": { "model": "zai-coding-plan/glm-5", "variant": "high", "skills": [], "mcps": [] },
+      "librarian": { "model": "zai-coding-plan/glm-5", "variant": "low", "skills": [], "mcps": ["websearch", "context7", "grep_app"] },
+      "explorer": { "model": "zai-coding-plan/glm-5", "variant": "low", "skills": [], "mcps": [] },
+      "designer": { "model": "zai-coding-plan/glm-5", "variant": "medium", "skills": ["agent-browser"], "mcps": [] },
+      "fixer": { "model": "zai-coding-plan/glm-5", "variant": "low", "skills": [], "mcps": [] }
+    }
+  }
+}
+```
+
+Then authenticate:
+```bash
+opencode auth login
+# Select "zai-coding-plan" provider
+```
+
+## Mixing Providers
+
+You can mix models from different providers across agents. Create a custom preset:
+
+```json
+{
+  "preset": "my-mix",
+  "presets": {
+    "my-mix": {
+      "orchestrator": { "model": "openai/gpt-5.4", "skills": ["*"], "mcps": ["websearch"] },
+      "oracle": { "model": "openai/gpt-5.4", "variant": "high", "skills": [], "mcps": [] },
+      "librarian": { "model": "kimi-for-coding/k2p5", "variant": "low", "skills": [], "mcps": ["websearch", "context7", "grep_app"] },
+      "explorer": { "model": "github-copilot/grok-code-fast-1", "variant": "low", "skills": [], "mcps": [] },
+      "designer": { "model": "kimi-for-coding/k2p5", "variant": "medium", "skills": ["agent-browser"], "mcps": [] },
+      "fixer": { "model": "openai/gpt-5-codex", "variant": "low", "skills": [], "mcps": [] }
+    }
+  }
+}
+```
+
+## Switching Presets
+
+**Method 1: Edit the config file** — Change the `preset` field to match a key in your `presets` object.
+
+**Method 2: Environment variable** (takes precedence over config file):
+```bash
+export OH_MY_OPENCODE_SLIM_PRESET=my-mix
+opencode
+```

docs/quick-reference.md (+18 -94)

@@ -15,29 +15,7 @@ Complete reference for oh-my-opencode-slim configuration and capabilities.
 
 ## Presets
 
-Presets are pre-configured agent model mappings for different provider combinations. The installer generates these automatically based on your available providers, and you can switch between them instantly.
-
-### OpenCode Free Discovery
-
-The installer can discover the latest OpenCode free models by running:
-
-```bash
-opencode models --refresh --verbose
-```
-
-Selection rules:
-- Only free `opencode/*` models are considered.
-- A coding-first primary model is selected for orchestration/strategy workloads.
-- A support model is selected for research/implementation workloads.
-- OpenCode-only mode can assign multiple OpenCode models across agents.
-- Hybrid mode can combine OpenCode free models with OpenAI/Kimi/Antigravity; `designer` remains on the external provider mapping.
-
-Useful flags:
-
-```bash
---opencode-free=yes|no
---opencode-free-model=<id|auto>
-```
+The default installer generates an OpenAI preset. To use alternative providers (Kimi, GitHub Copilot, ZAI Coding Plan), see **[Provider Configurations](provider-configurations.md)** for step-by-step instructions and full config examples.
 
 ### Switching Presets
 
@@ -62,7 +40,7 @@ opencode
 
 The environment variable takes precedence over the config file.
 
-### OpenAI Preset
+### OpenAI Preset (Default)
 
 Uses OpenAI models exclusively:
 
@@ -71,70 +49,20 @@ Uses OpenAI models exclusively:
   "preset": "openai",
   "presets": {
     "openai": {
-      "orchestrator": { "model": "openai/gpt-5.2-codex", "skills": ["*"], "mcps": ["websearch"] },
-      "oracle": { "model": "openai/gpt-5.2-codex", "variant": "high", "skills": [], "mcps": [] },
-      "librarian": { "model": "openai/gpt-5.1-codex-mini", "variant": "low", "skills": [], "mcps": ["websearch", "context7", "grep_app"] },
-      "explorer": { "model": "openai/gpt-5.1-codex-mini", "variant": "low", "skills": [], "mcps": [] },
-      "designer": { "model": "openai/gpt-5.1-codex-mini", "variant": "medium", "skills": ["agent-browser"], "mcps": [] },
-      "fixer": { "model": "openai/gpt-5.1-codex-mini", "variant": "low", "skills": [], "mcps": [] }
+      "orchestrator": { "model": "openai/gpt-5.4", "skills": ["*"], "mcps": ["websearch"] },
+      "oracle": { "model": "openai/gpt-5.4", "variant": "high", "skills": [], "mcps": [] },
+      "librarian": { "model": "openai/gpt-5-codex", "variant": "low", "skills": [], "mcps": ["websearch", "context7", "grep_app"] },
+      "explorer": { "model": "openai/gpt-5-codex", "variant": "low", "skills": [], "mcps": [] },
+      "designer": { "model": "openai/gpt-5-codex", "variant": "medium", "skills": ["agent-browser"], "mcps": [] },
+      "fixer": { "model": "openai/gpt-5-codex", "variant": "low", "skills": [], "mcps": [] }
     }
   }
 }
 ```
 
-### Google Provider (Antigravity)
-
-Access Claude 4.5 and Gemini 3 models through Google's Antigravity infrastructure.
-
-**Installation:**
-```bash
-bunx oh-my-opencode-slim install --antigravity=yes --opencode-free=yes --opencode-free-model=auto
-```
-
-**Agent Mapping:**
-- Orchestrator: Kimi (if available)
-- Oracle: GPT (if available)
-- Explorer/Librarian/Designer/Fixer: Gemini 3 Flash via Antigravity
-- If OpenCode free mode is enabled, Explorer/Librarian/Fixer may use selected free `opencode/*` support model while `designer` stays on external mapping
+### Other Providers
 
-**Authentication:**
-```bash
-opencode auth login
-# Select "google" provider
-```
-
-**Available Models:**
-- `google/antigravity-gemini-3-flash`
-- `google/antigravity-gemini-3.1-pro`
-- `google/antigravity-claude-sonnet-4-5`
-- `google/antigravity-claude-sonnet-4-5-thinking`
-- `google/antigravity-claude-opus-4-5-thinking`
-- `google/gemini-2.5-flash` (Gemini CLI)
-- `google/gemini-2.5-pro` (Gemini CLI)
-- `google/gemini-3-flash-preview` (Gemini CLI)
-- `google/gemini-3.1-pro-preview` (Gemini CLI)
-
-### Author's Preset
-
-Mixed setup combining multiple providers:
-
-```json
-{
-  "preset": "alvin",
-  "presets": {
-    "alvin": {
-      "orchestrator": { "model": "google/claude-opus-4-5-thinking", "skills": ["*"], "mcps": ["*"] },
-      "oracle": { "model": "openai/gpt-5.2-codex", "variant": "high", "skills": [], "mcps": [] },
-      "librarian": { "model": "google/gemini-3-flash", "variant": "low", "skills": [], "mcps": ["websearch", "context7", "grep_app"] },
-      "explorer": { "model": "cerebras/zai-glm-4.7", "variant": "low", "skills": [], "mcps": [] },
-      "designer": { "model": "google/gemini-3-flash", "variant": "medium", "skills": ["agent-browser"], "mcps": [] },
-      "fixer": { "model": "cerebras/zai-glm-4.7", "variant": "low", "skills": [], "mcps": [] }
-    }
-  }
-}
-```
-
-> **Antigravity Provider:** For complete Antigravity setup guide, see [Antigravity Setup](antigravity.md)
+For Kimi, GitHub Copilot, and ZAI Coding Plan presets, see **[Provider Configurations](provider-configurations.md)**.
 
 ---
 
@@ -326,8 +254,6 @@ You can disable specific MCP servers globally by adding them to the `disabled_mc
 
 ### Tmux Integration
 
-> ⚠️ **Temporary workaround:** Start OpenCode with `--port` to enable tmux integration. The port must match the `OPENCODE_PORT` environment variable (default: 4096). This is required until the upstream issue is resolved. [opencode#9099](https://github.com/anomalyco/opencode/issues/9099).
-
 **Watch your agents work in real-time.** When the Orchestrator launches sub-agents or initiates background tasks, new tmux panes automatically spawn showing each agent's live progress. No more waiting in the dark.
 
 #### Quick Setup
@@ -347,7 +273,7 @@ You can disable specific MCP servers globally by adding them to the `disabled_mc
 2. **Run OpenCode inside tmux**:
     ```bash
     tmux
-    opencode --port 4096
+    opencode
     ```
 
 #### Layout Options
@@ -468,14 +394,14 @@ The plugin supports **JSONC** format for configuration files, allowing you to:
 ```jsonc
 {
   // Use preset for development
-  "preset": "dev",
+  "preset": "openai",
 
   /* Presets definition - customize agent models here */
   "presets": {
-    "dev": {
+    "openai": {
       // Fast models for quick iteration
-      "oracle": { "model": "google/gemini-3-flash" },
-      "explorer": { "model": "google/gemini-3-flash" },
+      "oracle": { "model": "openai/gpt-5.4" },
+      "explorer": { "model": "openai/gpt-5-codex" },
     },
   },
 
@@ -488,15 +414,15 @@ The plugin supports **JSONC** format for configuration files, allowing you to:
 
 ### Plugin Config (`oh-my-opencode-slim.json` or `oh-my-opencode-slim.jsonc`)
 
-The installer generates this file based on your providers. You can manually customize it to mix and match models. See the [Presets](#presets) section for detailed configuration options.
+The installer generates this file with the OpenAI preset by default. You can manually customize it to mix and match models from any provider. See [Provider Configurations](provider-configurations.md) for examples.
 
 #### Option Reference
 
 | Option | Type | Default | Description |
 |--------|------|---------|-------------|
-| `preset` | string | - | Name of the preset to use (e.g., `"openai"`, `"antigravity"`) |
+| `preset` | string | - | Name of the preset to use (e.g., `"openai"`, `"kimi"`) |
 | `presets` | object | - | Named preset configurations containing agent mappings |
-| `presets.<name>.<agent>.model` | string | - | Model ID for the agent (e.g., `"google/claude-opus-4-5-thinking"`) |
+| `presets.<name>.<agent>.model` | string | - | Model ID for the agent (e.g., `"openai/gpt-5.4"`) |
 | `presets.<name>.<agent>.temperature` | number | - | Temperature setting (0-2) for the agent |
 | `presets.<name>.<agent>.variant` | string | - | Agent variant for reasoning effort (e.g., `"low"`, `"medium"`, `"high"`) |
 | `presets.<name>.<agent>.skills` | string[] | - | Array of skill names the agent can use (`"*"` for all, `"!item"` to exclude) |
@@ -505,5 +431,3 @@ The installer generates this file based on your providers. You can manually cust
 | `tmux.layout` | string | `"main-vertical"` | Layout preset: `main-vertical`, `main-horizontal`, `tiled`, `even-horizontal`, `even-vertical` |
 | `tmux.main_pane_size` | number | `60` | Main pane size as percentage (20-80) |
 | `disabled_mcps` | string[] | `[]` | MCP server IDs to disable globally (e.g., `"websearch"`) |
-
-> **Note:** Agent configuration should be defined within `presets`. The root-level `agents` field is deprecated.

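The option-reference table updated above can be combined into one minimal plugin config. This is a hedged sketch assembled from the documented keys, not a verbatim file from the repo (the model IDs are taken from the default OpenAI preset):

```jsonc
{
  // Which named preset to activate
  "preset": "openai",
  "presets": {
    "openai": {
      // One agent mapping shown; the installer generates all six
      "orchestrator": { "model": "openai/gpt-5.4", "skills": ["*"], "mcps": ["websearch"] }
    }
  },
  // tmux.* options from the table, with their documented defaults
  "tmux": { "enabled": true, "layout": "main-vertical", "main_pane_size": 60 },
  // MCP server IDs to disable globally
  "disabled_mcps": []
}
```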
+ 3 - 3
src/background/background-manager.test.ts

@@ -497,7 +497,7 @@ describe('BackgroundTaskManager', () => {
           const modelRef = args.body?.model;
           if (
             modelRef?.providerID === 'openai' &&
-            modelRef?.modelID === 'gpt-5.2-codex'
+            modelRef?.modelID === 'gpt-5.4'
           ) {
             throw new Error('primary failed');
           }
@@ -510,7 +510,7 @@ describe('BackgroundTaskManager', () => {
           enabled: true,
           timeoutMs: 15000,
           chains: {
-            explorer: ['openai/gpt-5.2-codex', 'opencode/gpt-5-nano'],
+            explorer: ['openai/gpt-5.4', 'opencode/gpt-5-nano'],
           },
         },
       });
@@ -547,7 +547,7 @@ describe('BackgroundTaskManager', () => {
           enabled: true,
           timeoutMs: 15000,
           chains: {
-            explorer: ['openai/gpt-5.2-codex', 'opencode/gpt-5-nano'],
+            explorer: ['openai/gpt-5.4', 'opencode/gpt-5-nano'],
           },
         },
       });

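The `chains` option exercised in the test above maps an agent to an ordered fallback list of model IDs. A minimal sketch of how such a chain might be consumed — the helper name and task signature are hypothetical, not taken from the plugin source:

```typescript
// Hypothetical sketch: walk a fallback chain like
// ['openai/gpt-5.4', 'opencode/gpt-5-nano'], trying each model
// until one run succeeds, then return that run's result.
type TaskFn = (model: string) => Promise<string>;

async function runWithFallback(
  chain: string[],
  run: TaskFn,
): Promise<string> {
  let lastError: unknown;
  for (const model of chain) {
    try {
      return await run(model);
    } catch (err) {
      lastError = err; // remember the failure, advance to the next model
    }
  }
  throw lastError ?? new Error('empty fallback chain');
}
```

This mirrors the test scenario: when the primary `openai/gpt-5.4` run throws, the chain advances to `opencode/gpt-5-nano`.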
+ 0 - 69
src/cli/chutes-selection.test.ts

@@ -1,69 +0,0 @@
-/// <reference types="bun-types" />
-
-import { describe, expect, test } from 'bun:test';
-import {
-  pickBestCodingChutesModel,
-  pickSupportChutesModel,
-} from './chutes-selection';
-import type { OpenCodeFreeModel } from './types';
-
-function model(input: Partial<OpenCodeFreeModel>): OpenCodeFreeModel {
-  return {
-    providerID: 'chutes',
-    model: input.model ?? 'chutes/unknown',
-    name: input.name ?? input.model ?? 'unknown',
-    status: input.status ?? 'active',
-    contextLimit: input.contextLimit ?? 128000,
-    outputLimit: input.outputLimit ?? 16000,
-    reasoning: input.reasoning ?? false,
-    toolcall: input.toolcall ?? false,
-    attachment: input.attachment ?? false,
-    dailyRequestLimit: input.dailyRequestLimit,
-  };
-}
-
-describe('chutes-selection', () => {
-  test('prefers reasoning model for primary role', () => {
-    const models = [
-      model({
-        model: 'chutes/minimax-m2.1',
-        reasoning: true,
-        toolcall: true,
-        contextLimit: 512000,
-        outputLimit: 64000,
-        dailyRequestLimit: 300,
-      }),
-      model({
-        model: 'chutes/gpt-oss-20b-mini',
-        reasoning: false,
-        toolcall: true,
-        dailyRequestLimit: 5000,
-      }),
-    ];
-
-    expect(pickBestCodingChutesModel(models)?.model).toBe(
-      'chutes/minimax-m2.1',
-    );
-  });
-
-  test('prefers high-cap fast model for support role', () => {
-    const models = [
-      model({
-        model: 'chutes/kimi-k2.5',
-        reasoning: true,
-        toolcall: true,
-        dailyRequestLimit: 300,
-      }),
-      model({
-        model: 'chutes/qwen3-coder-30b-mini',
-        reasoning: true,
-        toolcall: true,
-        dailyRequestLimit: 5000,
-      }),
-    ];
-
-    expect(pickSupportChutesModel(models, 'chutes/kimi-k2.5')?.model).toBe(
-      'chutes/qwen3-coder-30b-mini',
-    );
-  });
-});

+ 0 - 64
src/cli/chutes-selection.ts

@@ -1,64 +0,0 @@
-import {
-  pickBestModel,
-  pickPrimaryAndSupport,
-  type ScoreFunction,
-} from './model-selection';
-import type { OpenCodeFreeModel } from './types';
-
-function speedBonus(modelName: string): number {
-  const lower = modelName.toLowerCase();
-  let score = 0;
-  if (lower.includes('nano')) score += 60;
-  if (lower.includes('flash')) score += 45;
-  if (lower.includes('mini')) score += 30;
-  if (lower.includes('lite')) score += 20;
-  if (lower.includes('small')) score += 15;
-  return score;
-}
-
-const scoreChutesPrimaryForCoding: ScoreFunction<OpenCodeFreeModel> = (
-  model,
-) => {
-  return (
-    (model.reasoning ? 120 : 0) +
-    (model.toolcall ? 80 : 0) +
-    (model.attachment ? 20 : 0) +
-    Math.min(model.contextLimit, 1_000_000) / 9_000 +
-    Math.min(model.outputLimit, 300_000) / 10_000 +
-    (model.status === 'active' ? 10 : 0)
-  );
-};
-
-const scoreChutesSupportForCoding: ScoreFunction<OpenCodeFreeModel> = (
-  model,
-) => {
-  return (
-    (model.toolcall ? 90 : 0) +
-    (model.reasoning ? 35 : 0) +
-    speedBonus(model.model) +
-    Math.min(model.contextLimit, 400_000) / 20_000 +
-    (model.status === 'active' ? 8 : 0)
-  );
-};
-
-export function pickBestCodingChutesModel(
-  models: OpenCodeFreeModel[],
-): OpenCodeFreeModel | null {
-  return pickBestModel(models, scoreChutesPrimaryForCoding);
-}
-
-export function pickSupportChutesModel(
-  models: OpenCodeFreeModel[],
-  primaryModel?: string,
-): OpenCodeFreeModel | null {
-  const { support } = pickPrimaryAndSupport(
-    models,
-    {
-      primary: scoreChutesPrimaryForCoding,
-      support: scoreChutesSupportForCoding,
-    },
-    primaryModel,
-  );
-
-  return support;
-}

+ 3 - 40
src/cli/config-io.test.ts

@@ -11,7 +11,6 @@ import {
 import { tmpdir } from 'node:os';
 import { join } from 'node:path';
 import {
-  addChutesProvider,
   addPluginToOpenCodeConfig,
   detectCurrentConfig,
   disableDefaultAgents,
@@ -120,15 +119,11 @@ describe('config-io', () => {
     expect(saved.plugin.length).toBe(2);
   });
 
-  test('writeLiteConfig writes lite config', () => {
+  test('writeLiteConfig writes lite config with OpenAI preset', () => {
     const litePath = join(tmpDir, 'opencode', 'oh-my-opencode-slim.json');
     paths.ensureConfigDir();
 
     const result = writeLiteConfig({
-      hasKimi: true,
-      hasOpenAI: false,
-      hasAntigravity: false,
-      hasOpencodeZen: false,
       hasTmux: true,
       installSkills: false,
       installCustomSkills: false,
@@ -136,8 +131,8 @@ describe('config-io', () => {
     expect(result.success).toBe(true);
 
     const saved = JSON.parse(readFileSync(litePath, 'utf-8'));
-    expect(saved.preset).toBe('kimi');
-    expect(saved.presets.kimi).toBeDefined();
+    expect(saved.preset).toBe('openai');
+    expect(saved.presets.openai).toBeDefined();
     expect(saved.tmux.enabled).toBe(true);
   });
 
@@ -195,36 +190,4 @@ describe('config-io', () => {
     expect(detected.hasZaiPlan).toBe(true);
     expect(detected.hasTmux).toBe(true);
   });
-
-  test('addChutesProvider keeps OpenCode auth-based chutes flow intact', () => {
-    const configPath = join(tmpDir, 'opencode', 'opencode.json');
-    const litePath = join(tmpDir, 'opencode', 'oh-my-opencode-slim.json');
-    paths.ensureConfigDir();
-
-    writeFileSync(
-      configPath,
-      JSON.stringify({ plugin: ['oh-my-opencode-slim'] }),
-    );
-    writeFileSync(
-      litePath,
-      JSON.stringify({
-        preset: 'chutes',
-        presets: {
-          chutes: {
-            orchestrator: { model: 'chutes/kimi-k2.5' },
-          },
-        },
-      }),
-    );
-
-    const result = addChutesProvider();
-    expect(result.success).toBe(true);
-
-    const saved = JSON.parse(readFileSync(configPath, 'utf-8'));
-    expect(saved.plugin).toContain('oh-my-opencode-slim');
-    expect(saved.provider).toBeUndefined();
-
-    const detected = detectCurrentConfig();
-    expect(detected.hasChutes).toBe(true);
-  });
 });

+ 1 - 158
src/cli/config-io.ts

@@ -215,164 +215,7 @@ export function canModifyOpenCodeConfig(): boolean {
   }
 }
 
-export function addAntigravityPlugin(): ConfigMergeResult {
-  const configPath = getExistingConfigPath();
-  try {
-    const { config: parsedConfig, error } = parseConfig(configPath);
-    if (error) {
-      return {
-        success: false,
-        configPath,
-        error: `Failed to parse config: ${error}`,
-      };
-    }
-    const config = parsedConfig ?? {};
-    const plugins = config.plugin ?? [];
-
-    const pluginName = 'opencode-antigravity-auth@latest';
-    if (!plugins.includes(pluginName)) {
-      plugins.push(pluginName);
-    }
-    config.plugin = plugins;
-
-    writeConfig(configPath, config);
-    return { success: true, configPath };
-  } catch (err) {
-    return {
-      success: false,
-      configPath,
-      error: `Failed to add antigravity plugin: ${err}`,
-    };
-  }
-}
-
-export function addGoogleProvider(): ConfigMergeResult {
-  const configPath = getExistingConfigPath();
-  try {
-    const { config: parsedConfig, error } = parseConfig(configPath);
-    if (error) {
-      return {
-        success: false,
-        configPath,
-        error: `Failed to parse config: ${error}`,
-      };
-    }
-    const config = parsedConfig ?? {};
-    const providers = (config.provider ?? {}) as Record<string, unknown>;
-
-    providers.google = {
-      models: {
-        'antigravity-gemini-3.1-pro': {
-          name: 'Gemini 3.1 Pro (Antigravity)',
-          limit: { context: 1048576, output: 65535 },
-          modalities: { input: ['text', 'image', 'pdf'], output: ['text'] },
-          variants: {
-            low: { thinkingLevel: 'low' },
-            high: { thinkingLevel: 'high' },
-          },
-        },
-        'antigravity-gemini-3-flash': {
-          name: 'Gemini 3 Flash (Antigravity)',
-          limit: { context: 1048576, output: 65536 },
-          modalities: { input: ['text', 'image', 'pdf'], output: ['text'] },
-          variants: {
-            minimal: { thinkingLevel: 'minimal' },
-            low: { thinkingLevel: 'low' },
-            medium: { thinkingLevel: 'medium' },
-            high: { thinkingLevel: 'high' },
-          },
-        },
-        'antigravity-claude-sonnet-4-5': {
-          name: 'Claude Sonnet 4.5 (Antigravity)',
-          limit: { context: 200000, output: 64000 },
-          modalities: { input: ['text', 'image', 'pdf'], output: ['text'] },
-        },
-        'antigravity-claude-sonnet-4-5-thinking': {
-          name: 'Claude Sonnet 4.5 Thinking (Antigravity)',
-          limit: { context: 200000, output: 64000 },
-          modalities: { input: ['text', 'image', 'pdf'], output: ['text'] },
-          variants: {
-            low: { thinkingConfig: { thinkingBudget: 8192 } },
-            max: { thinkingConfig: { thinkingBudget: 32768 } },
-          },
-        },
-        'antigravity-claude-opus-4-5-thinking': {
-          name: 'Claude Opus 4.5 Thinking (Antigravity)',
-          limit: { context: 200000, output: 64000 },
-          modalities: { input: ['text', 'image', 'pdf'], output: ['text'] },
-          variants: {
-            low: { thinkingConfig: { thinkingBudget: 8192 } },
-            max: { thinkingConfig: { thinkingBudget: 32768 } },
-          },
-        },
-        'gemini-2.5-flash': {
-          name: 'Gemini 2.5 Flash (Gemini CLI)',
-          limit: { context: 1048576, output: 65536 },
-          modalities: { input: ['text', 'image', 'pdf'], output: ['text'] },
-        },
-        'gemini-2.5-pro': {
-          name: 'Gemini 2.5 Pro (Gemini CLI)',
-          limit: { context: 1048576, output: 65536 },
-          modalities: { input: ['text', 'image', 'pdf'], output: ['text'] },
-        },
-        'gemini-3-flash-preview': {
-          name: 'Gemini 3 Flash Preview (Gemini CLI)',
-          limit: { context: 1048576, output: 65536 },
-          modalities: { input: ['text', 'image', 'pdf'], output: ['text'] },
-        },
-        'gemini-3.1-pro-preview': {
-          name: 'Gemini 3.1 Pro Preview (Gemini CLI)',
-          limit: { context: 1048576, output: 65535 },
-          modalities: { input: ['text', 'image', 'pdf'], output: ['text'] },
-        },
-      },
-    };
-    config.provider = providers;
-
-    writeConfig(configPath, config);
-    return { success: true, configPath };
-  } catch (err) {
-    return {
-      success: false,
-      configPath,
-      error: `Failed to add google provider: ${err}`,
-    };
-  }
-}
-
-export function addChutesProvider(): ConfigMergeResult {
-  const configPath = getExistingConfigPath();
-  try {
-    // Chutes now follows the OpenCode auth flow (same as other providers).
-    // Keep this step as a no-op success for backward-compatible install output.
-    const { error } = parseConfig(configPath);
-    if (error) {
-      return {
-        success: false,
-        configPath,
-        error: `Failed to parse config: ${error}`,
-      };
-    }
-    return { success: true, configPath };
-  } catch (err) {
-    return {
-      success: false,
-      configPath,
-      error: `Failed to validate chutes provider config: ${err}`,
-    };
-  }
-}
-
-export function detectAntigravityConfig(): boolean {
-  const { config } = parseConfig(getExistingConfigPath());
-  if (!config) return false;
-
-  const providers = config.provider as Record<string, unknown> | undefined;
-  if (providers?.google) return true;
-
-  const plugins = config.plugin ?? [];
-  return plugins.some((p) => p.startsWith('opencode-antigravity-auth'));
-}
+// Antigravity, Google provider, and Chutes provider functions were removed in the installer simplification refactor.
 
 export function detectCurrentConfig(): DetectedConfig {
   const result: DetectedConfig = {

+ 0 - 8
src/cli/config-manager.ts

@@ -1,12 +1,4 @@
-export * from './chutes-selection';
 export * from './config-io';
-export * from './dynamic-model-selection';
-export * from './external-rankings';
-export * from './model-selection';
-export * from './opencode-models';
-export * from './opencode-selection';
 export * from './paths';
-export * from './precedence-resolver';
 export * from './providers';
-export * from './scoring-v2';
 export * from './system';

+ 0 - 261
src/cli/dynamic-model-selection-matrix.test.ts

@@ -1,261 +0,0 @@
-/// <reference types="bun-types" />
-
-import { describe, expect, test } from 'bun:test';
-import { buildDynamicModelPlan } from './dynamic-model-selection';
-import type { DiscoveredModel, InstallConfig } from './types';
-
-function m(
-  input: Partial<DiscoveredModel> & { model: string },
-): DiscoveredModel {
-  const [providerID] = input.model.split('/');
-  return {
-    providerID: providerID ?? 'openai',
-    model: input.model,
-    name: input.name ?? input.model,
-    status: input.status ?? 'active',
-    contextLimit: input.contextLimit ?? 200000,
-    outputLimit: input.outputLimit ?? 32000,
-    reasoning: input.reasoning ?? true,
-    toolcall: input.toolcall ?? true,
-    attachment: input.attachment ?? false,
-    dailyRequestLimit: input.dailyRequestLimit,
-    costInput: input.costInput,
-    costOutput: input.costOutput,
-  };
-}
-
-const CATALOG: DiscoveredModel[] = [
-  m({ model: 'openai/gpt-5.3-codex' }),
-  m({ model: 'openai/gpt-5.1-codex-mini' }),
-  m({ model: 'anthropic/claude-opus-4-6' }),
-  m({ model: 'anthropic/claude-sonnet-4-5' }),
-  m({ model: 'anthropic/claude-haiku-4-5', reasoning: false }),
-  m({ model: 'github-copilot/grok-code-fast-1' }),
-  m({ model: 'zai-coding-plan/glm-4.7' }),
-  m({ model: 'google/antigravity-gemini-3.1-pro' }),
-  m({ model: 'google/antigravity-gemini-3-flash' }),
-  m({ model: 'chutes/kimi-k2.5' }),
-  m({ model: 'chutes/minimax-m2.1' }),
-  m({ model: 'kimi-for-coding/k2p5' }),
-  m({ model: 'opencode/glm-4.7-free' }),
-  m({ model: 'opencode/gpt-5-nano' }),
-  m({ model: 'opencode/big-pickle' }),
-];
-
-function baseConfig(): InstallConfig {
-  return {
-    hasKimi: false,
-    hasOpenAI: false,
-    hasAnthropic: false,
-    hasCopilot: false,
-    hasZaiPlan: false,
-    hasAntigravity: false,
-    hasChutes: false,
-    hasOpencodeZen: true,
-    useOpenCodeFreeModels: true,
-    selectedOpenCodePrimaryModel: 'opencode/glm-4.7-free',
-    selectedOpenCodeSecondaryModel: 'opencode/gpt-5-nano',
-    selectedChutesPrimaryModel: 'chutes/kimi-k2.5',
-    selectedChutesSecondaryModel: 'chutes/minimax-m2.1',
-    hasTmux: false,
-    installSkills: false,
-    installCustomSkills: false,
-    setupMode: 'quick',
-  };
-}
-
-describe('dynamic-model-selection matrix', () => {
-  const scenarios = [
-    {
-      name: 'C1 openai+anthropic+chutes+opencode',
-      config: {
-        ...baseConfig(),
-        hasOpenAI: true,
-        hasAnthropic: true,
-        hasChutes: true,
-      },
-      expectedV1: {
-        oracle: 'openai/gpt-5.3-codex',
-        orchestrator: 'openai/gpt-5.3-codex',
-        fixer: 'chutes/minimax-m2.1',
-        designer: 'chutes/kimi-k2.5',
-        librarian: 'anthropic/claude-opus-4-6',
-        explorer: 'chutes/minimax-m2.1',
-      },
-      expectedV2: {
-        oracle: 'openai/gpt-5.3-codex',
-        orchestrator: 'openai/gpt-5.3-codex',
-        fixer: 'chutes/minimax-m2.1',
-        designer: 'chutes/kimi-k2.5',
-        librarian: 'anthropic/claude-opus-4-6',
-        explorer: 'chutes/minimax-m2.1',
-      },
-    },
-    {
-      name: 'C2 openai+copilot+zai+google+opencode',
-      config: {
-        ...baseConfig(),
-        hasOpenAI: true,
-        hasCopilot: true,
-        hasZaiPlan: true,
-        hasAntigravity: true,
-      },
-      expectedV1: {
-        oracle: 'google/antigravity-gemini-3.1-pro',
-        orchestrator: 'openai/gpt-5.3-codex',
-        fixer: 'openai/gpt-5.1-codex-mini',
-        designer: 'google/antigravity-gemini-3.1-pro',
-        librarian: 'zai-coding-plan/glm-4.7',
-        explorer: 'github-copilot/grok-code-fast-1',
-      },
-      expectedV2: {
-        oracle: 'google/antigravity-gemini-3.1-pro',
-        orchestrator: 'openai/gpt-5.3-codex',
-        fixer: 'openai/gpt-5.1-codex-mini',
-        designer: 'google/antigravity-gemini-3.1-pro',
-        librarian: 'zai-coding-plan/glm-4.7',
-        explorer: 'github-copilot/grok-code-fast-1',
-      },
-    },
-    {
-      name: 'C3 kimi+google+chutes+opencode',
-      config: {
-        ...baseConfig(),
-        hasKimi: true,
-        hasAntigravity: true,
-        hasChutes: true,
-      },
-      expectedV1: {
-        oracle: 'google/antigravity-gemini-3.1-pro',
-        orchestrator: 'chutes/kimi-k2.5',
-        fixer: 'google/antigravity-gemini-3.1-pro',
-        designer: 'chutes/kimi-k2.5',
-        librarian: 'kimi-for-coding/k2p5',
-        explorer: 'google/antigravity-gemini-3-flash',
-      },
-      expectedV2: {
-        oracle: 'google/antigravity-gemini-3.1-pro',
-        orchestrator: 'chutes/kimi-k2.5',
-        fixer: 'google/antigravity-gemini-3.1-pro',
-        designer: 'chutes/kimi-k2.5',
-        librarian: 'kimi-for-coding/k2p5',
-        explorer: 'google/antigravity-gemini-3-flash',
-      },
-    },
-    {
-      name: 'R1 anthropic+copilot+opencode',
-      config: { ...baseConfig(), hasAnthropic: true, hasCopilot: true },
-      expectedV1: {
-        oracle: 'anthropic/claude-opus-4-6',
-        orchestrator: 'github-copilot/grok-code-fast-1',
-        fixer: 'github-copilot/grok-code-fast-1',
-        designer: 'anthropic/claude-opus-4-6',
-        librarian: 'github-copilot/grok-code-fast-1',
-        explorer: 'github-copilot/grok-code-fast-1',
-      },
-      expectedV2: {
-        oracle: 'anthropic/claude-opus-4-6',
-        orchestrator: 'github-copilot/grok-code-fast-1',
-        fixer: 'github-copilot/grok-code-fast-1',
-        designer: 'anthropic/claude-opus-4-6',
-        librarian: 'github-copilot/grok-code-fast-1',
-        explorer: 'github-copilot/grok-code-fast-1',
-      },
-    },
-    {
-      name: 'R2 openai+kimi+zai+chutes+opencode',
-      config: {
-        ...baseConfig(),
-        hasOpenAI: true,
-        hasKimi: true,
-        hasZaiPlan: true,
-        hasChutes: true,
-      },
-      expectedV1: {
-        oracle: 'openai/gpt-5.3-codex',
-        orchestrator: 'openai/gpt-5.3-codex',
-        fixer: 'chutes/minimax-m2.1',
-        designer: 'chutes/kimi-k2.5',
-        librarian: 'zai-coding-plan/glm-4.7',
-        explorer: 'chutes/minimax-m2.1',
-      },
-      expectedV2: {
-        oracle: 'openai/gpt-5.3-codex',
-        orchestrator: 'openai/gpt-5.3-codex',
-        fixer: 'chutes/minimax-m2.1',
-        designer: 'chutes/kimi-k2.5',
-        librarian: 'zai-coding-plan/glm-4.7',
-        explorer: 'chutes/minimax-m2.1',
-      },
-    },
-    {
-      name: 'R3 google+anthropic+chutes+opencode',
-      config: {
-        ...baseConfig(),
-        hasAntigravity: true,
-        hasAnthropic: true,
-        hasChutes: true,
-      },
-      expectedV1: {
-        oracle: 'google/antigravity-gemini-3.1-pro',
-        orchestrator: 'chutes/kimi-k2.5',
-        fixer: 'google/antigravity-gemini-3.1-pro',
-        designer: 'anthropic/claude-opus-4-6',
-        librarian: 'chutes/minimax-m2.1',
-        explorer: 'google/antigravity-gemini-3-flash',
-      },
-      expectedV2: {
-        oracle: 'google/antigravity-gemini-3.1-pro',
-        orchestrator: 'chutes/kimi-k2.5',
-        fixer: 'google/antigravity-gemini-3.1-pro',
-        designer: 'anthropic/claude-opus-4-6',
-        librarian: 'chutes/minimax-m2.1',
-        explorer: 'google/antigravity-gemini-3-flash',
-      },
-    },
-  ] as const;
-
-  for (const scenario of scenarios) {
-    test(scenario.name, () => {
-      const v1 = buildDynamicModelPlan(CATALOG, scenario.config, undefined, {
-        scoringEngineVersion: 'v1',
-      });
-      const shadow = buildDynamicModelPlan(
-        CATALOG,
-        scenario.config,
-        undefined,
-        {
-          scoringEngineVersion: 'v2-shadow',
-        },
-      );
-      const v2 = buildDynamicModelPlan(CATALOG, scenario.config, undefined, {
-        scoringEngineVersion: 'v2',
-      });
-
-      expect(v1).not.toBeNull();
-      expect(v2).not.toBeNull();
-      expect(shadow).not.toBeNull();
-
-      expect(v1?.agents).toMatchObject(
-        Object.fromEntries(
-          Object.entries(scenario.expectedV1).map(([agent, model]) => [
-            agent,
-            { model },
-          ]),
-        ),
-      );
-      expect(v2?.agents).toMatchObject(
-        Object.fromEntries(
-          Object.entries(scenario.expectedV2).map(([agent, model]) => [
-            agent,
-            { model },
-          ]),
-        ),
-      );
-
-      expect(shadow?.agents).toEqual(v1?.agents);
-      expect(shadow?.scoring?.engineVersionApplied).toBe('v1');
-      expect(shadow?.scoring?.shadowCompared).toBe(true);
-    });
-  }
-});

+ 0 - 238
src/cli/dynamic-model-selection.test.ts

@@ -1,238 +0,0 @@
-/// <reference types="bun-types" />
-
-import { describe, expect, test } from 'bun:test';
-import {
-  buildDynamicModelPlan,
-  rankModelsV1WithBreakdown,
-} from './dynamic-model-selection';
-import type { DiscoveredModel, InstallConfig } from './types';
-
-function m(
-  input: Partial<DiscoveredModel> & { model: string },
-): DiscoveredModel {
-  const [providerID] = input.model.split('/');
-  return {
-    providerID: providerID ?? 'openai',
-    model: input.model,
-    name: input.name ?? input.model,
-    status: input.status ?? 'active',
-    contextLimit: input.contextLimit ?? 200000,
-    outputLimit: input.outputLimit ?? 32000,
-    reasoning: input.reasoning ?? true,
-    toolcall: input.toolcall ?? true,
-    attachment: input.attachment ?? false,
-    dailyRequestLimit: input.dailyRequestLimit,
-    costInput: input.costInput,
-    costOutput: input.costOutput,
-  };
-}
-
-function baseInstallConfig(): InstallConfig {
-  return {
-    hasKimi: false,
-    hasOpenAI: true,
-    hasAnthropic: false,
-    hasCopilot: true,
-    hasZaiPlan: true,
-    hasAntigravity: false,
-    hasChutes: true,
-    hasOpencodeZen: true,
-    useOpenCodeFreeModels: true,
-    selectedOpenCodePrimaryModel: 'opencode/glm-4.7-free',
-    selectedOpenCodeSecondaryModel: 'opencode/gpt-5-nano',
-    selectedChutesPrimaryModel: 'chutes/kimi-k2.5',
-    selectedChutesSecondaryModel: 'chutes/minimax-m2.1',
-    hasTmux: false,
-    installSkills: false,
-    installCustomSkills: false,
-    setupMode: 'quick',
-  };
-}
-
-describe('dynamic-model-selection', () => {
-  test('builds assignments and chains for all six agents', () => {
-    const plan = buildDynamicModelPlan(
-      [
-        m({ model: 'openai/gpt-5.3-codex', reasoning: true, toolcall: true }),
-        m({
-          model: 'openai/gpt-5.1-codex-mini',
-          reasoning: true,
-          toolcall: true,
-        }),
-        m({
-          model: 'github-copilot/grok-code-fast-1',
-          reasoning: true,
-          toolcall: true,
-        }),
-        m({
-          model: 'zai-coding-plan/glm-4.7',
-          reasoning: true,
-          toolcall: true,
-        }),
-        m({ model: 'chutes/kimi-k2.5', reasoning: true, toolcall: true }),
-        m({ model: 'chutes/minimax-m2.1', reasoning: true, toolcall: true }),
-      ],
-      baseInstallConfig(),
-    );
-
-    expect(plan).not.toBeNull();
-    const agents = plan?.agents ?? {};
-    const chains = plan?.chains ?? {};
-
-    expect(Object.keys(agents).sort()).toEqual([
-      'designer',
-      'explorer',
-      'fixer',
-      'librarian',
-      'oracle',
-      'orchestrator',
-    ]);
-    expect(agents.oracle?.model.startsWith('opencode/')).toBe(false);
-    expect(agents.orchestrator?.model.startsWith('opencode/')).toBe(false);
-    expect(chains.oracle.some((m: string) => m.startsWith('openai/'))).toBe(
-      true,
-    );
-    expect(chains.orchestrator).toContain('chutes/kimi-k2.5');
-    expect(chains.explorer).toContain('opencode/gpt-5-nano');
-    expect(chains.fixer[chains.fixer.length - 1]).toBe('opencode/gpt-5-nano');
-    expect(plan?.provenance?.oracle?.winnerLayer).toBe(
-      'dynamic-recommendation',
-    );
-    expect(plan?.scoring?.engineVersionApplied).toBe('v1');
-  });
-
-  test('supports v2-shadow mode without changing applied engine', () => {
-    const plan = buildDynamicModelPlan(
-      [
-        m({ model: 'openai/gpt-5.3-codex', reasoning: true, toolcall: true }),
-        m({ model: 'chutes/kimi-k2.5', reasoning: true, toolcall: true }),
-        m({ model: 'opencode/gpt-5-nano', reasoning: true, toolcall: true }),
-      ],
-      baseInstallConfig(),
-      undefined,
-      { scoringEngineVersion: 'v2-shadow' },
-    );
-
-    expect(plan).not.toBeNull();
-    expect(plan?.scoring?.engineVersionApplied).toBe('v1');
-    expect(plan?.scoring?.shadowCompared).toBe(true);
-    expect(plan?.scoring?.diffs?.oracle).toBeDefined();
-  });
-
-  test('balances provider usage when subscription mode is enabled', () => {
-    const plan = buildDynamicModelPlan(
-      [
-        m({ model: 'openai/gpt-5.3-codex', reasoning: true, toolcall: true }),
-        m({
-          model: 'openai/gpt-5.1-codex-mini',
-          reasoning: true,
-          toolcall: true,
-        }),
-        m({
-          model: 'zai-coding-plan/glm-4.7',
-          reasoning: true,
-          toolcall: true,
-        }),
-        m({
-          model: 'zai-coding-plan/glm-4.7-flash',
-          reasoning: true,
-          toolcall: true,
-        }),
-        m({
-          model: 'chutes/moonshotai/Kimi-K2.5-TEE',
-          reasoning: true,
-          toolcall: true,
-        }),
-        m({
-          model: 'chutes/Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8-TEE',
-          reasoning: true,
-          toolcall: true,
-        }),
-      ],
-      {
-        ...baseInstallConfig(),
-        hasCopilot: false,
-        balanceProviderUsage: true,
-      },
-      undefined,
-      { scoringEngineVersion: 'v2' },
-    );
-
-    expect(plan).not.toBeNull();
-    const usage = Object.values(plan?.agents ?? {}).reduce(
-      (acc, assignment) => {
-        const provider = assignment.model.split('/')[0] ?? 'unknown';
-        acc[provider] = (acc[provider] ?? 0) + 1;
-        return acc;
-      },
-      {} as Record<string, number>,
-    );
-
-    expect(usage.openai).toBe(2);
-    expect(usage['zai-coding-plan']).toBe(2);
-    expect(usage.chutes).toBe(2);
-  });
-
-  test('matches external signals for multi-segment chutes ids in v1', () => {
-    const ranked = rankModelsV1WithBreakdown(
-      [m({ model: 'chutes/Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8-TEE' })],
-      'fixer',
-      {
-        'qwen/qwen3-coder-480b-a35b-instruct': {
-          source: 'artificial-analysis',
-          qualityScore: 95,
-          codingScore: 92,
-        },
-      },
-    );
-
-    expect(ranked[0]?.externalSignalBoost).toBeGreaterThan(0);
-  });
-
-  test('prefers chutes kimi/minimax over qwen3 in v1 role scoring', () => {
-    const catalog = [
-      m({
-        model: 'chutes/Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8-TEE',
-        reasoning: true,
-        toolcall: true,
-      }),
-      m({
-        model: 'chutes/moonshotai/Kimi-K2.5-TEE',
-        reasoning: true,
-        toolcall: true,
-      }),
-      m({
-        model: 'chutes/minimax-m2.1',
-        reasoning: true,
-        toolcall: true,
-      }),
-    ];
-
-    const fixer = rankModelsV1WithBreakdown(catalog, 'fixer');
-    const explorer = rankModelsV1WithBreakdown(catalog, 'explorer');
-
-    expect(fixer[0]?.model).not.toContain('Qwen3-Coder-480B');
-    expect(explorer[0]?.model).toContain('minimax-m2.1');
-  });
-
-  test('does not apply a positive Gemini bonus in v1 scoring', () => {
-    const catalog = [
-      m({
-        model: 'google/antigravity-gemini-3.1-pro',
-        reasoning: true,
-        toolcall: true,
-      }),
-      m({ model: 'openai/gpt-5.3-codex', reasoning: true, toolcall: true }),
-    ];
-
-    const oracle = rankModelsV1WithBreakdown(catalog, 'oracle');
-    const orchestrator = rankModelsV1WithBreakdown(catalog, 'orchestrator');
-    const designer = rankModelsV1WithBreakdown(catalog, 'designer');
-    const librarian = rankModelsV1WithBreakdown(catalog, 'librarian');
-
-    expect(oracle[0]?.model).toBe('openai/gpt-5.3-codex');
-    expect(orchestrator[0]?.model).toBe('openai/gpt-5.3-codex');
-    expect(designer[0]?.model).toBe('openai/gpt-5.3-codex');
-    expect(librarian[0]?.model).toBe('openai/gpt-5.3-codex');
-  });
-});

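The balanced-usage test above tallies agent assignments per provider by splitting fully qualified `provider/model` ids. That counting reduce, extracted as a standalone sketch (the helper name is illustrative, not from the codebase):

```typescript
// Count how many assignments land on each provider, given ids like
// "openai/gpt-5.3-codex". Ids without a slash are bucketed as "unknown".
function countByProvider(models: string[]): Record<string, number> {
  const usage: Record<string, number> = {};
  for (const model of models) {
    const provider = model.split('/')[0] ?? 'unknown';
    usage[provider] = (usage[provider] ?? 0) + 1;
  }
  return usage;
}
```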
File diff suppressed because it is too large
+ 0 - 1348
src/cli/dynamic-model-selection.ts


+ 0 - 255
src/cli/external-rankings.ts

@@ -1,255 +0,0 @@
-import { buildModelKeyAliases } from './model-key-normalization';
-import type { ExternalModelSignal, ExternalSignalMap } from './types';
-
-interface ArtificialAnalysisResponse {
-  data?: Array<{
-    id?: string;
-    name?: string;
-    slug?: string;
-    model_creator?: {
-      slug?: string;
-    };
-    evaluations?: {
-      artificial_analysis_intelligence_index?: number;
-      artificial_analysis_coding_index?: number;
-      livecodebench?: number;
-    };
-    pricing?: {
-      price_1m_input_tokens?: number;
-      price_1m_output_tokens?: number;
-      price_1m_blended_3_to_1?: number;
-    };
-    median_time_to_first_token_seconds?: number;
-  }>;
-}
-
-interface OpenRouterModelsResponse {
-  data?: Array<{
-    id?: string;
-    pricing?: {
-      prompt?: string;
-      completion?: string;
-    };
-  }>;
-}
-
-function normalizeKey(input: string): string {
-  return input.trim().toLowerCase();
-}
-
-function baseAliases(key: string): string[] {
-  return buildModelKeyAliases(normalizeKey(key));
-}
-
-function providerScopedAlias(alias: string, providerPrefix?: string): string {
-  if (!providerPrefix || alias.includes('/')) return alias;
-  return `${providerPrefix}/${alias}`;
-}
-
-function mergeSignal(
-  existing: ExternalModelSignal | undefined,
-  incoming: ExternalModelSignal,
-): ExternalModelSignal {
-  if (!existing) return incoming;
-
-  return {
-    qualityScore: incoming.qualityScore ?? existing.qualityScore,
-    codingScore: incoming.codingScore ?? existing.codingScore,
-    latencySeconds: incoming.latencySeconds ?? existing.latencySeconds,
-    inputPricePer1M: incoming.inputPricePer1M ?? existing.inputPricePer1M,
-    outputPricePer1M: incoming.outputPricePer1M ?? existing.outputPricePer1M,
-    source: 'merged',
-  };
-}
-
-function providerPrefixFromCreator(creatorSlug?: string): string | undefined {
-  if (!creatorSlug) return undefined;
-  const slug = creatorSlug.toLowerCase();
-  if (slug.includes('openai')) return 'openai';
-  if (slug.includes('anthropic')) return 'anthropic';
-  if (slug.includes('google')) return 'google';
-  if (slug.includes('chutes')) return 'chutes';
-  if (slug.includes('copilot') || slug.includes('github'))
-    return 'github-copilot';
-  if (slug.includes('zai') || slug.includes('z-ai')) return 'zai-coding-plan';
-  if (slug.includes('kimi')) return 'kimi-for-coding';
-  if (slug.includes('opencode')) return 'opencode';
-  return undefined;
-}
-
-function parseOpenRouterPrice(value: string | undefined): number | undefined {
-  if (!value) return undefined;
-  const parsed = Number.parseFloat(value);
-  if (!Number.isFinite(parsed)) return undefined;
-  return parsed * 1_000_000;
-}
-
-async function fetchJsonWithTimeout(
-  url: string,
-  init: RequestInit,
-  timeoutMs: number,
-): Promise<Response> {
-  const controller = new AbortController();
-  const timer = setTimeout(() => controller.abort(), timeoutMs);
-
-  try {
-    return await fetch(url, {
-      ...init,
-      signal: controller.signal,
-    });
-  } finally {
-    clearTimeout(timer);
-  }
-}
-
-import { getEnv } from '../utils';
-
-async function fetchArtificialAnalysisSignals(
-  apiKey: string,
-): Promise<ExternalSignalMap> {
-  const response = await fetchJsonWithTimeout(
-    'https://artificialanalysis.ai/api/v2/data/llms/models',
-    {
-      headers: {
-        'x-api-key': apiKey,
-      },
-    },
-    8000,
-  );
-
-  if (!response.ok) {
-    throw new Error(
-      `Artificial Analysis request failed (${response.status} ${response.statusText})`,
-    );
-  }
-
-  const parsed = (await response.json()) as ArtificialAnalysisResponse;
-  const map: ExternalSignalMap = {};
-
-  for (const model of parsed.data ?? []) {
-    const baseSignal: ExternalModelSignal = {
-      qualityScore: model.evaluations?.artificial_analysis_intelligence_index,
-      codingScore:
-        model.evaluations?.artificial_analysis_coding_index ??
-        model.evaluations?.livecodebench,
-      latencySeconds: model.median_time_to_first_token_seconds,
-      inputPricePer1M:
-        model.pricing?.price_1m_input_tokens ??
-        model.pricing?.price_1m_blended_3_to_1,
-      outputPricePer1M:
-        model.pricing?.price_1m_output_tokens ??
-        model.pricing?.price_1m_blended_3_to_1,
-      source: 'artificial-analysis',
-    };
-
-    const id = model.id ? normalizeKey(model.id) : undefined;
-    const slug = model.slug ? normalizeKey(model.slug) : undefined;
-    const name = model.name ? normalizeKey(model.name) : undefined;
-    const providerPrefix = providerPrefixFromCreator(model.model_creator?.slug);
-
-    for (const key of [id, slug, name]) {
-      if (!key) continue;
-      for (const alias of baseAliases(key)) {
-        if (!providerPrefix || alias.includes('/')) {
-          map[alias] = mergeSignal(map[alias], baseSignal);
-        }
-
-        const scopedAlias = providerScopedAlias(alias, providerPrefix);
-        map[scopedAlias] = mergeSignal(map[scopedAlias], baseSignal);
-      }
-    }
-  }
-
-  return map;
-}
-
-async function fetchOpenRouterSignals(
-  apiKey: string,
-): Promise<ExternalSignalMap> {
-  const response = await fetchJsonWithTimeout(
-    'https://openrouter.ai/api/v1/models',
-    {
-      headers: {
-        Authorization: `Bearer ${apiKey}`,
-      },
-    },
-    8000,
-  );
-
-  if (!response.ok) {
-    throw new Error(
-      `OpenRouter request failed (${response.status} ${response.statusText})`,
-    );
-  }
-
-  const parsed = (await response.json()) as OpenRouterModelsResponse;
-  const map: ExternalSignalMap = {};
-
-  for (const model of parsed.data ?? []) {
-    if (!model.id) continue;
-    const key = normalizeKey(model.id);
-    const providerPrefix = key.split('/')[0];
-    const signal: ExternalModelSignal = {
-      inputPricePer1M: parseOpenRouterPrice(model.pricing?.prompt),
-      outputPricePer1M: parseOpenRouterPrice(model.pricing?.completion),
-      source: 'openrouter',
-    };
-
-    for (const alias of baseAliases(key)) {
-      if (alias.includes('/')) {
-        map[alias] = mergeSignal(map[alias], signal);
-      }
-
-      const scopedAlias = providerScopedAlias(alias, providerPrefix);
-      map[scopedAlias] = mergeSignal(map[scopedAlias], signal);
-    }
-  }
-
-  return map;
-}
-
-export async function fetchExternalModelSignals(options?: {
-  artificialAnalysisApiKey?: string;
-  openRouterApiKey?: string;
-}): Promise<{
-  signals: ExternalSignalMap;
-  warnings: string[];
-}> {
-  const warnings: string[] = [];
-  const aggregate: ExternalSignalMap = {};
-
-  const aaKey =
-    options?.artificialAnalysisApiKey ?? getEnv('ARTIFICIAL_ANALYSIS_API_KEY');
-  const orKey = options?.openRouterApiKey ?? getEnv('OPENROUTER_API_KEY');
-
-  const aaPromise: Promise<ExternalSignalMap> = aaKey
-    ? fetchArtificialAnalysisSignals(aaKey)
-    : Promise.resolve({});
-  const orPromise: Promise<ExternalSignalMap> = orKey
-    ? fetchOpenRouterSignals(orKey)
-    : Promise.resolve({});
-
-  const [aaResult, orResult] = await Promise.allSettled([aaPromise, orPromise]);
-
-  if (aaResult.status === 'fulfilled') {
-    for (const [key, signal] of Object.entries(aaResult.value)) {
-      aggregate[key] = mergeSignal(aggregate[key], signal);
-    }
-  } else if (aaKey) {
-    warnings.push(
-      `Artificial Analysis unavailable: ${aaResult.reason instanceof Error ? aaResult.reason.message : String(aaResult.reason)}`,
-    );
-  }
-
-  if (orResult.status === 'fulfilled') {
-    for (const [key, signal] of Object.entries(orResult.value)) {
-      aggregate[key] = mergeSignal(aggregate[key], signal);
-    }
-  } else if (orKey) {
-    warnings.push(
-      `OpenRouter unavailable: ${orResult.reason instanceof Error ? orResult.reason.message : String(orResult.reason)}`,
-    );
-  }
-
-  return { signals: aggregate, warnings };
-}

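The deleted `mergeSignal` precedence — incoming fields win, gaps fall back to the existing signal, and the merged record is re-tagged `merged` — is easy to verify in isolation. A sketch trimmed to two fields:

```typescript
interface Signal {
  qualityScore?: number;
  inputPricePer1M?: number;
  source: string;
}

// Incoming values take precedence; missing incoming fields fall back to
// the existing signal, and the result is marked as 'merged'.
function mergeSignal(existing: Signal | undefined, incoming: Signal): Signal {
  if (!existing) return incoming;
  return {
    qualityScore: incoming.qualityScore ?? existing.qualityScore,
    inputPricePer1M: incoming.inputPricePer1M ?? existing.inputPricePer1M,
    source: 'merged',
  };
}
```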
+ 8 - 49
src/cli/index.ts

@@ -10,38 +10,12 @@ function parseArgs(args: string[]): InstallArgs {
   for (const arg of args) {
     if (arg === '--no-tui') {
       result.tui = false;
-    } else if (arg.startsWith('--kimi=')) {
-      result.kimi = arg.split('=')[1] as BooleanArg;
-    } else if (arg.startsWith('--openai=')) {
-      result.openai = arg.split('=')[1] as BooleanArg;
-    } else if (arg.startsWith('--anthropic=')) {
-      result.anthropic = arg.split('=')[1] as BooleanArg;
-    } else if (arg.startsWith('--copilot=')) {
-      result.copilot = arg.split('=')[1] as BooleanArg;
-    } else if (arg.startsWith('--zai-plan=')) {
-      result.zaiPlan = arg.split('=')[1] as BooleanArg;
-    } else if (arg.startsWith('--antigravity=')) {
-      result.antigravity = arg.split('=')[1] as BooleanArg;
-    } else if (arg.startsWith('--chutes=')) {
-      result.chutes = arg.split('=')[1] as BooleanArg;
     } else if (arg.startsWith('--tmux=')) {
       result.tmux = arg.split('=')[1] as BooleanArg;
     } else if (arg.startsWith('--skills=')) {
       result.skills = arg.split('=')[1] as BooleanArg;
-    } else if (arg.startsWith('--opencode-free=')) {
-      result.opencodeFree = arg.split('=')[1] as BooleanArg;
-    } else if (arg.startsWith('--balanced-spend=')) {
-      result.balancedSpend = arg.split('=')[1] as BooleanArg;
-    } else if (arg.startsWith('--opencode-free-model=')) {
-      result.opencodeFreeModel = arg.split('=')[1];
-    } else if (arg.startsWith('--aa-key=')) {
-      result.aaKey = arg.slice('--aa-key='.length);
-    } else if (arg.startsWith('--openrouter-key=')) {
-      result.openrouterKey = arg.slice('--openrouter-key='.length);
     } else if (arg === '--dry-run') {
       result.dryRun = true;
-    } else if (arg === '--models-only') {
-      result.modelsOnly = true;
     } else if (arg === '-h' || arg === '--help') {
       printHelp();
       process.exit(0);
@@ -56,44 +30,29 @@ function printHelp(): void {
 oh-my-opencode-slim installer
 
 Usage: bunx oh-my-opencode-slim install [OPTIONS]
-       bunx oh-my-opencode-slim models [OPTIONS]
 
 Options:
-  --kimi=yes|no          Kimi API access (yes/no)
-  --openai=yes|no        OpenAI API access (yes/no)
-  --anthropic=yes|no     Anthropic access (yes/no)
-  --copilot=yes|no       GitHub Copilot access (yes/no)
-  --zai-plan=yes|no      ZAI Coding Plan access (yes/no)
-  --antigravity=yes|no   Antigravity/Google models (yes/no)
-  --chutes=yes|no        Chutes models (yes/no)
-  --opencode-free=yes|no Use OpenCode free models (opencode/*)
-  --balanced-spend=yes|no Evenly spread usage across selected providers when score gaps are within tolerance
-  --opencode-free-model  Preferred OpenCode model id or "auto"
-  --aa-key               Artificial Analysis API key (optional)
-  --openrouter-key       OpenRouter API key (optional)
   --tmux=yes|no          Enable tmux integration (yes/no)
   --skills=yes|no        Install recommended skills (yes/no)
-  --no-tui               Non-interactive mode (requires all flags)
-  --dry-run              Simulate install without writing files or requiring OpenCode
-  --models-only          Update model assignments only (skip plugin/auth/skills)
+  --no-tui               Non-interactive mode
+  --dry-run              Simulate install without writing files
   -h, --help             Show this help message
 
+The installer generates an OpenAI configuration by default.
+For alternative providers, see docs/provider-configurations.md.
+
 Examples:
   bunx oh-my-opencode-slim install
-  bunx oh-my-opencode-slim models
-  bunx oh-my-opencode-slim install --no-tui --kimi=yes --openai=yes --anthropic=yes --copilot=no --zai-plan=no --antigravity=yes --chutes=no --opencode-free=yes --balanced-spend=yes --opencode-free-model=auto --aa-key=YOUR_AA_KEY --openrouter-key=YOUR_OR_KEY --tmux=no --skills=yes
+  bunx oh-my-opencode-slim install --no-tui --tmux=no --skills=yes
 `);
 }
 
 async function main(): Promise<void> {
   const args = process.argv.slice(2);
 
-  if (args.length === 0 || args[0] === 'install' || args[0] === 'models') {
-    const hasSubcommand = args[0] === 'install' || args[0] === 'models';
+  if (args.length === 0 || args[0] === 'install') {
+    const hasSubcommand = args[0] === 'install';
     const installArgs = parseArgs(args.slice(hasSubcommand ? 1 : 0));
-    if (args[0] === 'models') {
-      installArgs.modelsOnly = true;
-    }
     const exitCode = await install(installArgs);
     process.exit(exitCode);
   } else if (args[0] === '-h' || args[0] === '--help') {

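After this hunk, the argument parser reduces to four flags. A self-contained sketch of the surviving logic (the `InstallArgs`/`BooleanArg` shapes are assumed to come from `src/cli/types.ts`):

```typescript
type BooleanArg = 'yes' | 'no';

interface InstallArgs {
  tui?: boolean;
  tmux?: BooleanArg;
  skills?: BooleanArg;
  dryRun?: boolean;
}

// Only --no-tui, --tmux=, --skills=, and --dry-run remain; every
// provider-selection flag was removed because OpenAI is now the default.
function parseArgs(args: string[]): InstallArgs {
  const result: InstallArgs = {};
  for (const arg of args) {
    if (arg === '--no-tui') {
      result.tui = false;
    } else if (arg.startsWith('--tmux=')) {
      result.tmux = arg.split('=')[1] as BooleanArg;
    } else if (arg.startsWith('--skills=')) {
      result.skills = arg.split('=')[1] as BooleanArg;
    } else if (arg === '--dry-run') {
      result.dryRun = true;
    }
  }
  return result;
}
```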
File diff suppressed because it is too large
+ 44 - 1326
src/cli/install.ts


+ 0 - 65
src/cli/model-selection.test.ts

@@ -1,65 +0,0 @@
-/// <reference types="bun-types" />
-
-import { describe, expect, test } from 'bun:test';
-import {
-  type ModelSelectionCandidate,
-  pickBestModel,
-  pickPrimaryAndSupport,
-  rankModels,
-} from './model-selection';
-
-interface Candidate extends ModelSelectionCandidate {
-  speed: number;
-  quality: number;
-}
-
-describe('model-selection', () => {
-  test('pickBestModel returns null for empty list', () => {
-    const best = pickBestModel([], () => 1);
-    expect(best).toBeNull();
-  });
-
-  test('rankModels applies deterministic tie-break by model id', () => {
-    const models: Candidate[] = [
-      { model: 'provider/zeta', speed: 1, quality: 1 },
-      { model: 'provider/alpha', speed: 1, quality: 1 },
-    ];
-
-    const ranked = rankModels(models, () => 10);
-    expect(ranked[0]?.candidate.model).toBe('provider/alpha');
-    expect(ranked[1]?.candidate.model).toBe('provider/zeta');
-  });
-
-  test('pickPrimaryAndSupport avoids duplicate support when possible', () => {
-    const models: Candidate[] = [
-      { model: 'provider/main', speed: 20, quality: 90 },
-      { model: 'provider/helper', speed: 95, quality: 60 },
-    ];
-
-    const picked = pickPrimaryAndSupport(
-      models,
-      {
-        primary: (model) => model.quality,
-        support: (model) => model.speed,
-      },
-      'provider/main',
-    );
-
-    expect(picked.primary?.model).toBe('provider/main');
-    expect(picked.support?.model).toBe('provider/helper');
-  });
-
-  test('pickPrimaryAndSupport falls back to same model when only one exists', () => {
-    const models: Candidate[] = [
-      { model: 'provider/solo', speed: 10, quality: 10 },
-    ];
-
-    const picked = pickPrimaryAndSupport(models, {
-      primary: (model) => model.quality,
-      support: (model) => model.speed,
-    });
-
-    expect(picked.primary?.model).toBe('provider/solo');
-    expect(picked.support?.model).toBe('provider/solo');
-  });
-});

+ 0 - 87
src/cli/model-selection.ts

@@ -1,87 +0,0 @@
-export interface ModelSelectionCandidate {
-  model: string;
-  status?: 'alpha' | 'beta' | 'deprecated' | 'active';
-  contextLimit?: number;
-  outputLimit?: number;
-  reasoning?: boolean;
-  toolcall?: boolean;
-  attachment?: boolean;
-  tags?: string[];
-  meta?: Record<string, unknown>;
-}
-
-export interface RankedModel<T extends ModelSelectionCandidate> {
-  candidate: T;
-  score: number;
-}
-
-export interface SelectionOptions<T extends ModelSelectionCandidate> {
-  excludeModels?: string[];
-  tieBreaker?: (left: T, right: T) => number;
-}
-
-export type ScoreFunction<T extends ModelSelectionCandidate> = (
-  candidate: T,
-) => number;
-
-export interface RoleScoring<T extends ModelSelectionCandidate> {
-  primary: ScoreFunction<T>;
-  support: ScoreFunction<T>;
-}
-
-function defaultTieBreaker<T extends ModelSelectionCandidate>(
-  left: T,
-  right: T,
-): number {
-  return left.model.localeCompare(right.model);
-}
-
-export function rankModels<T extends ModelSelectionCandidate>(
-  models: T[],
-  scoreFn: ScoreFunction<T>,
-  options: SelectionOptions<T> = {},
-): RankedModel<T>[] {
-  const excluded = new Set(options.excludeModels ?? []);
-  const tieBreaker = options.tieBreaker ?? defaultTieBreaker;
-
-  return models
-    .filter((model) => !excluded.has(model.model))
-    .map((candidate) => ({
-      candidate,
-      score: scoreFn(candidate),
-    }))
-    .sort((left, right) => {
-      if (left.score !== right.score) return right.score - left.score;
-      return tieBreaker(left.candidate, right.candidate);
-    });
-}
-
-export function pickBestModel<T extends ModelSelectionCandidate>(
-  models: T[],
-  scoreFn: ScoreFunction<T>,
-  options: SelectionOptions<T> = {},
-): T | null {
-  return rankModels(models, scoreFn, options)[0]?.candidate ?? null;
-}
-
-export function pickPrimaryAndSupport<T extends ModelSelectionCandidate>(
-  models: T[],
-  scoring: RoleScoring<T>,
-  preferredPrimaryModel?: string,
-): { primary: T | null; support: T | null } {
-  if (models.length === 0) return { primary: null, support: null };
-
-  const preferredPrimary = preferredPrimaryModel
-    ? models.find((candidate) => candidate.model === preferredPrimaryModel)
-    : undefined;
-  const primary = preferredPrimary ?? pickBestModel(models, scoring.primary);
-
-  if (!primary) return { primary: null, support: null };
-
-  const support =
-    pickBestModel(models, scoring.support, {
-      excludeModels: [primary.model],
-    }) ?? pickBestModel(models, scoring.support);
-
-  return { primary, support };
-}

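The core behavior of the removed `rankModels` — sort by score descending, with a deterministic tie-break on the model id — in miniature:

```typescript
interface Candidate {
  model: string;
}

// Higher score first; equal scores fall back to localeCompare on the
// model id so ranking is stable across runs and input orderings.
function rankModels<T extends Candidate>(
  models: T[],
  scoreFn: (candidate: T) => number,
): { candidate: T; score: number }[] {
  return models
    .map((candidate) => ({ candidate, score: scoreFn(candidate) }))
    .sort((left, right) =>
      left.score !== right.score
        ? right.score - left.score
        : left.candidate.model.localeCompare(right.candidate.model),
    );
}
```

This is why the deleted test expected `provider/alpha` ahead of `provider/zeta` when both scored 10.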
+ 0 - 63
src/cli/opencode-models.test.ts

@@ -1,63 +0,0 @@
-/// <reference types="bun-types" />
-
-import { describe, expect, test } from 'bun:test';
-import { parseOpenCodeModelsVerboseOutput } from './opencode-models';
-
-const SAMPLE_OUTPUT = `
-opencode/gpt-5-nano
-{
-  "id": "gpt-5-nano",
-  "providerID": "opencode",
-  "name": "GPT 5 Nano",
-  "status": "active",
-  "cost": { "input": 0, "output": 0, "cache": { "read": 0, "write": 0 } },
-  "limit": { "context": 400000, "output": 128000 },
-  "capabilities": { "reasoning": true, "toolcall": true, "attachment": true }
-}
-chutes/minimax-m2.1-5000
-{
-  "id": "minimax-m2.1-5000",
-  "providerID": "chutes",
-  "name": "MiniMax M2.1 5000 req/day",
-  "status": "active",
-  "cost": { "input": 0, "output": 0, "cache": { "read": 0, "write": 0 } },
-  "limit": { "context": 500000, "output": 64000 },
-  "capabilities": { "reasoning": true, "toolcall": true, "attachment": false }
-}
-chutes/qwen3-coder-30b
-{
-  "id": "qwen3-coder-30b",
-  "providerID": "chutes",
-  "name": "Qwen3 Coder 30B",
-  "status": "active",
-  "cost": { "input": 0.4, "output": 0.8, "cache": { "read": 0, "write": 0 } },
-  "limit": { "context": 262144, "output": 32768 },
-  "capabilities": { "reasoning": true, "toolcall": true, "attachment": false }
-}
-`;
-
-describe('opencode-models parser', () => {
-  test('filters by provider and keeps free models', () => {
-    const models = parseOpenCodeModelsVerboseOutput(SAMPLE_OUTPUT, 'opencode');
-    expect(models.length).toBe(1);
-    expect(models[0]?.model).toBe('opencode/gpt-5-nano');
-    expect(models[0]?.providerID).toBe('opencode');
-  });
-
-  test('extracts chutes daily request limit from model metadata', () => {
-    const models = parseOpenCodeModelsVerboseOutput(SAMPLE_OUTPUT, 'chutes');
-    expect(models.length).toBe(1);
-    expect(models[0]?.model).toBe('chutes/minimax-m2.1-5000');
-    expect(models[0]?.dailyRequestLimit).toBe(5000);
-  });
-
-  test('includes non-free chutes models when freeOnly is disabled', () => {
-    const models = parseOpenCodeModelsVerboseOutput(
-      SAMPLE_OUTPUT,
-      'chutes',
-      false,
-    );
-    expect(models.length).toBe(2);
-    expect(models[1]?.model).toBe('chutes/qwen3-coder-30b');
-  });
-});

+ 0 - 252
src/cli/opencode-models.ts

@@ -1,252 +0,0 @@
-import { resolveOpenCodePath } from './system';
-import type { DiscoveredModel, OpenCodeFreeModel } from './types';
-
-interface OpenCodeModelVerboseRecord {
-  id: string;
-  providerID: string;
-  name?: string;
-  status?: 'alpha' | 'beta' | 'deprecated' | 'active';
-  cost?: {
-    input?: number;
-    output?: number;
-    cache?: {
-      read?: number;
-      write?: number;
-    };
-  };
-  limit?: {
-    context?: number;
-    output?: number;
-  };
-  capabilities?: {
-    reasoning?: boolean;
-    toolcall?: boolean;
-    attachment?: boolean;
-  };
-  quota?: {
-    requestsPerDay?: number;
-  };
-  meta?: {
-    requestsPerDay?: number;
-    dailyLimit?: number;
-  };
-}
-
-function isFreeModel(record: OpenCodeModelVerboseRecord): boolean {
-  const inputCost = record.cost?.input ?? 0;
-  const outputCost = record.cost?.output ?? 0;
-  const cacheReadCost = record.cost?.cache?.read ?? 0;
-  const cacheWriteCost = record.cost?.cache?.write ?? 0;
-
-  return (
-    inputCost === 0 &&
-    outputCost === 0 &&
-    cacheReadCost === 0 &&
-    cacheWriteCost === 0
-  );
-}
-
-function parseDailyRequestLimit(
-  record: OpenCodeModelVerboseRecord,
-): number | undefined {
-  const explicitLimit =
-    record.quota?.requestsPerDay ??
-    record.meta?.requestsPerDay ??
-    record.meta?.dailyLimit;
-
-  if (typeof explicitLimit === 'number' && Number.isFinite(explicitLimit)) {
-    return explicitLimit;
-  }
-
-  const source = `${record.id} ${record.name ?? ''}`.toLowerCase();
-  const match = source.match(
-    /\b(300|2000|5000)\b(?:\s*(?:req|requests|rpd|\/day))?/,
-  );
-  if (!match) return undefined;
-  const parsed = Number.parseInt(match[1], 10);
-  return Number.isFinite(parsed) ? parsed : undefined;
-}
-
-function normalizeDiscoveredModel(
-  record: OpenCodeModelVerboseRecord,
-  providerFilter?: string,
-): DiscoveredModel | null {
-  if (providerFilter && record.providerID !== providerFilter) return null;
-
-  const fullModel = `${record.providerID}/${record.id}`;
-
-  return {
-    providerID: record.providerID,
-    model: fullModel,
-    name: record.name ?? record.id,
-    status: record.status ?? 'active',
-    contextLimit: record.limit?.context ?? 0,
-    outputLimit: record.limit?.output ?? 0,
-    reasoning: record.capabilities?.reasoning === true,
-    toolcall: record.capabilities?.toolcall === true,
-    attachment: record.capabilities?.attachment === true,
-    dailyRequestLimit: parseDailyRequestLimit(record),
-    costInput: record.cost?.input,
-    costOutput: record.cost?.output,
-  };
-}
-
-export function parseOpenCodeModelsVerboseOutput(
-  output: string,
-  providerFilter?: string,
-  freeOnly = true,
-): DiscoveredModel[] {
-  const lines = output.split(/\r?\n/);
-  const models: DiscoveredModel[] = [];
-  const modelHeaderPattern = /^[a-z0-9-]+\/.+$/i;
-
-  for (let index = 0; index < lines.length; index++) {
-    const line = lines[index]?.trim();
-    if (!line || !line.includes('/')) continue;
-
-    if (!modelHeaderPattern.test(line)) continue;
-
-    let jsonStart = -1;
-    for (let search = index + 1; search < lines.length; search++) {
-      if (lines[search]?.trim().startsWith('{')) {
-        jsonStart = search;
-        break;
-      }
-
-      if (modelHeaderPattern.test(lines[search]?.trim() ?? '')) {
-        break;
-      }
-    }
-
-    if (jsonStart === -1) continue;
-
-    let braceDepth = 0;
-    const jsonLines: string[] = [];
-    let jsonEnd = -1;
-
-    for (let cursor = jsonStart; cursor < lines.length; cursor++) {
-      const current = lines[cursor] ?? '';
-      jsonLines.push(current);
-
-      for (const char of current) {
-        if (char === '{') braceDepth++;
-        if (char === '}') braceDepth--;
-      }
-
-      if (braceDepth === 0 && jsonLines.length > 0) {
-        jsonEnd = cursor;
-        break;
-      }
-    }
-
-    if (jsonEnd === -1) continue;
-
-    try {
-      const parsed = JSON.parse(
-        jsonLines.join('\n'),
-      ) as OpenCodeModelVerboseRecord;
-      const normalized = normalizeDiscoveredModel(parsed, providerFilter);
-      if (!normalized) continue;
-      if (freeOnly && !isFreeModel(parsed)) continue;
-      if (normalized) models.push(normalized);
-    } catch {
-      // Ignore malformed blocks and continue parsing the next model.
-    }
-
-    index = jsonEnd;
-  }
-
-  return models;
-}
-
-async function discoverModelsByProvider(
-  providerID?: string,
-  freeOnly = true,
-): Promise<{
-  models: DiscoveredModel[];
-  error?: string;
-}> {
-  try {
-    const opencodePath = resolveOpenCodePath();
-    const proc = Bun.spawn([opencodePath, 'models', '--refresh', '--verbose'], {
-      stdout: 'pipe',
-      stderr: 'pipe',
-    });
-
-    const stdout = await new Response(proc.stdout).text();
-    const stderr = await new Response(proc.stderr).text();
-    await proc.exited;
-
-    if (proc.exitCode !== 0) {
-      return {
-        models: [],
-        error: stderr.trim() || 'Failed to fetch OpenCode models.',
-      };
-    }
-
-    return {
-      models: parseOpenCodeModelsVerboseOutput(stdout, providerID, freeOnly),
-    };
-  } catch {
-    return {
-      models: [],
-      error: 'Unable to run `opencode models --refresh --verbose`.',
-    };
-  }
-}
-
-export async function discoverModelCatalog(): Promise<{
-  models: DiscoveredModel[];
-  error?: string;
-}> {
-  try {
-    const opencodePath = resolveOpenCodePath();
-    const proc = Bun.spawn([opencodePath, 'models', '--refresh', '--verbose'], {
-      stdout: 'pipe',
-      stderr: 'pipe',
-    });
-
-    const stdout = await new Response(proc.stdout).text();
-    const stderr = await new Response(proc.stderr).text();
-    await proc.exited;
-
-    if (proc.exitCode !== 0) {
-      return {
-        models: [],
-        error: stderr.trim() || 'Failed to fetch OpenCode models.',
-      };
-    }
-
-    return {
-      models: parseOpenCodeModelsVerboseOutput(stdout, undefined, false),
-    };
-  } catch {
-    return {
-      models: [],
-      error: 'Unable to run `opencode models --refresh --verbose`.',
-    };
-  }
-}
-
-export async function discoverOpenCodeFreeModels(): Promise<{
-  models: OpenCodeFreeModel[];
-  error?: string;
-}> {
-  const result = await discoverModelsByProvider('opencode', true);
-  return { models: result.models as OpenCodeFreeModel[], error: result.error };
-}
-
-export async function discoverProviderFreeModels(providerID: string): Promise<{
-  models: OpenCodeFreeModel[];
-  error?: string;
-}> {
-  const result = await discoverModelsByProvider(providerID, true);
-  return { models: result.models as OpenCodeFreeModel[], error: result.error };
-}
-
-export async function discoverProviderModels(providerID: string): Promise<{
-  models: DiscoveredModel[];
-  error?: string;
-}> {
-  return discoverModelsByProvider(providerID, false);
-}

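The interesting part of the removed parser is the brace-depth scan: each `provider/model` header in the verbose output is followed by a pretty-printed JSON object, and object boundaries are found by tracking `{`/`}` nesting. A simplified sketch (like the original, it does not account for braces inside JSON string values):

```typescript
// Extract every top-level pretty-printed JSON object from mixed output,
// skipping non-JSON lines (such as "provider/model" headers) between blocks.
function extractJsonBlocks(output: string): unknown[] {
  const lines = output.split(/\r?\n/);
  const blocks: unknown[] = [];
  let depth = 0;
  let buffer: string[] = [];

  for (const line of lines) {
    if (depth === 0 && !line.trim().startsWith('{')) continue;
    buffer.push(line);
    for (const char of line) {
      if (char === '{') depth++;
      if (char === '}') depth--;
    }
    if (depth === 0 && buffer.length > 0) {
      try {
        blocks.push(JSON.parse(buffer.join('\n')));
      } catch {
        // Ignore malformed blocks, as the deleted parser did.
      }
      buffer = [];
    }
  }
  return blocks;
}
```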
+ 0 - 63
src/cli/opencode-selection.ts

@@ -1,63 +0,0 @@
-import {
-  pickBestModel,
-  pickPrimaryAndSupport,
-  type ScoreFunction,
-} from './model-selection';
-import type { OpenCodeFreeModel } from './types';
-
-const scoreOpenCodePrimaryForCoding: ScoreFunction<OpenCodeFreeModel> = (
-  model,
-) => {
-  return (
-    (model.reasoning ? 100 : 0) +
-    (model.toolcall ? 80 : 0) +
-    (model.attachment ? 20 : 0) +
-    Math.min(model.contextLimit, 1_000_000) / 10_000 +
-    Math.min(model.outputLimit, 300_000) / 10_000 +
-    (model.status === 'active' ? 10 : 0)
-  );
-};
-
-function speedBonus(modelName: string): number {
-  const lower = modelName.toLowerCase();
-  let score = 0;
-  if (lower.includes('nano')) score += 60;
-  if (lower.includes('flash')) score += 45;
-  if (lower.includes('mini')) score += 25;
-  if (lower.includes('preview')) score += 10;
-  return score;
-}
-
-const scoreOpenCodeSupportForCoding: ScoreFunction<OpenCodeFreeModel> = (
-  model,
-) => {
-  return (
-    (model.toolcall ? 90 : 0) +
-    (model.reasoning ? 50 : 0) +
-    speedBonus(model.model) +
-    Math.min(model.contextLimit, 400_000) / 20_000 +
-    (model.status === 'active' ? 5 : 0)
-  );
-};
-
-export function pickBestCodingOpenCodeModel(
-  models: OpenCodeFreeModel[],
-): OpenCodeFreeModel | null {
-  return pickBestModel(models, scoreOpenCodePrimaryForCoding);
-}
-
-export function pickSupportOpenCodeModel(
-  models: OpenCodeFreeModel[],
-  primaryModel?: string,
-): OpenCodeFreeModel | null {
-  const { support } = pickPrimaryAndSupport(
-    models,
-    {
-      primary: scoreOpenCodePrimaryForCoding,
-      support: scoreOpenCodeSupportForCoding,
-    },
-    primaryModel,
-  );
-
-  return support;
-}

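The scoring functions deleted above fed a generic score-and-pick selector from the (also deleted) `model-selection` module. A self-contained sketch of that pattern, assuming `pickBestModel` simply returned the highest-scoring candidate; the model shape and weights mirror the deleted `scoreOpenCodePrimaryForCoding`:

```typescript
type ScoreFunction<T> = (model: T) => number;

// Assumed shape of the removed pickBestModel helper: linear scan, max score wins.
function pickBestModel<T>(models: T[], score: ScoreFunction<T>): T | null {
  let best: T | null = null;
  let bestScore = -Infinity;
  for (const m of models) {
    const s = score(m);
    if (s > bestScore) {
      bestScore = s;
      best = m;
    }
  }
  return best;
}

type FreeModel = {
  model: string;
  reasoning: boolean;
  toolcall: boolean;
  contextLimit: number;
};

// Weights follow the deleted primary scorer: reasoning and tool-calling
// dominate, with a capped context-size bonus as a tiebreaker.
const scorePrimary: ScoreFunction<FreeModel> = (m) =>
  (m.reasoning ? 100 : 0) +
  (m.toolcall ? 80 : 0) +
  Math.min(m.contextLimit, 1_000_000) / 10_000;

const picked = pickBestModel(
  [
    { model: 'fast-no-reasoning', reasoning: false, toolcall: true, contextLimit: 128_000 },
    { model: 'reasoning-toolcall', reasoning: true, toolcall: true, contextLimit: 200_000 },
  ],
  scorePrimary,
);
// picked?.model === 'reasoning-toolcall'
```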
src/cli/precedence-resolver.test.ts (+0 −37)

@@ -1,37 +0,0 @@
-/// <reference types="bun-types" />
-
-import { describe, expect, test } from 'bun:test';
-import { resolveAgentWithPrecedence } from './precedence-resolver';
-
-describe('precedence-resolver', () => {
-  test('resolves deterministic winner with provenance', () => {
-    const result = resolveAgentWithPrecedence({
-      agentName: 'oracle',
-      manualUserPlan: ['openai/gpt-5.3-codex', 'openai/gpt-5.1-codex-mini'],
-      dynamicRecommendation: ['anthropic/claude-opus-4-6'],
-      providerFallbackPolicy: ['chutes/kimi-k2.5'],
-      systemDefault: ['opencode/big-pickle'],
-    });
-
-    expect(result.model).toBe('openai/gpt-5.3-codex');
-    expect(result.provenance.winnerLayer).toBe('manual-user-plan');
-    expect(result.chain).toEqual([
-      'openai/gpt-5.3-codex',
-      'openai/gpt-5.1-codex-mini',
-      'anthropic/claude-opus-4-6',
-      'chutes/kimi-k2.5',
-      'opencode/big-pickle',
-    ]);
-  });
-
-  test('uses system default when no other layer is provided', () => {
-    const result = resolveAgentWithPrecedence({
-      agentName: 'explorer',
-      systemDefault: ['opencode/gpt-5-nano'],
-    });
-
-    expect(result.model).toBe('opencode/gpt-5-nano');
-    expect(result.provenance.winnerLayer).toBe('system-default');
-    expect(result.chain).toEqual(['opencode/gpt-5-nano']);
-  });
-});

src/cli/precedence-resolver.ts (+0 −93)

@@ -1,93 +0,0 @@
-import type { AgentResolutionProvenance, ResolutionLayerName } from './types';
-
-export interface AgentLayerInput {
-  agentName: string;
-  openCodeDirectOverride?: string;
-  manualUserPlan?: string[];
-  pinnedModel?: string;
-  dynamicRecommendation?: string[];
-  providerFallbackPolicy?: string[];
-  systemDefault: string[];
-}
-
-export interface ResolvedAgentLayerResult {
-  model: string;
-  chain: string[];
-  provenance: AgentResolutionProvenance;
-}
-
-type LayerCandidate = {
-  layer: ResolutionLayerName;
-  models: string[];
-};
-
-function dedupe(models: Array<string | undefined>): string[] {
-  const seen = new Set<string>();
-  const result: string[] = [];
-  for (const model of models) {
-    if (!model || seen.has(model)) continue;
-    seen.add(model);
-    result.push(model);
-  }
-  return result;
-}
-
-function buildLayerOrder(input: AgentLayerInput): LayerCandidate[] {
-  return [
-    {
-      layer: 'opencode-direct-override',
-      models: input.openCodeDirectOverride
-        ? [input.openCodeDirectOverride]
-        : [],
-    },
-    {
-      layer: 'manual-user-plan',
-      models: input.manualUserPlan ?? [],
-    },
-    {
-      layer: 'pinned-model',
-      models: input.pinnedModel ? [input.pinnedModel] : [],
-    },
-    {
-      layer: 'dynamic-recommendation',
-      models: input.dynamicRecommendation ?? [],
-    },
-    {
-      layer: 'provider-fallback-policy',
-      models: input.providerFallbackPolicy ?? [],
-    },
-    {
-      layer: 'system-default',
-      models: input.systemDefault,
-    },
-  ];
-}
-
-export function resolveAgentWithPrecedence(
-  input: AgentLayerInput,
-): ResolvedAgentLayerResult {
-  const ordered = buildLayerOrder(input);
-  const firstWinningIndex = ordered.findIndex(
-    (layer) => layer.models.length > 0,
-  );
-  const winnerIndex =
-    firstWinningIndex >= 0 ? firstWinningIndex : ordered.length - 1;
-  const winnerLayer = ordered[winnerIndex];
-
-  const chain = dedupe(
-    ordered
-      .slice(winnerIndex)
-      .flatMap((layer) => layer.models)
-      .concat(input.systemDefault),
-  );
-  const model = chain[0] ?? input.systemDefault[0] ?? 'opencode/big-pickle';
-
-  return {
-    model,
-    chain,
-    provenance: {
-      winnerLayer: winnerLayer?.layer ?? 'system-default',
-      winnerModel: model,
-    },
-  };
-}

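The removed resolver's core behavior: walk a fixed layer order, pick the first non-empty layer as the winner, then build a deduped fallback chain from that layer downward. The same logic in miniature (layer names in comments match the deleted module; this stripped-down version omits provenance):

```typescript
// Each inner array is one precedence layer, highest priority first.
function resolve(layers: string[][]): { model: string; chain: string[] } {
  const first = layers.findIndex((l) => l.length > 0);
  const from = first >= 0 ? first : layers.length - 1;
  // Chain = winner layer plus everything below it, deduped in order.
  const chain = [...new Set(layers.slice(from).flat())];
  return { model: chain[0], chain };
}

const result = resolve([
  [],                          // opencode-direct-override (unset)
  ['openai/gpt-5.3-codex'],    // manual-user-plan: wins
  ['anthropic/claude-opus-4-6'], // dynamic-recommendation
  ['opencode/big-pickle'],     // system-default
]);
// result.model === 'openai/gpt-5.3-codex'
// result.chain runs from the manual plan down to the system default
```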
src/cli/providers.test.ts (+21 −435)

@@ -1,236 +1,53 @@
 /// <reference types="bun-types" />
 
 import { describe, expect, test } from 'bun:test';
-import {
-  generateAntigravityMixedPreset,
-  generateLiteConfig,
-  MODEL_MAPPINGS,
-} from './providers';
+import { generateLiteConfig, MODEL_MAPPINGS } from './providers';
 
 describe('providers', () => {
-  test('generateLiteConfig generates kimi config when only kimi selected', () => {
-    const config = generateLiteConfig({
-      hasAntigravity: false,
-      hasKimi: true,
-      hasOpenAI: false,
-      hasOpencodeZen: false,
-      hasTmux: false,
-      installSkills: false,
-      installCustomSkills: false,
-    });
-
-    expect(config.preset).toBe('kimi');
-    const agents = (config.presets as any).kimi;
-    expect(agents).toBeDefined();
-    expect(agents.orchestrator.model).toBe('kimi-for-coding/k2p5');
-    expect(agents.orchestrator.variant).toBeUndefined();
-    expect(agents.fixer.model).toBe('kimi-for-coding/k2p5');
-    expect(agents.fixer.variant).toBe('low');
-    // Should NOT include other presets
-    expect((config.presets as any).openai).toBeUndefined();
-    expect((config.presets as any)['zen-free']).toBeUndefined();
+  test('MODEL_MAPPINGS has exactly 4 providers', () => {
+    const keys = Object.keys(MODEL_MAPPINGS);
+    expect(keys.sort()).toEqual(['copilot', 'kimi', 'openai', 'zai-plan']);
   });
 
-  test('generateLiteConfig generates kimi-openai preset when both selected', () => {
+  test('generateLiteConfig always generates openai preset', () => {
     const config = generateLiteConfig({
-      hasAntigravity: false,
-      hasKimi: true,
-      hasOpenAI: true,
-      hasOpencodeZen: false,
       hasTmux: false,
       installSkills: false,
       installCustomSkills: false,
     });
 
-    expect(config.preset).toBe('kimi');
-    const agents = (config.presets as any).kimi;
+    expect(config.preset).toBe('openai');
+    const agents = (config.presets as any).openai;
     expect(agents).toBeDefined();
-    expect(agents.orchestrator.model).toBe('kimi-for-coding/k2p5');
+    expect(agents.orchestrator.model).toBe('openai/gpt-5.4');
     expect(agents.orchestrator.variant).toBeUndefined();
-    // Oracle uses OpenAI when both kimi and openai are enabled
-    expect(agents.oracle.model).toBe('openai/gpt-5.3-codex');
-    expect(agents.oracle.variant).toBe('high');
-    // Should NOT include other presets
-    expect((config.presets as any).openai).toBeUndefined();
-    expect((config.presets as any)['zen-free']).toBeUndefined();
+    expect(agents.fixer.model).toBe('openai/gpt-5-codex');
+    expect(agents.fixer.variant).toBe('low');
   });
 
-  test('generateLiteConfig generates openai preset when only openai selected', () => {
+  test('generateLiteConfig uses correct OpenAI models', () => {
     const config = generateLiteConfig({
-      hasAntigravity: false,
-      hasKimi: false,
-      hasOpenAI: true,
-      hasOpencodeZen: false,
       hasTmux: false,
       installSkills: false,
       installCustomSkills: false,
     });
 
-    expect(config.preset).toBe('openai');
     const agents = (config.presets as any).openai;
-    expect(agents).toBeDefined();
     expect(agents.orchestrator.model).toBe(
       MODEL_MAPPINGS.openai.orchestrator.model,
     );
-    expect(agents.orchestrator.variant).toBeUndefined();
-    // Should NOT include other presets
-    expect((config.presets as any).kimi).toBeUndefined();
-    expect((config.presets as any)['zen-free']).toBeUndefined();
-  });
-
-  test('generateLiteConfig generates chutes preset when only chutes selected', () => {
-    const config = generateLiteConfig({
-      hasAntigravity: false,
-      hasKimi: false,
-      hasOpenAI: false,
-      hasChutes: true,
-      hasOpencodeZen: false,
-      hasTmux: false,
-      installSkills: false,
-      installCustomSkills: false,
-      selectedChutesPrimaryModel: 'chutes/kimi-k2.5',
-      selectedChutesSecondaryModel: 'chutes/minimax-m2.1',
-    });
-
-    expect(config.preset).toBe('chutes');
-    const agents = (config.presets as any).chutes;
-    expect(agents).toBeDefined();
-    expect(agents.orchestrator.model).toBe('chutes/kimi-k2.5');
-    expect(agents.oracle.model).toBe('chutes/kimi-k2.5');
-    expect(agents.designer.model).toBe('chutes/kimi-k2.5');
-    expect(agents.explorer.model).toBe('chutes/minimax-m2.1');
-    expect(agents.librarian.model).toBe('chutes/minimax-m2.1');
-    expect(agents.fixer.model).toBe('chutes/minimax-m2.1');
-  });
-
-  test('generateLiteConfig generates anthropic preset when only anthropic selected', () => {
-    const config = generateLiteConfig({
-      hasAntigravity: false,
-      hasKimi: false,
-      hasOpenAI: false,
-      hasAnthropic: true,
-      hasCopilot: false,
-      hasZaiPlan: false,
-      hasChutes: false,
-      hasOpencodeZen: false,
-      hasTmux: false,
-      installSkills: false,
-      installCustomSkills: false,
-    });
-
-    expect(config.preset).toBe('anthropic');
-    const agents = (config.presets as any).anthropic;
-    expect(agents.orchestrator.model).toBe('anthropic/claude-opus-4-6');
-    expect(agents.oracle.model).toBe('anthropic/claude-opus-4-6');
-    expect(agents.explorer.model).toBe('anthropic/claude-haiku-4-5');
-  });
-
-  test('generateLiteConfig prefers Chutes Kimi in mixed openai/antigravity when chutes is enabled', () => {
-    const config = generateLiteConfig({
-      hasAntigravity: true,
-      hasKimi: false,
-      hasOpenAI: true,
-      hasChutes: true,
-      hasOpencodeZen: true,
-      useOpenCodeFreeModels: true,
-      selectedOpenCodePrimaryModel: 'opencode/glm-4.7-free',
-      selectedOpenCodeSecondaryModel: 'opencode/gpt-5-nano',
-      selectedChutesPrimaryModel: 'chutes/kimi-k2.5',
-      selectedChutesSecondaryModel: 'chutes/minimax-m2.1',
-      hasTmux: false,
-      installSkills: false,
-      installCustomSkills: false,
-    });
-
-    expect(config.preset).toBe('antigravity-mixed-openai');
-    const agents = (config.presets as any)['antigravity-mixed-openai'];
-    expect(agents.orchestrator.model).toBe('chutes/kimi-k2.5');
-    expect(agents.oracle.model).toBe('openai/gpt-5.3-codex');
-    expect(agents.explorer.model).toBe('opencode/gpt-5-nano');
-  });
-
-  test('generateLiteConfig emits fallback chains for six agents', () => {
-    const config = generateLiteConfig({
-      hasAntigravity: true,
-      hasKimi: true,
-      hasOpenAI: true,
-      hasChutes: true,
-      hasOpencodeZen: true,
-      useOpenCodeFreeModels: true,
-      selectedOpenCodePrimaryModel: 'opencode/glm-4.7-free',
-      selectedOpenCodeSecondaryModel: 'opencode/gpt-5-nano',
-      selectedChutesPrimaryModel: 'chutes/kimi-k2.5',
-      selectedChutesSecondaryModel: 'chutes/minimax-m2.1',
-      hasTmux: false,
-      installSkills: false,
-      installCustomSkills: false,
-    });
-
-    expect((config.fallback as any).enabled).toBe(true);
-    expect((config.fallback as any).timeoutMs).toBe(15000);
-    const chains = (config.fallback as any).chains;
-    expect(Object.keys(chains).sort()).toEqual([
-      'designer',
-      'explorer',
-      'fixer',
-      'librarian',
-      'oracle',
-      'orchestrator',
-    ]);
-    expect(chains.orchestrator).toContain('openai/gpt-5.3-codex');
-    expect(chains.orchestrator).toContain('kimi-for-coding/k2p5');
-    expect(chains.orchestrator).toContain('google/antigravity-gemini-3-flash');
-    expect(chains.orchestrator).toContain('chutes/kimi-k2.5');
-    expect(chains.orchestrator).toContain('opencode/glm-4.7-free');
-  });
-
-  test('generateLiteConfig generates zen-free preset when no providers selected', () => {
-    const config = generateLiteConfig({
-      hasAntigravity: false,
-      hasKimi: false,
-      hasOpenAI: false,
-      hasOpencodeZen: false,
-      hasTmux: false,
-      installSkills: false,
-      installCustomSkills: false,
-    });
-
-    expect(config.preset).toBe('zen-free');
-    const agents = (config.presets as any)['zen-free'];
-    expect(agents).toBeDefined();
-    expect(agents.orchestrator.model).toBe('opencode/big-pickle');
-    expect(agents.orchestrator.variant).toBeUndefined();
-    // Should NOT include other presets
-    expect((config.presets as any).kimi).toBeUndefined();
-    expect((config.presets as any).openai).toBeUndefined();
-  });
-
-  test('generateLiteConfig uses zen-free big-pickle models', () => {
-    const config = generateLiteConfig({
-      hasAntigravity: false,
-      hasKimi: false,
-      hasOpenAI: false,
-      hasOpencodeZen: true,
-      hasTmux: false,
-      installSkills: false,
-      installCustomSkills: false,
-    });
-
-    expect(config.preset).toBe('zen-free');
-    const agents = (config.presets as any)['zen-free'];
-    expect(agents.orchestrator.model).toBe('opencode/big-pickle');
-    expect(agents.oracle.model).toBe('opencode/big-pickle');
+    expect(agents.oracle.model).toBe('openai/gpt-5.4');
     expect(agents.oracle.variant).toBe('high');
-    expect(agents.librarian.model).toBe('opencode/big-pickle');
+    expect(agents.librarian.model).toBe('openai/gpt-5-codex');
     expect(agents.librarian.variant).toBe('low');
+    expect(agents.explorer.model).toBe('openai/gpt-5-codex');
+    expect(agents.explorer.variant).toBe('low');
+    expect(agents.designer.model).toBe('openai/gpt-5-codex');
+    expect(agents.designer.variant).toBe('medium');
   });
 
   test('generateLiteConfig enables tmux when requested', () => {
     const config = generateLiteConfig({
-      hasAntigravity: false,
-      hasKimi: false,
-      hasOpenAI: false,
-      hasOpencodeZen: false,
       hasTmux: true,
       installSkills: false,
       installCustomSkills: false,
@@ -238,20 +55,17 @@ describe('providers', () => {
 
     expect(config.tmux).toBeDefined();
     expect((config.tmux as any).enabled).toBe(true);
+    expect((config.tmux as any).layout).toBe('main-vertical');
   });
 
   test('generateLiteConfig includes default skills', () => {
     const config = generateLiteConfig({
-      hasAntigravity: false,
-      hasKimi: true,
-      hasOpenAI: false,
-      hasOpencodeZen: false,
       hasTmux: false,
       installSkills: true,
       installCustomSkills: false,
     });
 
-    const agents = (config.presets as any).kimi;
+    const agents = (config.presets as any).openai;
     // Orchestrator should always have '*'
     expect(agents.orchestrator.skills).toEqual(['*']);
 
@@ -264,258 +78,30 @@ describe('providers', () => {
 
   test('generateLiteConfig includes mcps field', () => {
     const config = generateLiteConfig({
-      hasAntigravity: false,
-      hasKimi: true,
-      hasOpenAI: false,
-      hasOpencodeZen: false,
       hasTmux: false,
       installSkills: false,
       installCustomSkills: false,
     });
 
-    const agents = (config.presets as any).kimi;
+    const agents = (config.presets as any).openai;
     expect(agents.orchestrator.mcps).toBeDefined();
     expect(Array.isArray(agents.orchestrator.mcps)).toBe(true);
     expect(agents.librarian.mcps).toBeDefined();
     expect(Array.isArray(agents.librarian.mcps)).toBe(true);
   });
 
-  test('generateLiteConfig applies OpenCode free model overrides in hybrid mode', () => {
+  test('generateLiteConfig openai includes correct mcps', () => {
     const config = generateLiteConfig({
-      hasAntigravity: false,
-      hasKimi: false,
-      hasOpenAI: true,
-      hasOpencodeZen: true,
-      useOpenCodeFreeModels: true,
-      selectedOpenCodePrimaryModel: 'opencode/glm-4.7-free',
-      selectedOpenCodeSecondaryModel: 'opencode/gpt-5-nano',
       hasTmux: false,
       installSkills: false,
       installCustomSkills: false,
     });
 
     const agents = (config.presets as any).openai;
-    expect(agents.orchestrator.model).toBe(
-      MODEL_MAPPINGS.openai.orchestrator.model,
-    );
-    expect(agents.oracle.model).toBe(MODEL_MAPPINGS.openai.oracle.model);
-    expect(agents.explorer.model).toBe('opencode/gpt-5-nano');
-    expect(agents.librarian.model).toBe('opencode/gpt-5-nano');
-    expect(agents.fixer.model).toBe('opencode/gpt-5-nano');
-  });
-
-  test('generateLiteConfig applies OpenCode free model overrides in OpenCode-only mode', () => {
-    const config = generateLiteConfig({
-      hasAntigravity: false,
-      hasKimi: false,
-      hasOpenAI: false,
-      hasOpencodeZen: true,
-      useOpenCodeFreeModels: true,
-      selectedOpenCodePrimaryModel: 'opencode/glm-4.7-free',
-      selectedOpenCodeSecondaryModel: 'opencode/gpt-5-nano',
-      hasTmux: false,
-      installSkills: false,
-      installCustomSkills: false,
-    });
-
-    const agents = (config.presets as any)['zen-free'];
-    expect(agents.orchestrator.model).toBe('opencode/glm-4.7-free');
-    expect(agents.oracle.model).toBe('opencode/glm-4.7-free');
-    expect(agents.designer.model).toBe('opencode/glm-4.7-free');
-    expect(agents.explorer.model).toBe('opencode/gpt-5-nano');
-    expect(agents.librarian.model).toBe('opencode/gpt-5-nano');
-    expect(agents.fixer.model).toBe('opencode/gpt-5-nano');
-  });
-
-  test('generateLiteConfig zen-free includes correct mcps', () => {
-    const config = generateLiteConfig({
-      hasAntigravity: false,
-      hasKimi: false,
-      hasOpenAI: false,
-      hasOpencodeZen: false,
-      hasTmux: false,
-      installSkills: false,
-      installCustomSkills: false,
-    });
-
-    const agents = (config.presets as any)['zen-free'];
     expect(agents.orchestrator.mcps).toContain('websearch');
     expect(agents.librarian.mcps).toContain('websearch');
     expect(agents.librarian.mcps).toContain('context7');
     expect(agents.librarian.mcps).toContain('grep_app');
     expect(agents.designer.mcps).toEqual([]);
   });
-
-  // Antigravity tests
-  describe('Antigravity presets', () => {
-    test('generateLiteConfig generates antigravity-mixed-both preset when all providers selected', () => {
-      const config = generateLiteConfig({
-        hasKimi: true,
-        hasOpenAI: true,
-        hasAntigravity: true,
-        hasOpencodeZen: false,
-        hasTmux: false,
-        installSkills: false,
-        installCustomSkills: false,
-      });
-
-      expect(config.preset).toBe('antigravity-mixed-both');
-      const agents = (config.presets as any)['antigravity-mixed-both'];
-      expect(agents).toBeDefined();
-
-      // Orchestrator should use Kimi
-      expect(agents.orchestrator.model).toBe('kimi-for-coding/k2p5');
-
-      // Oracle should use OpenAI
-      expect(agents.oracle.model).toBe('openai/gpt-5.3-codex');
-      expect(agents.oracle.variant).toBe('high');
-
-      // Explorer/Librarian/Designer use Antigravity Flash; Fixer prefers OpenAI
-      expect(agents.explorer.model).toBe('google/antigravity-gemini-3-flash');
-      expect(agents.explorer.variant).toBe('low');
-      expect(agents.librarian.model).toBe('google/antigravity-gemini-3-flash');
-      expect(agents.librarian.variant).toBe('low');
-      expect(agents.designer.model).toBe('google/antigravity-gemini-3-flash');
-      expect(agents.designer.variant).toBe('medium');
-      expect(agents.fixer.model).toBe('openai/gpt-5.3-codex');
-      expect(agents.fixer.variant).toBe('low');
-    });
-
-    test('generateLiteConfig generates antigravity-mixed-kimi preset when Kimi + Antigravity', () => {
-      const config = generateLiteConfig({
-        hasKimi: true,
-        hasOpenAI: false,
-        hasAntigravity: true,
-        hasOpencodeZen: false,
-        hasTmux: false,
-        installSkills: false,
-        installCustomSkills: false,
-      });
-
-      expect(config.preset).toBe('antigravity-mixed-kimi');
-      const agents = (config.presets as any)['antigravity-mixed-kimi'];
-      expect(agents).toBeDefined();
-
-      // Orchestrator should use Kimi
-      expect(agents.orchestrator.model).toBe('kimi-for-coding/k2p5');
-
-      // Oracle should use Antigravity (no OpenAI)
-      expect(agents.oracle.model).toBe('google/antigravity-gemini-3.1-pro');
-
-      // Others should use Antigravity Flash
-      expect(agents.explorer.model).toBe('google/antigravity-gemini-3-flash');
-      expect(agents.librarian.model).toBe('google/antigravity-gemini-3-flash');
-      expect(agents.designer.model).toBe('google/antigravity-gemini-3-flash');
-      expect(agents.fixer.model).toBe('google/antigravity-gemini-3-flash');
-    });
-
-    test('generateLiteConfig generates antigravity-mixed-openai preset when OpenAI + Antigravity', () => {
-      const config = generateLiteConfig({
-        hasKimi: false,
-        hasOpenAI: true,
-        hasAntigravity: true,
-        hasOpencodeZen: false,
-        hasTmux: false,
-        installSkills: false,
-        installCustomSkills: false,
-      });
-
-      expect(config.preset).toBe('antigravity-mixed-openai');
-      const agents = (config.presets as any)['antigravity-mixed-openai'];
-      expect(agents).toBeDefined();
-
-      // Orchestrator should use Antigravity (no Kimi)
-      expect(agents.orchestrator.model).toBe(
-        'google/antigravity-gemini-3-flash',
-      );
-
-      // Oracle should use OpenAI
-      expect(agents.oracle.model).toBe('openai/gpt-5.3-codex');
-      expect(agents.oracle.variant).toBe('high');
-
-      // Explorer/Librarian/Designer use Antigravity Flash; Fixer prefers OpenAI
-      expect(agents.explorer.model).toBe('google/antigravity-gemini-3-flash');
-      expect(agents.librarian.model).toBe('google/antigravity-gemini-3-flash');
-      expect(agents.designer.model).toBe('google/antigravity-gemini-3-flash');
-      expect(agents.fixer.model).toBe('openai/gpt-5.3-codex');
-    });
-
-    test('generateLiteConfig generates pure antigravity preset when only Antigravity', () => {
-      const config = generateLiteConfig({
-        hasKimi: false,
-        hasOpenAI: false,
-        hasAntigravity: true,
-        hasOpencodeZen: false,
-        hasTmux: false,
-        installSkills: false,
-        installCustomSkills: false,
-      });
-
-      expect(config.preset).toBe('antigravity');
-      const agents = (config.presets as any).antigravity;
-      expect(agents).toBeDefined();
-
-      // All agents should use Antigravity
-      expect(agents.orchestrator.model).toBe(
-        'google/antigravity-gemini-3-flash',
-      );
-      expect(agents.oracle.model).toBe('google/antigravity-gemini-3.1-pro');
-      expect(agents.explorer.model).toBe('google/antigravity-gemini-3-flash');
-      expect(agents.librarian.model).toBe('google/antigravity-gemini-3-flash');
-      expect(agents.designer.model).toBe('google/antigravity-gemini-3-flash');
-      expect(agents.fixer.model).toBe('google/antigravity-gemini-3-flash');
-    });
-
-    test('generateAntigravityMixedPreset respects Kimi for orchestrator', () => {
-      const preset = generateAntigravityMixedPreset({
-        hasKimi: true,
-        hasOpenAI: false,
-        hasAntigravity: true,
-        hasOpencodeZen: false,
-        hasTmux: false,
-        installSkills: false,
-        installCustomSkills: false,
-      });
-
-      expect((preset.orchestrator as any).model).toBe('kimi-for-coding/k2p5');
-    });
-
-    test('generateAntigravityMixedPreset respects OpenAI for oracle', () => {
-      const preset = generateAntigravityMixedPreset({
-        hasKimi: false,
-        hasOpenAI: true,
-        hasAntigravity: true,
-        hasOpencodeZen: false,
-        hasTmux: false,
-        installSkills: false,
-        installCustomSkills: false,
-      });
-
-      expect((preset.oracle as any).model).toBe('openai/gpt-5.3-codex');
-      expect((preset.oracle as any).variant).toBe('high');
-    });
-
-    test('generateAntigravityMixedPreset uses OpenAI fixer and Antigravity support defaults', () => {
-      const preset = generateAntigravityMixedPreset({
-        hasKimi: true,
-        hasOpenAI: true,
-        hasAntigravity: true,
-        hasOpencodeZen: false,
-        hasTmux: false,
-        installSkills: false,
-        installCustomSkills: false,
-      });
-
-      expect((preset.explorer as any).model).toBe(
-        'google/antigravity-gemini-3-flash',
-      );
-      expect((preset.librarian as any).model).toBe(
-        'google/antigravity-gemini-3-flash',
-      );
-      expect((preset.designer as any).model).toBe(
-        'google/antigravity-gemini-3-flash',
-      );
-      expect((preset.fixer as any).model).toBe('openai/gpt-5.3-codex');
-    });
-  });
 });

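Taken together, the rewritten tests pin down the shape of the config the simplified `generateLiteConfig` now emits. An illustrative literal assembled only from the assertions above (not the real module's output; fields the tests don't assert are omitted):

```typescript
// Config shape implied by the updated providers.test.ts assertions.
const expectedShape = {
  preset: 'openai',
  presets: {
    openai: {
      orchestrator: { model: 'openai/gpt-5.4', skills: ['*'], mcps: ['websearch'] },
      oracle: { model: 'openai/gpt-5.4', variant: 'high' },
      librarian: { model: 'openai/gpt-5-codex', variant: 'low' },
      explorer: { model: 'openai/gpt-5-codex', variant: 'low' },
      designer: { model: 'openai/gpt-5-codex', variant: 'medium' },
      fixer: { model: 'openai/gpt-5-codex', variant: 'low' },
    },
  },
};
```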
src/cli/providers.ts (+26 −529)

@@ -13,8 +13,16 @@ const AGENT_NAMES = [
 
 type AgentName = (typeof AGENT_NAMES)[number];
 
-// Model mappings by provider priority
+// Model mappings by provider - only 4 supported providers
 export const MODEL_MAPPINGS = {
+  openai: {
+    orchestrator: { model: 'openai/gpt-5.4' },
+    oracle: { model: 'openai/gpt-5.4', variant: 'high' },
+    librarian: { model: 'openai/gpt-5-codex', variant: 'low' },
+    explorer: { model: 'openai/gpt-5-codex', variant: 'low' },
+    designer: { model: 'openai/gpt-5-codex', variant: 'medium' },
+    fixer: { model: 'openai/gpt-5-codex', variant: 'low' },
+  },
   kimi: {
     orchestrator: { model: 'kimi-for-coding/k2p5' },
     oracle: { model: 'kimi-for-coding/k2p5', variant: 'high' },
@@ -23,310 +31,38 @@ export const MODEL_MAPPINGS = {
     designer: { model: 'kimi-for-coding/k2p5', variant: 'medium' },
     fixer: { model: 'kimi-for-coding/k2p5', variant: 'low' },
   },
-  openai: {
-    orchestrator: { model: 'openai/gpt-5.3-codex' },
-    oracle: { model: 'openai/gpt-5.3-codex', variant: 'high' },
-    librarian: { model: 'openai/gpt-5.1-codex-mini', variant: 'low' },
-    explorer: { model: 'openai/gpt-5.1-codex-mini', variant: 'low' },
-    designer: { model: 'openai/gpt-5.1-codex-mini', variant: 'medium' },
-    fixer: { model: 'openai/gpt-5.1-codex-mini', variant: 'low' },
-  },
-  anthropic: {
-    orchestrator: { model: 'anthropic/claude-opus-4-6' },
-    oracle: { model: 'anthropic/claude-opus-4-6', variant: 'high' },
-    librarian: { model: 'anthropic/claude-sonnet-4-5', variant: 'low' },
-    explorer: { model: 'anthropic/claude-haiku-4-5', variant: 'low' },
-    designer: { model: 'anthropic/claude-sonnet-4-5', variant: 'medium' },
-    fixer: { model: 'anthropic/claude-sonnet-4-5', variant: 'low' },
-  },
   copilot: {
-    orchestrator: { model: 'github-copilot/grok-code-fast-1' },
-    oracle: { model: 'github-copilot/grok-code-fast-1', variant: 'high' },
+    orchestrator: { model: 'github-copilot/claude-opus-4.6' },
+    oracle: { model: 'github-copilot/claude-opus-4.6', variant: 'high' },
     librarian: { model: 'github-copilot/grok-code-fast-1', variant: 'low' },
     explorer: { model: 'github-copilot/grok-code-fast-1', variant: 'low' },
-    designer: { model: 'github-copilot/grok-code-fast-1', variant: 'medium' },
-    fixer: { model: 'github-copilot/grok-code-fast-1', variant: 'low' },
+    designer: { model: 'github-copilot/gemini-3.1-pro-preview', variant: 'medium' },
+    fixer: { model: 'github-copilot/claude-sonnet-4.6', variant: 'low' },
   },
   'zai-plan': {
-    orchestrator: { model: 'zai-coding-plan/glm-4.7' },
-    oracle: { model: 'zai-coding-plan/glm-4.7', variant: 'high' },
-    librarian: { model: 'zai-coding-plan/glm-4.7', variant: 'low' },
-    explorer: { model: 'zai-coding-plan/glm-4.7', variant: 'low' },
-    designer: { model: 'zai-coding-plan/glm-4.7', variant: 'medium' },
-    fixer: { model: 'zai-coding-plan/glm-4.7', variant: 'low' },
-  },
-  antigravity: {
-    orchestrator: { model: 'google/antigravity-gemini-3-flash' },
-    oracle: { model: 'google/antigravity-gemini-3.1-pro' },
-    librarian: {
-      model: 'google/antigravity-gemini-3-flash',
-      variant: 'low',
-    },
-    explorer: {
-      model: 'google/antigravity-gemini-3-flash',
-      variant: 'low',
-    },
-    designer: {
-      model: 'google/antigravity-gemini-3-flash',
-      variant: 'medium',
-    },
-    fixer: { model: 'google/antigravity-gemini-3-flash', variant: 'low' },
-  },
-  chutes: {
-    orchestrator: { model: 'chutes/kimi-k2.5' },
-    oracle: { model: 'chutes/kimi-k2.5', variant: 'high' },
-    librarian: { model: 'chutes/minimax-m2.1', variant: 'low' },
-    explorer: { model: 'chutes/minimax-m2.1', variant: 'low' },
-    designer: { model: 'chutes/kimi-k2.5', variant: 'medium' },
-    fixer: { model: 'chutes/minimax-m2.1', variant: 'low' },
-  },
-  'zen-free': {
-    orchestrator: { model: 'opencode/big-pickle' },
-    oracle: { model: 'opencode/big-pickle', variant: 'high' },
-    librarian: { model: 'opencode/big-pickle', variant: 'low' },
-    explorer: { model: 'opencode/big-pickle', variant: 'low' },
-    designer: { model: 'opencode/big-pickle', variant: 'medium' },
-    fixer: { model: 'opencode/big-pickle', variant: 'low' },
+    orchestrator: { model: 'zai-coding-plan/glm-5' },
+    oracle: { model: 'zai-coding-plan/glm-5', variant: 'high' },
+    librarian: { model: 'zai-coding-plan/glm-5', variant: 'low' },
+    explorer: { model: 'zai-coding-plan/glm-5', variant: 'low' },
+    designer: { model: 'zai-coding-plan/glm-5', variant: 'medium' },
+    fixer: { model: 'zai-coding-plan/glm-5', variant: 'low' },
   },
 } as const;
 
-export function generateAntigravityMixedPreset(
-  config: InstallConfig,
-  existingPreset?: Record<string, unknown>,
-): Record<string, unknown> {
-  const result: Record<string, unknown> = existingPreset
-    ? { ...existingPreset }
-    : {};
-
-  const createAgentConfig = (
-    agentName: string,
-    modelInfo: { model: string; variant?: string },
-  ) => {
-    const isOrchestrator = agentName === 'orchestrator';
-
-    // Skills: orchestrator gets "*", others get recommended skills for their role
-    const skills = isOrchestrator
-      ? ['*']
-      : RECOMMENDED_SKILLS.filter(
-          (s) =>
-            s.allowedAgents.includes('*') ||
-            s.allowedAgents.includes(agentName),
-        ).map((s) => s.skillName);
-
-    // Special case for designer and agent-browser skill
-    if (agentName === 'designer' && !skills.includes('agent-browser')) {
-      skills.push('agent-browser');
-    }
-
-    return {
-      model: modelInfo.model,
-      variant: modelInfo.variant,
-      skills,
-      mcps:
-        DEFAULT_AGENT_MCPS[agentName as keyof typeof DEFAULT_AGENT_MCPS] ?? [],
-    };
-  };
-
-  const antigravityFlash = {
-    model: 'google/antigravity-gemini-3-flash',
-  };
-
-  const chutesPrimary =
-    config.selectedChutesPrimaryModel ??
-    MODEL_MAPPINGS.chutes.orchestrator.model;
-  const chutesSupport =
-    config.selectedChutesSecondaryModel ?? MODEL_MAPPINGS.chutes.explorer.model;
-
-  // Orchestrator: Kimi if hasKimi, else Chutes Kimi if enabled, else antigravity
-  if (config.hasKimi) {
-    result.orchestrator = createAgentConfig(
-      'orchestrator',
-      MODEL_MAPPINGS.kimi.orchestrator,
-    );
-  } else if (config.hasChutes) {
-    result.orchestrator = createAgentConfig('orchestrator', {
-      model: chutesPrimary,
-    });
-  } else if (!result.orchestrator) {
-    result.orchestrator = createAgentConfig(
-      'orchestrator',
-      MODEL_MAPPINGS.antigravity.orchestrator,
-    );
-  }
-
-  // Oracle: GPT if hasOpenAI, else keep existing if exists, else antigravity
-  if (config.hasOpenAI) {
-    result.oracle = createAgentConfig('oracle', MODEL_MAPPINGS.openai.oracle);
-  } else if (!result.oracle) {
-    result.oracle = createAgentConfig(
-      'oracle',
-      MODEL_MAPPINGS.antigravity.oracle,
-    );
-  }
-
-  // Explorer stays flash-first for speed.
-  result.explorer = createAgentConfig('explorer', {
-    ...antigravityFlash,
-    variant: 'low',
-  });
-
-  // Librarian/Designer prefer Kimi-K2.5 via Chutes when available.
-  if (config.hasChutes) {
-    result.librarian = createAgentConfig('librarian', {
-      model: chutesSupport,
-      variant: 'low',
-    });
-    result.designer = createAgentConfig('designer', {
-      model: chutesPrimary,
-      variant: 'medium',
-    });
-  } else {
-    result.librarian = createAgentConfig('librarian', {
-      ...antigravityFlash,
-      variant: 'low',
-    });
-    result.designer = createAgentConfig('designer', {
-      ...antigravityFlash,
-      variant: 'medium',
-    });
-  }
-
-  // Fixer prefers OpenAI codex when available.
-  if (config.hasOpenAI) {
-    result.fixer = createAgentConfig('fixer', {
-      ...MODEL_MAPPINGS.openai.oracle,
-      variant: 'low',
-    });
-  } else if (config.hasChutes) {
-    result.fixer = createAgentConfig('fixer', {
-      model: chutesSupport,
-      variant: 'low',
-    });
-  } else {
-    result.fixer = createAgentConfig('fixer', {
-      ...antigravityFlash,
-      variant: 'low',
-    });
-  }
-
-  return result;
-}
-
 export function generateLiteConfig(
   installConfig: InstallConfig,
 ): Record<string, unknown> {
   const config: Record<string, unknown> = {
-    preset: 'zen-free',
+    preset: 'openai',
     presets: {},
-    balanceProviderUsage: installConfig.balanceProviderUsage ?? false,
   };
 
-  // Handle manual configuration mode
-  if (
-    installConfig.setupMode === 'manual' &&
-    installConfig.manualAgentConfigs
-  ) {
-    config.preset = 'manual';
-    const manualPreset: Record<string, unknown> = {};
-    const chains: Record<string, string[]> = {};
-
-    for (const agentName of AGENT_NAMES) {
-      const manualConfig = installConfig.manualAgentConfigs[agentName];
-      if (manualConfig) {
-        manualPreset[agentName] = {
-          model: manualConfig.primary,
-          skills:
-            agentName === 'orchestrator'
-              ? ['*']
-              : RECOMMENDED_SKILLS.filter(
-                  (s) =>
-                    s.allowedAgents.includes('*') ||
-                    s.allowedAgents.includes(agentName),
-                ).map((s) => s.skillName),
-          mcps:
-            DEFAULT_AGENT_MCPS[agentName as keyof typeof DEFAULT_AGENT_MCPS] ??
-            [],
-        };
-
-        // Build fallback chain from manual config
-        const fallbackChain = [
-          manualConfig.primary,
-          manualConfig.fallback1,
-          manualConfig.fallback2,
-          manualConfig.fallback3,
-        ].filter((m, i, arr) => m && arr.indexOf(m) === i); // dedupe
-        chains[agentName] = fallbackChain;
-      }
-    }
-
-    (config.presets as Record<string, unknown>).manual = manualPreset;
-    config.fallback = {
-      enabled: true,
-      timeoutMs: 15000,
-      chains,
-    };
-
-    if (installConfig.hasTmux) {
-      config.tmux = {
-        enabled: true,
-        layout: 'main-vertical',
-        main_pane_size: 60,
-      };
-    }
-
-    return config;
-  }
-
-  // Determine active preset name
-  let activePreset:
-    | 'kimi'
-    | 'openai'
-    | 'anthropic'
-    | 'copilot'
-    | 'zai-plan'
-    | 'antigravity'
-    | 'chutes'
-    | 'antigravity-mixed-both'
-    | 'antigravity-mixed-kimi'
-    | 'antigravity-mixed-openai'
-    | 'zen-free' = 'zen-free';
-
-  // Antigravity mixed presets have priority
-  if (
-    installConfig.hasAntigravity &&
-    installConfig.hasKimi &&
-    installConfig.hasOpenAI
-  ) {
-    activePreset = 'antigravity-mixed-both';
-  } else if (installConfig.hasAntigravity && installConfig.hasKimi) {
-    activePreset = 'antigravity-mixed-kimi';
-  } else if (installConfig.hasAntigravity && installConfig.hasOpenAI) {
-    activePreset = 'antigravity-mixed-openai';
-  } else if (installConfig.hasAntigravity) {
-    activePreset = 'antigravity';
-  } else if (installConfig.hasKimi) {
-    activePreset = 'kimi';
-  } else if (installConfig.hasOpenAI) {
-    activePreset = 'openai';
-  } else if (installConfig.hasAnthropic) {
-    activePreset = 'anthropic';
-  } else if (installConfig.hasCopilot) {
-    activePreset = 'copilot';
-  } else if (installConfig.hasZaiPlan) {
-    activePreset = 'zai-plan';
-  } else if (installConfig.hasChutes) {
-    activePreset = 'chutes';
-  }
-
-  config.preset = activePreset;
-
   const createAgentConfig = (
     agentName: string,
     modelInfo: { model: string; variant?: string },
   ) => {
     const isOrchestrator = agentName === 'orchestrator';
 
-    // Skills: orchestrator gets "*", others get recommended skills for their role
     const skills = isOrchestrator
       ? ['*']
       : RECOMMENDED_SKILLS.filter(
@@ -335,7 +71,6 @@ export function generateLiteConfig(
             s.allowedAgents.includes(agentName),
         ).map((s) => s.skillName);
 
-    // Special case for designer and agent-browser skill
     if (agentName === 'designer' && !skills.includes('agent-browser')) {
       skills.push('agent-browser');
     }
@@ -349,256 +84,18 @@ export function generateLiteConfig(
     };
   };
 
-  if (installConfig.dynamicModelPlan) {
-    const dynamicPreset = Object.fromEntries(
-      Object.entries(installConfig.dynamicModelPlan.agents).map(
-        ([agentName, assignment]) => [
-          agentName,
-          createAgentConfig(
-            agentName,
-            assignment as { model: string; variant?: string },
-          ),
-        ],
-      ),
-    );
-
-    config.preset = 'dynamic';
-    (config.presets as Record<string, unknown>).dynamic = dynamicPreset;
-    config.fallback = {
-      enabled: true,
-      timeoutMs: 15000,
-      chains: installConfig.dynamicModelPlan.chains,
-    };
-
-    if (installConfig.hasTmux) {
-      config.tmux = {
-        enabled: true,
-        layout: 'main-vertical',
-        main_pane_size: 60,
-      };
-    }
-
-    return config;
-  }
-
-  const applyOpenCodeFreeAssignments = (
-    presetAgents: Record<string, unknown>,
-    hasExternalProviders: boolean,
-  ) => {
-    if (!installConfig.useOpenCodeFreeModels) return;
-
-    const primaryModel = installConfig.selectedOpenCodePrimaryModel;
-    const secondaryModel =
-      installConfig.selectedOpenCodeSecondaryModel ?? primaryModel;
-
-    if (!primaryModel || !secondaryModel) return;
-
-    const setAgent = (agentName: string, model: string) => {
-      presetAgents[agentName] = createAgentConfig(agentName, { model });
-    };
-
-    if (!hasExternalProviders) {
-      setAgent('orchestrator', primaryModel);
-      setAgent('oracle', primaryModel);
-      setAgent('designer', primaryModel);
-    }
-
-    setAgent('librarian', secondaryModel);
-    setAgent('explorer', secondaryModel);
-    setAgent('fixer', secondaryModel);
-  };
-
-  const applyChutesAssignments = (presetAgents: Record<string, unknown>) => {
-    if (!installConfig.hasChutes) return;
-
-    const hasExternalProviders =
-      installConfig.hasKimi ||
-      installConfig.hasOpenAI ||
-      installConfig.hasAnthropic ||
-      installConfig.hasCopilot ||
-      installConfig.hasZaiPlan ||
-      installConfig.hasAntigravity;
-
-    if (hasExternalProviders && activePreset !== 'chutes') return;
-
-    const primaryModel = installConfig.selectedChutesPrimaryModel;
-    const secondaryModel =
-      installConfig.selectedChutesSecondaryModel ?? primaryModel;
-
-    if (!primaryModel || !secondaryModel) return;
-
-    const setAgent = (agentName: string, model: string) => {
-      presetAgents[agentName] = createAgentConfig(agentName, { model });
-    };
-
-    setAgent('orchestrator', primaryModel);
-    setAgent('oracle', primaryModel);
-    setAgent('designer', primaryModel);
-    setAgent('librarian', secondaryModel);
-    setAgent('explorer', secondaryModel);
-    setAgent('fixer', secondaryModel);
-  };
-
-  const dedupeModels = (models: Array<string | undefined>) => {
-    const seen = new Set<string>();
-    const result: string[] = [];
-
-    for (const model of models) {
-      if (!model || seen.has(model)) continue;
-      seen.add(model);
-      result.push(model);
-    }
-
-    return result;
-  };
-
-  const getOpenCodeFallbackForAgent = (agentName: AgentName) => {
-    if (!installConfig.useOpenCodeFreeModels) return undefined;
-    const isSupport =
-      agentName === 'explorer' ||
-      agentName === 'librarian' ||
-      agentName === 'fixer';
-    if (isSupport) {
-      return (
-        installConfig.selectedOpenCodeSecondaryModel ??
-        installConfig.selectedOpenCodePrimaryModel
-      );
-    }
-    return installConfig.selectedOpenCodePrimaryModel;
-  };
-
-  const getChutesFallbackForAgent = (agentName: AgentName) => {
-    if (!installConfig.hasChutes) return undefined;
-    const isSupport =
-      agentName === 'explorer' ||
-      agentName === 'librarian' ||
-      agentName === 'fixer';
-    if (isSupport) {
-      return (
-        installConfig.selectedChutesSecondaryModel ??
-        installConfig.selectedChutesPrimaryModel ??
-        MODEL_MAPPINGS.chutes[agentName].model
-      );
-    }
-    return (
-      installConfig.selectedChutesPrimaryModel ??
-      MODEL_MAPPINGS.chutes[agentName].model
-    );
-  };
-
-  const attachFallbackConfig = (presetAgents: Record<string, unknown>) => {
-    const chains: Partial<Record<AgentName, string[]>> = {};
-
-    for (const agentName of AGENT_NAMES) {
-      const currentModel = (
-        presetAgents[agentName] as { model?: string } | undefined
-      )?.model;
-
-      const chain = dedupeModels([
-        currentModel,
-        installConfig.hasOpenAI
-          ? MODEL_MAPPINGS.openai[agentName].model
-          : undefined,
-        installConfig.hasAnthropic
-          ? MODEL_MAPPINGS.anthropic[agentName].model
-          : undefined,
-        installConfig.hasCopilot
-          ? MODEL_MAPPINGS.copilot[agentName].model
-          : undefined,
-        installConfig.hasZaiPlan
-          ? MODEL_MAPPINGS['zai-plan'][agentName].model
-          : undefined,
-        installConfig.hasKimi
-          ? MODEL_MAPPINGS.kimi[agentName].model
-          : undefined,
-        installConfig.hasAntigravity
-          ? MODEL_MAPPINGS.antigravity[agentName].model
-          : undefined,
-        getChutesFallbackForAgent(agentName),
-        getOpenCodeFallbackForAgent(agentName),
-        MODEL_MAPPINGS['zen-free'][agentName].model,
-      ]);
-
-      if (chain.length > 0) {
-        chains[agentName] = chain;
-      }
-    }
-
-    config.fallback = {
-      enabled: true,
-      timeoutMs: 15000,
-      chains,
-    };
-  };
-
   const buildPreset = (mappingName: keyof typeof MODEL_MAPPINGS) => {
     const mapping = MODEL_MAPPINGS[mappingName];
     return Object.fromEntries(
-      Object.entries(mapping).map(([agentName, modelInfo]) => {
-        let activeModelInfo = { ...modelInfo };
-
-        // Hybrid case: Kimi + OpenAI (use OpenAI for Oracle, Kimi for orchestrator/designer)
-        if (
-          activePreset === 'kimi' &&
-          installConfig.hasOpenAI &&
-          agentName === 'oracle'
-        ) {
-          activeModelInfo = { ...MODEL_MAPPINGS.openai.oracle };
-        }
-
-        return [agentName, createAgentConfig(agentName, activeModelInfo)];
-      }),
+      Object.entries(mapping).map(([agentName, modelInfo]) => [
+        agentName,
+        createAgentConfig(agentName, modelInfo),
+      ]),
     );
   };
 
-  // Build preset based on type
-  if (
-    activePreset === 'antigravity-mixed-both' ||
-    activePreset === 'antigravity-mixed-kimi' ||
-    activePreset === 'antigravity-mixed-openai'
-  ) {
-    // Use dedicated mixed preset generator
-    (config.presets as Record<string, unknown>)[activePreset] =
-      generateAntigravityMixedPreset(installConfig);
-
-    applyOpenCodeFreeAssignments(
-      (config.presets as Record<string, Record<string, unknown>>)[activePreset],
-      installConfig.hasKimi ||
-        installConfig.hasOpenAI ||
-        installConfig.hasAnthropic ||
-        installConfig.hasCopilot ||
-        installConfig.hasZaiPlan ||
-        installConfig.hasAntigravity ||
-        installConfig.hasChutes === true,
-    );
-    applyChutesAssignments(
-      (config.presets as Record<string, Record<string, unknown>>)[activePreset],
-    );
-    attachFallbackConfig(
-      (config.presets as Record<string, Record<string, unknown>>)[activePreset],
-    );
-  } else {
-    // Use standard buildPreset for pure presets
-    (config.presets as Record<string, unknown>)[activePreset] =
-      buildPreset(activePreset);
-
-    applyOpenCodeFreeAssignments(
-      (config.presets as Record<string, Record<string, unknown>>)[activePreset],
-      installConfig.hasKimi ||
-        installConfig.hasOpenAI ||
-        installConfig.hasAnthropic ||
-        installConfig.hasCopilot ||
-        installConfig.hasZaiPlan ||
-        installConfig.hasAntigravity ||
-        installConfig.hasChutes === true,
-    );
-    applyChutesAssignments(
-      (config.presets as Record<string, Record<string, unknown>>)[activePreset],
-    );
-    attachFallbackConfig(
-      (config.presets as Record<string, Record<string, unknown>>)[activePreset],
-    );
-  }
+  // Always use OpenAI as default
+  (config.presets as Record<string, unknown>).openai = buildPreset('openai');
 
   if (installConfig.hasTmux) {
     config.tmux = {

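For reference, the simplified `generateLiteConfig` now always emits a single `openai` preset. A minimal sketch of the resulting shape (the agent entries, skills lists, and model ids below are illustrative; the real values come from `MODEL_MAPPINGS.openai`, `RECOMMENDED_SKILLS`, and `DEFAULT_AGENT_MCPS`):

```typescript
// Hypothetical sketch of the simplified output; not the literal generated file.
const liteConfig = {
  preset: 'openai',
  presets: {
    openai: {
      // orchestrator always gets the '*' skills wildcard
      orchestrator: { model: 'openai/gpt-5.3-codex', skills: ['*'], mcps: [] },
      // other agents get role-filtered skills (empty here for brevity)
      explorer: { model: 'openai/gpt-5.3-codex', skills: [], mcps: [] },
    },
  },
  // emitted only when installConfig.hasTmux is true
  tmux: { enabled: true, layout: 'main-vertical', main_pane_size: 60 },
};
console.log(liteConfig.preset); // 'openai'
```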
src/cli/scoring-v2/engine.test.ts (+0 -155)

@@ -1,155 +0,0 @@
-/// <reference types="bun-types" />
-
-import { describe, expect, test } from 'bun:test';
-import type { DiscoveredModel, ExternalSignalMap } from '../types';
-import { rankModelsV2, scoreCandidateV2 } from './engine';
-
-function model(
-  input: Partial<DiscoveredModel> & { model: string },
-): DiscoveredModel {
-  const [providerID] = input.model.split('/');
-  return {
-    providerID: providerID ?? 'openai',
-    model: input.model,
-    name: input.name ?? input.model,
-    status: input.status ?? 'active',
-    contextLimit: input.contextLimit ?? 200000,
-    outputLimit: input.outputLimit ?? 32000,
-    reasoning: input.reasoning ?? true,
-    toolcall: input.toolcall ?? true,
-    attachment: input.attachment ?? false,
-    dailyRequestLimit: input.dailyRequestLimit,
-    costInput: input.costInput,
-    costOutput: input.costOutput,
-  };
-}
-
-describe('scoring-v2', () => {
-  test('returns explain breakdown with deterministic total', () => {
-    const candidate = model({ model: 'openai/gpt-5.3-codex' });
-    const signalMap: ExternalSignalMap = {
-      'openai/gpt-5.3-codex': {
-        source: 'artificial-analysis',
-        qualityScore: 70,
-        codingScore: 75,
-        latencySeconds: 1.2,
-        inputPricePer1M: 1,
-        outputPricePer1M: 3,
-      },
-    };
-
-    const first = scoreCandidateV2(candidate, 'oracle', signalMap);
-    const second = scoreCandidateV2(candidate, 'oracle', signalMap);
-
-    expect(first.totalScore).toBe(second.totalScore);
-    expect(first.scoreBreakdown.features.quality).toBe(0.7);
-    expect(first.scoreBreakdown.weighted.coding).toBeGreaterThan(0);
-  });
-
-  test('uses stable tie-break when scores are equal', () => {
-    const ranked = rankModelsV2(
-      [
-        model({ model: 'zai-coding-plan/glm-4.7', reasoning: false }),
-        model({ model: 'openai/gpt-5.3-codex', reasoning: false }),
-      ],
-      'explorer',
-    );
-
-    expect(ranked[0]?.model.providerID).toBe('openai');
-    expect(ranked[1]?.model.providerID).toBe('zai-coding-plan');
-  });
-
-  test('matches external signals for multi-segment chutes ids', () => {
-    const candidate = model({
-      model: 'chutes/Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8-TEE',
-    });
-    const signalMap: ExternalSignalMap = {
-      'qwen/qwen3-coder-480b-a35b-instruct': {
-        source: 'artificial-analysis',
-        qualityScore: 95,
-        codingScore: 92,
-      },
-    };
-
-    const scored = scoreCandidateV2(candidate, 'fixer', signalMap);
-    expect(scored.scoreBreakdown.features.quality).toBe(0.95);
-    expect(scored.scoreBreakdown.features.coding).toBe(0.92);
-  });
-
-  test('applies designer output threshold rule', () => {
-    const belowThreshold = model({
-      model: 'chutes/moonshotai/Kimi-K2.5-TEE',
-      outputLimit: 63999,
-    });
-    const aboveThreshold = model({
-      model: 'zai-coding-plan/glm-4.7',
-      outputLimit: 64000,
-    });
-
-    const low = scoreCandidateV2(belowThreshold, 'designer');
-    const high = scoreCandidateV2(aboveThreshold, 'designer');
-
-    expect(low.scoreBreakdown.features.output).toBe(-1);
-    expect(low.scoreBreakdown.weighted.output).toBe(-10);
-    expect(high.scoreBreakdown.features.output).toBe(0);
-    expect(high.scoreBreakdown.weighted.output).toBe(0);
-  });
-
-  test('prefers kimi k2.5 over kimi k2 when otherwise equal', () => {
-    const ranked = rankModelsV2(
-      [
-        model({
-          model: 'chutes/moonshotai/Kimi-K2-TEE',
-          contextLimit: 262144,
-          outputLimit: 65535,
-          reasoning: true,
-          toolcall: true,
-          attachment: false,
-        }),
-        model({
-          model: 'chutes/moonshotai/Kimi-K2.5-TEE',
-          contextLimit: 262144,
-          outputLimit: 65535,
-          reasoning: true,
-          toolcall: true,
-          attachment: false,
-        }),
-      ],
-      'designer',
-    );
-
-    expect(ranked[0]?.model.model).toBe('chutes/moonshotai/Kimi-K2.5-TEE');
-    expect(ranked[1]?.model.model).toBe('chutes/moonshotai/Kimi-K2-TEE');
-  });
-
-  test('downranks chutes qwen3 against kimi/minimax priors', () => {
-    const ranked = rankModelsV2(
-      [
-        model({
-          model: 'chutes/Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8-TEE',
-          contextLimit: 262144,
-          outputLimit: 262144,
-          reasoning: true,
-          toolcall: true,
-        }),
-        model({
-          model: 'chutes/moonshotai/Kimi-K2.5-TEE',
-          contextLimit: 262144,
-          outputLimit: 65535,
-          reasoning: true,
-          toolcall: true,
-        }),
-        model({
-          model: 'chutes/minimax-m2.1',
-          contextLimit: 500000,
-          outputLimit: 64000,
-          reasoning: true,
-          toolcall: true,
-        }),
-      ],
-      'fixer',
-    );
-
-    expect(ranked[0]?.model.model).not.toContain('Qwen3-Coder-480B');
-  });
-});

src/cli/scoring-v2/engine.ts (+0 -91)

@@ -1,91 +0,0 @@
-import type { DiscoveredModel, ExternalSignalMap } from '../types';
-import { extractFeatureVector } from './features';
-import type {
-  FeatureVector,
-  FeatureWeights,
-  ScoredCandidate,
-  ScoringAgentName,
-} from './types';
-import { getFeatureWeights } from './weights';
-
-function weightedFeatures(
-  features: FeatureVector,
-  weights: FeatureWeights,
-): FeatureVector {
-  return {
-    status: features.status * weights.status,
-    context: features.context * weights.context,
-    output: features.output * weights.output,
-    versionBonus: features.versionBonus * weights.versionBonus,
-    reasoning: features.reasoning * weights.reasoning,
-    toolcall: features.toolcall * weights.toolcall,
-    attachment: features.attachment * weights.attachment,
-    quality: features.quality * weights.quality,
-    coding: features.coding * weights.coding,
-    latencyPenalty: features.latencyPenalty * weights.latencyPenalty,
-    pricePenalty: features.pricePenalty * weights.pricePenalty,
-  };
-}
-
-function sumFeatures(features: FeatureVector): number {
-  return (
-    features.status +
-    features.context +
-    features.output +
-    features.versionBonus +
-    features.reasoning +
-    features.toolcall +
-    features.attachment +
-    features.quality +
-    features.coding +
-    features.latencyPenalty +
-    features.pricePenalty
-  );
-}
-
-function withStableTieBreak(
-  left: ScoredCandidate,
-  right: ScoredCandidate,
-): number {
-  if (left.totalScore !== right.totalScore) {
-    return right.totalScore - left.totalScore;
-  }
-
-  const providerDelta = left.model.providerID.localeCompare(
-    right.model.providerID,
-  );
-  if (providerDelta !== 0) {
-    return providerDelta;
-  }
-
-  return left.model.model.localeCompare(right.model.model);
-}
-
-export function scoreCandidateV2(
-  model: DiscoveredModel,
-  agent: ScoringAgentName,
-  externalSignals?: ExternalSignalMap,
-): ScoredCandidate {
-  const features = extractFeatureVector(model, agent, externalSignals);
-  const weights = getFeatureWeights(agent);
-  const weighted = weightedFeatures(features, weights);
-
-  return {
-    model,
-    totalScore: Math.round(sumFeatures(weighted) * 1000) / 1000,
-    scoreBreakdown: {
-      features,
-      weighted,
-    },
-  };
-}
-
-export function rankModelsV2(
-  models: DiscoveredModel[],
-  agent: ScoringAgentName,
-  externalSignals?: ExternalSignalMap,
-): ScoredCandidate[] {
-  return models
-    .map((model) => scoreCandidateV2(model, agent, externalSignals))
-    .sort(withStableTieBreak);
-}

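The removed ranker broke ties deterministically: score descending, then `providerID`, then model id — which is why the deleted test above expected `openai` to sort ahead of `zai-coding-plan` at equal scores. A self-contained re-sketch of that comparator:

```typescript
// Re-sketch of the deleted withStableTieBreak comparator.
type Scored = { totalScore: number; providerID: string; model: string };

function tieBreak(a: Scored, b: Scored): number {
  if (a.totalScore !== b.totalScore) return b.totalScore - a.totalScore;
  const d = a.providerID.localeCompare(b.providerID);
  return d !== 0 ? d : a.model.localeCompare(b.model);
}

const ranked = [
  { totalScore: 1, providerID: 'zai-coding-plan', model: 'zai-coding-plan/glm-4.7' },
  { totalScore: 1, providerID: 'openai', model: 'openai/gpt-5.3-codex' },
].sort(tieBreak);

console.log(ranked[0].providerID); // 'openai'
```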
src/cli/scoring-v2/features.ts (+0 -116)

@@ -1,116 +0,0 @@
-import { buildModelKeyAliases } from '../model-key-normalization';
-import type {
-  DiscoveredModel,
-  ExternalModelSignal,
-  ExternalSignalMap,
-} from '../types';
-import type { FeatureVector, ScoringAgentName } from './types';
-
-function modelLookupKeys(model: DiscoveredModel): string[] {
-  return buildModelKeyAliases(model.model);
-}
-
-function findSignal(
-  model: DiscoveredModel,
-  externalSignals?: ExternalSignalMap,
-): ExternalModelSignal | undefined {
-  if (!externalSignals) return undefined;
-  return modelLookupKeys(model)
-    .map((key) => externalSignals[key])
-    .find((item) => item !== undefined);
-}
-
-function statusValue(status: DiscoveredModel['status']): number {
-  if (status === 'active') return 1;
-  if (status === 'beta') return 0.4;
-  if (status === 'alpha') return -0.25;
-  return -1;
-}
-
-function capability(value: boolean): number {
-  return value ? 1 : 0;
-}
-
-function blendedPrice(signal: ExternalModelSignal | undefined): number {
-  if (!signal) return 0;
-  if (
-    signal.inputPricePer1M !== undefined &&
-    signal.outputPricePer1M !== undefined
-  ) {
-    return signal.inputPricePer1M * 0.75 + signal.outputPricePer1M * 0.25;
-  }
-  return signal.inputPricePer1M ?? signal.outputPricePer1M ?? 0;
-}
-
-function kimiVersionBonus(
-  agent: ScoringAgentName,
-  model: DiscoveredModel,
-): number {
-  const lowered = `${model.model} ${model.name}`.toLowerCase();
-  const isChutes = model.providerID === 'chutes';
-  const isQwen3 = isChutes && /qwen3/.test(lowered);
-  const isKimiK25 = /kimi-k2\.5|k2\.5/.test(lowered);
-  const isMinimaxM21 = isChutes && /minimax[-_ ]?m2\.1/.test(lowered);
-
-  const qwenPenalty: Record<ScoringAgentName, number> = {
-    orchestrator: -6,
-    oracle: -6,
-    designer: -8,
-    explorer: -6,
-    librarian: -12,
-    fixer: -12,
-  };
-  const kimiBonus: Record<ScoringAgentName, number> = {
-    orchestrator: 1,
-    oracle: 1,
-    designer: 3,
-    explorer: 2,
-    librarian: 2,
-    fixer: 3,
-  };
-  const minimaxBonus: Record<ScoringAgentName, number> = {
-    orchestrator: 1,
-    oracle: 1,
-    designer: 2,
-    explorer: 4,
-    librarian: 4,
-    fixer: 4,
-  };
-
-  if (isQwen3) return qwenPenalty[agent];
-  if (isKimiK25) return kimiBonus[agent];
-  if (isMinimaxM21) return minimaxBonus[agent];
-  return 0;
-}
-
-export function extractFeatureVector(
-  model: DiscoveredModel,
-  agent: ScoringAgentName,
-  externalSignals?: ExternalSignalMap,
-): FeatureVector {
-  const signal = findSignal(model, externalSignals);
-  const latency = signal?.latencySeconds ?? 0;
-  const normalizedContext = Math.min(model.contextLimit, 1_000_000) / 100_000;
-  const normalizedOutput = Math.min(model.outputLimit, 300_000) / 30_000;
-  const designerOutputScore = model.outputLimit < 64_000 ? -1 : 0;
-  const versionBonus = kimiVersionBonus(agent, model);
-  const quality = (signal?.qualityScore ?? 0) / 100;
-  const coding = (signal?.codingScore ?? 0) / 100;
-  const pricePenalty = Math.min(blendedPrice(signal), 50) / 10;
-
-  const explorerLatencyMultiplier = agent === 'explorer' ? 1.4 : 1;
-
-  return {
-    status: statusValue(model.status),
-    context: normalizedContext,
-    output: agent === 'designer' ? designerOutputScore : normalizedOutput,
-    versionBonus,
-    reasoning: capability(model.reasoning),
-    toolcall: capability(model.toolcall),
-    attachment: capability(model.attachment),
-    quality,
-    coding,
-    latencyPenalty: Math.min(latency, 20) * explorerLatencyMultiplier,
-    pricePenalty,
-  };
-}

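The deleted feature extractor blended external price signals 75/25 toward input cost, then capped and scaled the penalty. Re-sketched with the same numbers the deleted engine test used (input $1, output $3 per 1M tokens):

```typescript
// Re-sketch of the deleted blendedPrice + pricePenalty logic.
function blendedPrice(input?: number, output?: number): number {
  if (input !== undefined && output !== undefined) {
    return input * 0.75 + output * 0.25;
  }
  return input ?? output ?? 0;
}

// min(1 * 0.75 + 3 * 0.25, 50) / 10
const pricePenalty = Math.min(blendedPrice(1, 3), 50) / 10;
console.log(pricePenalty); // 0.15
```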
src/cli/scoring-v2/index.ts (+0 -8)

@@ -1,8 +0,0 @@
-export { rankModelsV2, scoreCandidateV2 } from './engine';
-export { extractFeatureVector } from './features';
-export type {
-  FeatureVector,
-  ScoredCandidate,
-  ScoringAgentName,
-} from './types';
-export { getFeatureWeights } from './weights';

src/cli/scoring-v2/types.ts (+0 -40)

@@ -1,40 +0,0 @@
-import type { DiscoveredModel, ExternalSignalMap } from '../types';
-
-export type ScoreFeatureName =
-  | 'status'
-  | 'context'
-  | 'output'
-  | 'versionBonus'
-  | 'reasoning'
-  | 'toolcall'
-  | 'attachment'
-  | 'quality'
-  | 'coding'
-  | 'latencyPenalty'
-  | 'pricePenalty';
-
-export type FeatureVector = Record<ScoreFeatureName, number>;
-
-export type FeatureWeights = Record<ScoreFeatureName, number>;
-
-export type ScoringAgentName =
-  | 'orchestrator'
-  | 'oracle'
-  | 'designer'
-  | 'explorer'
-  | 'librarian'
-  | 'fixer';
-
-export interface ScoringContext {
-  agent: ScoringAgentName;
-  externalSignals?: ExternalSignalMap;
-}
-
-export interface ScoredCandidate {
-  model: DiscoveredModel;
-  totalScore: number;
-  scoreBreakdown: {
-    features: FeatureVector;
-    weighted: FeatureVector;
-  };
-}

src/cli/scoring-v2/weights.ts (+0 -67)

@@ -1,67 +0,0 @@
-import type { FeatureWeights, ScoringAgentName } from './types';
-
-const BASE_WEIGHTS: FeatureWeights = {
-  status: 22,
-  context: 6,
-  output: 6,
-  versionBonus: 8,
-  reasoning: 10,
-  toolcall: 16,
-  attachment: 2,
-  quality: 14,
-  coding: 18,
-  latencyPenalty: -3,
-  pricePenalty: -2,
-};
-
-const AGENT_WEIGHT_OVERRIDES: Record<
-  ScoringAgentName,
-  Partial<FeatureWeights>
-> = {
-  orchestrator: {
-    reasoning: 22,
-    toolcall: 22,
-    quality: 16,
-    coding: 16,
-    latencyPenalty: -2,
-  },
-  oracle: {
-    reasoning: 26,
-    quality: 20,
-    coding: 18,
-    latencyPenalty: -2,
-    output: 7,
-  },
-  designer: {
-    attachment: 12,
-    output: 10,
-    quality: 16,
-    coding: 10,
-  },
-  explorer: {
-    latencyPenalty: -8,
-    toolcall: 24,
-    reasoning: 2,
-    context: 4,
-    output: 4,
-  },
-  librarian: {
-    context: 14,
-    output: 10,
-    quality: 18,
-    coding: 14,
-  },
-  fixer: {
-    coding: 28,
-    toolcall: 22,
-    reasoning: 12,
-    output: 10,
-  },
-};
-
-export function getFeatureWeights(agent: ScoringAgentName): FeatureWeights {
-  return {
-    ...BASE_WEIGHTS,
-    ...AGENT_WEIGHT_OVERRIDES[agent],
-  };
-}

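The deleted weights module resolved per-agent weights by spreading the agent's overrides over the base table, so an override replaced only the named entries (for `explorer`, `latencyPenalty` dropped to -8 while untouched weights kept their base values). A minimal re-sketch of that merge:

```typescript
// Re-sketch of the deleted getFeatureWeights merge semantics (trimmed tables).
const BASE = { status: 22, toolcall: 16, latencyPenalty: -3 };

const OVERRIDES: Record<string, Partial<typeof BASE>> = {
  explorer: { toolcall: 24, latencyPenalty: -8 },
};

function getWeights(agent: string) {
  // Overrides win; missing keys fall through to BASE.
  return { ...BASE, ...OVERRIDES[agent] };
}

console.log(getWeights('explorer').latencyPenalty); // -8
console.log(getWeights('explorer').status); // 22
```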
src/cli/system.test.ts (+0 -90)

@@ -1,11 +1,6 @@
 /// <reference types="bun-types" />
 
 import { describe, expect, mock, test } from 'bun:test';
-import { parseOpenCodeModelsVerboseOutput } from './opencode-models';
-import {
-  pickBestCodingOpenCodeModel,
-  pickSupportOpenCodeModel,
-} from './opencode-selection';
 import {
   fetchLatestVersion,
   getOpenCodeVersion,
@@ -68,89 +63,4 @@ describe('system', () => {
       expect(version).toBeNull();
     }
   });
-
-  test('parseOpenCodeModelsVerboseOutput extracts only opencode free models', () => {
-    const output = `opencode/glm-4.7-free
-{
-  "id": "glm-4.7-free",
-  "providerID": "opencode",
-  "name": "GLM-4.7 Free",
-  "status": "active",
-  "cost": { "input": 0, "output": 0, "cache": { "read": 0, "write": 0 } },
-  "limit": { "context": 204800, "output": 131072 },
-  "capabilities": { "reasoning": true, "toolcall": true, "attachment": false }
-}
-
-openai/gpt-5.3-codex
-{
-  "id": "gpt-5.3-codex",
-  "providerID": "openai",
-  "name": "GPT-5.3 Codex",
-  "status": "active",
-  "cost": { "input": 1, "output": 1, "cache": { "read": 0, "write": 0 } },
-  "limit": { "context": 400000, "output": 128000 },
-  "capabilities": { "reasoning": true, "toolcall": true, "attachment": true }
-}
-`;
-
-    const models = parseOpenCodeModelsVerboseOutput(output);
-    expect(models).toHaveLength(1);
-    expect(models[0]?.model).toBe('opencode/glm-4.7-free');
-  });
-
-  test('pickBestCodingOpenCodeModel prefers stronger coding profile', () => {
-    const models = [
-      {
-        model: 'opencode/gpt-5-nano',
-        name: 'GPT-5 Nano',
-        status: 'active' as const,
-        contextLimit: 400000,
-        outputLimit: 128000,
-        reasoning: true,
-        toolcall: true,
-        attachment: true,
-      },
-      {
-        model: 'opencode/trinity-large-preview-free',
-        name: 'Trinity Large Preview',
-        status: 'active' as const,
-        contextLimit: 131072,
-        outputLimit: 131072,
-        reasoning: false,
-        toolcall: true,
-        attachment: false,
-      },
-    ];
-
-    const best = pickBestCodingOpenCodeModel(models);
-    expect(best?.model).toBe('opencode/gpt-5-nano');
-  });
-
-  test('pickSupportOpenCodeModel picks helper model different from primary', () => {
-    const models = [
-      {
-        model: 'opencode/glm-4.7-free',
-        name: 'GLM-4.7 Free',
-        status: 'active' as const,
-        contextLimit: 204800,
-        outputLimit: 131072,
-        reasoning: true,
-        toolcall: true,
-        attachment: false,
-      },
-      {
-        model: 'opencode/gpt-5-nano',
-        name: 'GPT-5 Nano',
-        status: 'active' as const,
-        contextLimit: 400000,
-        outputLimit: 128000,
-        reasoning: true,
-        toolcall: true,
-        attachment: true,
-      },
-    ];
-
-    const support = pickSupportOpenCodeModel(models, 'opencode/glm-4.7-free');
-    expect(support?.model).toBe('opencode/gpt-5-nano');
-  });
 });

src/cli/types.ts (+0 -116)

@@ -2,103 +2,11 @@ export type BooleanArg = 'yes' | 'no';
 
 export interface InstallArgs {
   tui: boolean;
-  kimi?: BooleanArg;
-  openai?: BooleanArg;
-  anthropic?: BooleanArg;
-  copilot?: BooleanArg;
-  zaiPlan?: BooleanArg;
-  antigravity?: BooleanArg;
-  chutes?: BooleanArg;
   tmux?: BooleanArg;
   skills?: BooleanArg;
-  opencodeFree?: BooleanArg;
-  balancedSpend?: BooleanArg;
-  opencodeFreeModel?: string;
-  aaKey?: string;
-  openrouterKey?: string;
   dryRun?: boolean;
-  modelsOnly?: boolean;
 }
 
-export interface OpenCodeFreeModel {
-  providerID: string;
-  model: string;
-  name: string;
-  status: 'alpha' | 'beta' | 'deprecated' | 'active';
-  contextLimit: number;
-  outputLimit: number;
-  reasoning: boolean;
-  toolcall: boolean;
-  attachment: boolean;
-  dailyRequestLimit?: number;
-}
-
-export interface DiscoveredModel {
-  providerID: string;
-  model: string;
-  name: string;
-  status: 'alpha' | 'beta' | 'deprecated' | 'active';
-  contextLimit: number;
-  outputLimit: number;
-  reasoning: boolean;
-  toolcall: boolean;
-  attachment: boolean;
-  dailyRequestLimit?: number;
-  costInput?: number;
-  costOutput?: number;
-}
-
-export interface DynamicAgentAssignment {
-  model: string;
-  variant?: string;
-}
-
-export type ScoringEngineVersion = 'v1' | 'v2-shadow' | 'v2';
-
-export type ResolutionLayerName =
-  | 'opencode-direct-override'
-  | 'manual-user-plan'
-  | 'pinned-model'
-  | 'dynamic-recommendation'
-  | 'provider-fallback-policy'
-  | 'system-default';
-
-export interface AgentResolutionProvenance {
-  winnerLayer: ResolutionLayerName;
-  winnerModel: string;
-}
-
-export interface DynamicPlanScoringMeta {
-  engineVersionApplied: 'v1' | 'v2';
-  shadowCompared: boolean;
-  diffs?: Record<string, { v1TopModel?: string; v2TopModel?: string }>;
-}
-
-export interface DynamicModelPlan {
-  agents: Record<string, DynamicAgentAssignment>;
-  chains: Record<string, string[]>;
-  provenance?: Record<string, AgentResolutionProvenance>;
-  scoring?: DynamicPlanScoringMeta;
-}
-
-export interface ExternalModelSignal {
-  qualityScore?: number;
-  codingScore?: number;
-  latencySeconds?: number;
-  inputPricePer1M?: number;
-  outputPricePer1M?: number;
-  source: 'artificial-analysis' | 'openrouter' | 'merged';
-}
-
-export type ExternalSignalMap = Record<string, ExternalModelSignal>;
-
-export type ManualAgentConfig = {
-  primary: string;
-  fallback1: string;
-  fallback2: string;
-  fallback3: string;
-};
-
 export interface OpenCodeConfig {
   plugin?: string[];
   provider?: Record<string, unknown>;
@@ -107,34 +15,10 @@ export interface OpenCodeConfig {
 }
 
 export interface InstallConfig {
-  hasKimi: boolean;
-  hasOpenAI: boolean;
-  hasAnthropic?: boolean;
-  hasCopilot?: boolean;
-  hasZaiPlan?: boolean;
-  hasAntigravity: boolean;
-  hasChutes?: boolean;
-  hasOpencodeZen: boolean;
-  useOpenCodeFreeModels?: boolean;
-  preferredOpenCodeModel?: string;
-  selectedOpenCodePrimaryModel?: string;
-  selectedOpenCodeSecondaryModel?: string;
-  availableOpenCodeFreeModels?: OpenCodeFreeModel[];
-  selectedChutesPrimaryModel?: string;
-  selectedChutesSecondaryModel?: string;
-  availableChutesModels?: DiscoveredModel[];
-  dynamicModelPlan?: DynamicModelPlan;
-  scoringEngineVersion?: ScoringEngineVersion;
-  artificialAnalysisApiKey?: string;
-  openRouterApiKey?: string;
-  balanceProviderUsage?: boolean;
   hasTmux: boolean;
   installSkills: boolean;
   installCustomSkills: boolean;
-  setupMode: 'quick' | 'manual';
-  manualAgentConfigs?: Record<string, ManualAgentConfig>;
   dryRun?: boolean;
-  modelsOnly?: boolean;
 }
 
 export interface ConfigMergeResult {

src/config/codemap.md (+4 / -4)

@@ -229,11 +229,11 @@ deepMerge(base, override)
 | Agent      | Model                          |
 |------------|--------------------------------|
 | orchestrator | `kimi-for-coding/k2p5`        |
-| oracle      | `openai/gpt-5.2-codex`        |
-| librarian   | `openai/gpt-5.1-codex-mini`   |
-| explorer    | `openai/gpt-5.1-codex-mini`   |
+| oracle      | `openai/gpt-5.4`        |
+| librarian   | `openai/gpt-5-codex`   |
+| explorer    | `openai/gpt-5-codex`   |
 | designer    | `kimi-for-coding/k2p5`        |
-| fixer       | `openai/gpt-5.1-codex-mini`   |
+| fixer       | `openai/gpt-5-codex`   |
 
 ## File Organization
 

src/config/constants.ts (+5 / -4)

@@ -38,11 +38,11 @@ export const SUBAGENT_DELEGATION_RULES: Record<AgentName, readonly string[]> = {
 // orchestrator is undefined so its model is fully resolved at runtime via priority fallback
 export const DEFAULT_MODELS: Record<AgentName, string | undefined> = {
   orchestrator: undefined,
-  oracle: 'openai/gpt-5.2-codex',
-  librarian: 'openai/gpt-5.1-codex-mini',
-  explorer: 'openai/gpt-5.1-codex-mini',
+  oracle: 'openai/gpt-5.4',
+  librarian: 'openai/gpt-5-codex',
+  explorer: 'openai/gpt-5-codex',
   designer: 'kimi-for-coding/k2p5',
-  fixer: 'openai/gpt-5.1-codex-mini',
+  fixer: 'openai/gpt-5-codex',
 };
 
 // Polling configuration
@@ -57,3 +57,4 @@ export const FALLBACK_FAILOVER_TIMEOUT_MS = 15_000;
 
 // Polling stability
 export const STABLE_POLLS_THRESHOLD = 3;
+

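The comment in `constants.ts` notes that `orchestrator` is left `undefined` so its model is "fully resolved at runtime via priority fallback." A minimal sketch of that idea, assuming a hypothetical `resolveModel` helper and availability check (not the plugin's actual implementation):

```typescript
// Sketch: agents with a pinned entry in DEFAULT_MODELS use it directly;
// the orchestrator (undefined) falls through to the first available model
// in a priority chain. Helper names here are illustrative.
const DEFAULT_MODELS: Record<string, string | undefined> = {
  orchestrator: undefined,
  oracle: 'openai/gpt-5.4',
  fixer: 'openai/gpt-5-codex',
};

function resolveModel(
  agent: string,
  priorityChain: string[],
  isAvailable: (model: string) => boolean,
): string {
  const pinned = DEFAULT_MODELS[agent];
  if (pinned) return pinned; // static default wins when defined
  const found = priorityChain.find(isAvailable); // first reachable model
  if (!found) throw new Error(`no available model for ${agent}`);
  return found;
}
```

Under this sketch, `resolveModel('orchestrator', ['kimi-for-coding/k2p5', 'openai/gpt-5.4'], isAvailable)` would pick `kimi-for-coding/k2p5` whenever that provider is reachable, while `resolveModel('oracle', …)` ignores the chain entirely.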
src/config/loader.test.ts (+10 / -10)

@@ -88,37 +88,37 @@ describe('loadPluginConfig', () => {
       JSON.stringify({
         manualPlan: {
           orchestrator: {
-            primary: 'openai/gpt-5.3-codex',
+            primary: 'openai/gpt-5.4',
             fallback1: 'anthropic/claude-opus-4-6',
             fallback2: 'chutes/kimi-k2.5',
             fallback3: 'opencode/gpt-5-nano',
           },
           oracle: {
-            primary: 'openai/gpt-5.3-codex',
+            primary: 'openai/gpt-5.4',
             fallback1: 'anthropic/claude-opus-4-6',
             fallback2: 'chutes/Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8-TEE',
             fallback3: 'opencode/gpt-5-nano',
           },
           designer: {
-            primary: 'openai/gpt-5.3-codex',
+            primary: 'openai/gpt-5.4',
             fallback1: 'anthropic/claude-opus-4-6',
             fallback2: 'chutes/kimi-k2.5',
             fallback3: 'opencode/gpt-5-nano',
           },
           explorer: {
-            primary: 'openai/gpt-5.3-codex',
+            primary: 'openai/gpt-5.4',
             fallback1: 'anthropic/claude-opus-4-6',
             fallback2: 'chutes/kimi-k2.5',
             fallback3: 'opencode/gpt-5-nano',
           },
           librarian: {
-            primary: 'openai/gpt-5.3-codex',
+            primary: 'openai/gpt-5.4',
             fallback1: 'anthropic/claude-opus-4-6',
             fallback2: 'chutes/kimi-k2.5',
             fallback3: 'opencode/gpt-5-nano',
           },
           fixer: {
-            primary: 'openai/gpt-5.3-codex',
+            primary: 'openai/gpt-5.4',
             fallback1: 'anthropic/claude-opus-4-6',
             fallback2: 'chutes/kimi-k2.5',
             fallback3: 'opencode/gpt-5-nano',
@@ -350,7 +350,7 @@ describe('deepMerge behavior', () => {
         fallback: {
           timeoutMs: 15000,
           chains: {
-            oracle: ['openai/gpt-5.2-codex', 'opencode/glm-4.7-free'],
+            oracle: ['openai/gpt-5.4', 'opencode/glm-4.7-free'],
           },
         },
       }),
@@ -373,7 +373,7 @@ describe('deepMerge behavior', () => {
     const config = loadPluginConfig(projectDir);
     expect(config.fallback?.timeoutMs).toBe(15000);
     expect(config.fallback?.chains.oracle).toEqual([
-      'openai/gpt-5.2-codex',
+      'openai/gpt-5.4',
       'opencode/glm-4.7-free',
     ]);
     expect(config.fallback?.chains.explorer).toEqual([
@@ -390,14 +390,14 @@ describe('deepMerge behavior', () => {
       JSON.stringify({
         fallback: {
           chains: {
-            writing: ['openai/gpt-5.2-codex'],
+            writing: ['openai/gpt-5.4'],
           },
         },
       }),
     );
 
     const config = loadPluginConfig(projectDir);
-    expect(config.fallback?.chains.writing).toEqual(['openai/gpt-5.2-codex']);
+    expect(config.fallback?.chains.writing).toEqual(['openai/gpt-5.4']);
   });
 });