
deprecate(conclave): Move to standalone CLI tool

Conclave is now a separate Go CLI at github.com/0xDarkMatter/conclave.
The command file now contains only a deprecation notice with install
instructions pointing to the new repository.

- Replace 877-line command spec with deprecation notice
- Update README command count (11 → 10)
- Add .grok/ to gitignore
- Add grok, glm, source to allowed permissions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
0xDarkMatter committed 3 months ago · commit 1af9b6dec5
4 changed files with 56 additions and 845 deletions

1. .claude/settings.local.json (+25 -1)
2. .gitignore (+1 -0)
3. README.md (+2 -2)
4. commands/conclave.md (+28 -842)

+ 25 - 1
.claude/settings.local.json

@@ -52,6 +52,9 @@
      "Bash(claude:*)",
      "Bash(codex:*)",
      "Bash(perplexity:*)",
+      "Bash(grok:*)",
+      "Bash(glm:*)",
+      "Bash(source:*)",
      "Bash(md5:*)",
      "Bash(test:*)",
      "Bash(head:*)",
@@ -59,7 +62,28 @@
      "WebFetch(domain:www.anthropic.com)",
      "SlashCommand(/pulse:*)",
      "WebFetch(domain:simonwillison.net)",
-      "Skill(sync)"
+      "Skill(sync)",
+      "Bash(grep:*)",
+      "Bash(~/.zshrc)",
+      "Bash(env)",
+      "Bash(security find-generic-password:*)",
+      "Bash(security find-generic-password -s \"GEMINI_API_KEY\" -w)",
+      "Bash(~/.env)",
+      "Bash(source /Users/mack/.env)",
+      "Bash(pip3 install:*)",
+      "Bash(/usr/bin/python3:*)",
+      "Bash(/opt/homebrew/bin/python3:*)",
+      "Bash(/opt/homebrew/bin/pip3 install:*)",
+      "WebFetch(domain:docs.z.ai)",
+      "Bash(droid --version:*)",
+      "Bash(droid --help:*)",
+      "Bash(opencode --version:*)",
+      "Bash(droid exec \"What is 2+2? One word.\")",
+      "Bash(opencode --help:*)",
+      "Bash(opencode run \"What is 2+2? One word.\")",
+      "Bash(opencode auth:*)",
+      "Bash(opencode models:*)",
+      "Bash(opencode run:*)"
    ],
    "deny": [],
    "ask": [

+ 1 - 0
.gitignore

@@ -29,3 +29,4 @@ Thumbs.db
 # Scraped/generated data
 *_scraped.json
 *_scraped.csv
+.grok/

+ 2 - 2
README.md

@@ -2,7 +2,7 @@

 A comprehensive extension toolkit for [Claude Code](https://docs.anthropic.com/en/docs/claude-code) that transforms your AI coding assistant into a powerhouse development environment.

-**23 expert agents. 11 slash commands. 30 skills. 4 rules. Custom output styles. One plugin install.**
+**23 expert agents. 10 slash commands. 30 skills. 4 rules. Custom output styles. One plugin install.**

 ## Why claude-mods?

@@ -100,7 +100,7 @@ Then symlink or copy to your Claude directories:
 | [testgen](commands/testgen.md) | Generate tests with expert routing, framework detection, focus/depth modes. |
 | [explain](commands/explain.md) | Deep explanation of complex code, files, or concepts. Architecture, data flow, design decisions. |
 | [spawn](commands/spawn.md) | Generate expert agents with PhD-level patterns and code examples. |
-| [conclave](commands/conclave.md) | Summon external LLMs (Gemini, OpenAI, Claude, Perplexity, Grok, GLM) for analysis, research, and consensus verdicts. |
+| [conclave](commands/conclave.md) | **[DEPRECATED]** Use [Conclave CLI](https://github.com/0xDarkMatter/conclave) instead. |
 | [atomise](commands/atomise.md) | Atom of Thoughts reasoning - decompose problems into atomic units with confidence tracking and backtracking. |
 | [pulse](commands/pulse.md) | Generate Claude Code ecosystem news digest from blogs, repos, and community sources. |
 | [setperms](commands/setperms.md) | Set tool permissions and CLI preferences. |

+ 28 - 842
commands/conclave.md

@@ -1,870 +1,56 @@
 ---
-description: Summon external LLM CLIs (Gemini, OpenAI, Claude, Perplexity, Grok, GLM) for analysis, research, and consensus. Use each model's strengths or convene multiple for verdicts.
+description: "[DEPRECATED] Use the standalone Conclave CLI instead: https://github.com/0xDarkMatter/conclave"
 ---

-# Conclave - Multi-LLM Council
+# Conclave - DEPRECATED

-> Convene a council of AI models. Use each model's strengths: Gemini's massive context, OpenAI's reasoning depth, Claude's coding excellence, Perplexity's web search, Grok's real-time knowledge, GLM's multilingual capability, or summon them all for consensus verdicts.
-
-```
-/conclave [provider] [target] [--flags]
-    |
-    +-> Step 1: Detect Provider
-    |     +- gemini (default) -> Gemini CLI
-    |     +- openai / codex   -> OpenAI Codex CLI
-    |     +- claude           -> Claude Code CLI (headless)
-    |     +- perplexity / pplx -> Perplexity CLI (web search)
-    |     +- grok             -> Grok CLI (xAI)
-    |     +- glm              -> GLM CLI (Zhipu AI)
-    |     +- --all            -> Multiple providers (consensus mode)
-    |
-    +-> Step 2: Select Mode
-    |     +- analyze (default) -> Codebase analysis
-    |     +- ask "<question>"  -> Direct question
-    |     +- verify "<stmt>"   -> Yes/no verification
-    |     +- compare <a> <b>   -> Diff analysis
-    |
-    +-> Step 3: Configure Autonomy
-    |     +- --auto            -> Enable autonomous mode
-    |     +- --thinking        -> Extended reasoning (where supported)
-    |     +- --quiet           -> Non-interactive, parseable output
-    |
-    +-> Step 4: Execute & Distill
-          +- Run conclaved command
-          +- Parse structured output
-          +- Return distilled results to Claude
-```
-
----
-
-## Quick Start
-
-```bash
-# Basic usage (Gemini, analyze current directory)
-/conclave .
-
-# Ask Gemini a question about the codebase
-/conclave ask "Where is authentication implemented?"
-
-# Use OpenAI Codex for deep reasoning
-/conclave openai . --thinking
-
-# Use Claude CLI for another perspective
-/conclave claude "Review this authentication flow"
-
-# Use Perplexity for web-grounded research
-/conclave perplexity "What are the latest React 19 breaking changes?"
-
-# Use Grok for real-time knowledge
-/conclave grok "What's the current status of this GitHub issue?"
-
-# Use GLM for multilingual analysis
-/conclave glm "Analyze this Chinese documentation"
-
-# Get consensus from multiple models
-/conclave --all "Is this code secure?"
-```
-
----
-
-## Arguments
-
-```
-$ARGUMENTS
-
-Providers:
-- (none) / gemini    -> Gemini CLI (default, 2M context)
-- openai / codex     -> OpenAI Codex CLI (deep reasoning)
-- claude             -> Claude Code CLI (coding expertise, headless mode)
-- perplexity / pplx  -> Perplexity CLI (web search + citations)
-- grok               -> Grok CLI (xAI, real-time knowledge)
-- glm                -> GLM CLI (Zhipu AI, multilingual)
-- --all              -> Multiple providers (consensus mode)
-
-Modes:
-- <path>             -> Analyze target (default mode)
-- ask "<question>"   -> Direct question about codebase
-- verify "<stmt>"    -> Yes/no verification with evidence
-- compare <a> <b>    -> Compare two paths/branches
-
-Flags:
-- --auto             -> Enable autonomous mode (no prompts)
-- --thinking         -> Extended reasoning mode
-- --quiet            -> Non-interactive output (JSON where supported)
-- --model <name>     -> Override default model
-- --save <file>      -> Save output to file
-- --raw              -> Return unprocessed output
-- --brief            -> ~500 char summary
-- --detailed         -> ~5000 char comprehensive breakdown
-- --setup            -> Interactive configuration wizard
-
-Focus flags:
-- --arch             -> Architecture, patterns, structure
-- --security         -> Vulnerabilities, auth, injection
-- --perf             -> Performance bottlenecks
-- --quality          -> Code quality, tech debt
-- --test             -> Test coverage, gaps
-```
-
----
-
-## Configuration
-
-### Configuration File (Preferred)
-
-Create `~/.claude/conclave.yaml` for all settings including API keys:
-
-```yaml
-# ~/.claude/conclave.yaml
-
-# API Keys (preferred storage method)
-# These take precedence over environment variables
-api_keys:
-  gemini: "your-gemini-api-key-here"
-  openai: "your-openai-api-key-here"
-  anthropic: "your-anthropic-api-key-here"  # For Claude CLI
-  perplexity: "your-perplexity-api-key-here"
-  grok: "your-xai-api-key-here"              # For Grok CLI
-  glm: "your-zhipu-api-key-here"             # For GLM CLI
-
-# Provider Configuration
-providers:
-  gemini:
-    model: gemini-2.5-pro       # Strongest: gemini-2.5-pro
-    default_flags:              # Flash: gemini-2.5-flash (faster, cheaper)
-      - --output-format
-      - json
-
-  openai:
-    model: gpt-5.2               # Strongest (requires ChatGPT login)
-    default_flags:               # Alternatives: gpt-5.1-codex-max, gpt-4o
-      - --quiet
-
-  claude:
-    model: sonnet                # Options: sonnet, opus, haiku
-    default_flags:               # Uses Claude Code CLI in headless mode
-      - --print
-      - --output-format
-      - json
-    # Note: Uses ANTHROPIC_API_KEY env var or Claude Code login
-
-  perplexity:
-    model: sonar-pro             # Best balance: complex queries, more citations
-    default_flags:               # Alternatives: sonar, sonar-reasoning, sonar-reasoning-pro
-      - --citations
-    # Unique: returns web citations with every response
-
-  grok:
-    model: grok-3                # Latest: grok-3, grok-3-fast
-    default_flags:               # Alternatives: grok-2
-      - --output-format
-      - json
-    # Unique: real-time X/Twitter knowledge, up-to-date info
-
-  glm:
-    model: glm-4-plus            # Best: glm-4-plus (GLM 4.6 equivalent)
-    default_flags:               # Alternatives: glm-4-air, glm-4-flash
-      - --output-format
-      - json
-    # Unique: strong multilingual, Chinese language excellence
-
-# Consensus Mode (--all) Settings
-consensus:
-  providers: [gemini, openai, claude]  # Which providers participate
-  require_consensus: true              # All must agree for YES/NO verdicts
-```
-
-### Default Models (Tested & Working)
-
-| Provider | Default Model | Context | Strengths |
-|----------|---------------|---------|-----------|
-| **Gemini** | `gemini-2.5-pro` | 1M tokens | **Agentic coding assistant** - best for code analysis (refuses non-code tasks) |
-| **Gemini** | `gemini-2.5-flash` | 1M tokens | General-purpose, fast, answers any question |
-| **Gemini** | `gemini-3-pro` | 1M tokens | Most intelligent (requires [Ultra subscription](https://ai.google.dev/gemini-api/docs/models)) |
-| **OpenAI** | `gpt-5.2` | 128K tokens | Deep reasoning, best knowledge (ChatGPT login) |
-| **Claude** | `sonnet` | 200K tokens | **Coding excellence** - best for code review, refactoring, security analysis |
-| **Claude** | `opus` | 200K tokens | Most capable - complex reasoning, nuanced analysis |
-| **Claude** | `haiku` | 200K tokens | Fast, efficient - quick analysis, simple tasks |
-| **Perplexity** | `sonar-pro` | 200K tokens | **Real-time web search** - best for research, current info, fact-checking |
-| **Grok** | `grok-3` | 128K tokens | **Real-time X/Twitter knowledge** - current events, social context |
-| **Grok** | `grok-3-fast` | 128K tokens | Faster, cheaper - quick queries |
-| **GLM** | `glm-4-plus` | 128K tokens | **Multilingual excellence** - best for Chinese, cross-lingual analysis |
-| **GLM** | `glm-4-flash` | 128K tokens | Fast, efficient - quick multilingual tasks |
-
-> **Notes:**
-> - `gemini-2.5-pro` is tuned as a "CLI agent for software engineering" - it deliberately refuses non-coding questions. Perfect for `/conclave` code analysis.
-> - `gemini-2.5-flash` answers general questions but is less capable for complex code tasks.
-> - Gemini 3 Pro requires Google AI Ultra subscription. Free tier users can [join the waitlist](https://developers.googleblog.com/en/5-things-to-try-with-gemini-3-pro-in-gemini-cli/).
-> - Claude CLI uses your existing Claude Code authentication or `ANTHROPIC_API_KEY` environment variable.
-
-### API Key Resolution Order
-
-Keys are resolved in this priority order:
-
-1. **Config file** - `~/.claude/conclave.yaml` (recommended)
-2. **Environment variables** - `GEMINI_API_KEY`, `OPENAI_API_KEY`
-3. **Interactive prompt** - If missing, prompt user in interactive mode
-
-```
-Key Resolution:
-    ~/.claude/conclave.yaml (api_keys.gemini)
-        ↓ (not found)
-    Environment: GEMINI_API_KEY
-        ↓ (not found)
-    Interactive: "Enter Gemini API key: ___"
-        ↓ (entered)
-    Save to config? [y/N]
-```
-
-### First-Time Setup
-
-When `/conclave` is run without configuration:
-
-1. Check for `~/.claude/conclave.yaml`
-2. If missing or incomplete, prompt:
-   ```
-   Gemini API key not found.
-   Enter key (or press Enter to skip Gemini): ___
-
-   OpenAI API key not found.
-   Enter key (or press Enter to skip OpenAI): ___
-
-   Save to ~/.claude/conclave.yaml? [Y/n]
-   ```
-3. Create config file with entered keys
-4. Inform user: "Config saved. Future runs will use stored keys."
-
-### Environment Variables (Fallback)
-
-If you prefer environment variables over config file:
-
-| Variable | Purpose |
-|----------|---------|
-| `GEMINI_API_KEY` | Gemini CLI authentication |
-| `OPENAI_API_KEY` | OpenAI Codex authentication |
-| `ANTHROPIC_API_KEY` | Claude CLI authentication |
-| `PERPLEXITY_API_KEY` | Perplexity CLI authentication |
-| `GROK_API_KEY` | Grok CLI authentication (xAI) |
-| `GLM_API_KEY` | GLM CLI authentication (Zhipu AI) |
-
-```bash
-# Add to ~/.bashrc or ~/.zshrc
-export GEMINI_API_KEY="your-key-here"
-export OPENAI_API_KEY="your-key-here"
-export ANTHROPIC_API_KEY="your-key-here"
-export PERPLEXITY_API_KEY="your-key-here"
-export GROK_API_KEY="your-key-here"
-export GLM_API_KEY="your-key-here"
-```
-
-### Security Notes
-
-- Config file is stored in user's `.claude/` directory (not in project)
-- Never commit API keys to git
-- The `conclave.yaml` should be in `.gitignore` by default
-- Keys in config file take precedence over environment variables
+> **This command has been deprecated.** Conclave is now a standalone CLI tool.

 ---

-## Provider-Specific Flags
+## Migration

-### Gemini CLI
+The `/conclave` command has been replaced by the **Conclave CLI** - a standalone, universal multi-LLM consensus tool that works from any context (Claude Code, OpenCode, Gemini CLI, terminal, CI/CD).

-| Flag | Mapped From | Effect |
-|------|-------------|--------|
-| `-p "<prompt>"` | (default) | One-off prompt execution |
-| `--yolo` / `-y` | `--auto` | Auto-approve all operations |
-| `--output-format json` | `--quiet` | Structured JSON output |
-| `--output-format text` | (default) | Plain text output |
-| `-m <model>` | `--model` | Specify model |
+### Install

-**Gemini Command Template:**
 ```bash
-# Pipe file content via stdin and use positional prompt arg
-cat file.md | gemini -m gemini-2.5-pro "Analyze this code" [--output-format json]
+# macOS/Linux
+curl -fsSL https://raw.githubusercontent.com/0xDarkMatter/conclave/main/install.sh | bash

-# Or for simple prompts without file content
-gemini -m gemini-2.5-pro "Your prompt here"
+# Or with Go
+go install github.com/0xDarkMatter/conclave@latest
 ```

-> **⚠️ Important Notes:**
-> - Always specify model: `-m gemini-2.5-pro` (recommended) or `-m gemini-2.5-flash` (faster)
-> - Use stdin piping for file content: `cat file.md | gemini -m gemini-2.5-pro "prompt"`
-> - The `-p` flag is deprecated - use positional argument for prompts
-> - DO NOT use `@file.md` syntax - it requires GCP auth (`GOOGLE_CLOUD_PROJECT` env var)
+### Usage

-### OpenAI Codex CLI
-
-**Authentication Modes:**
-
-| Auth Method | Command | Default Model | Billing |
-|-------------|---------|---------------|---------|
-| **ChatGPT Login** | `codex login --device-auth` | `gpt-5.2` | Subscription |
-| **API Key** | `codex login --with-api-key` | `gpt-4o` | Pay per token |
-
-> **Recommended:** Use ChatGPT login for subscription-based access. API key mode doesn't support `gpt-5.x-codex` models.
-
-**Available Models (ChatGPT Login):**
-
-| Model | Capability |
-|-------|------------|
-| `gpt-5.2` | Latest frontier - best knowledge, reasoning, coding |
-| `gpt-5.1-codex-max` | Default - flagship for deep/fast reasoning |
-| `gpt-5.1-codex` | Optimized for codex |
-| `gpt-5.1-codex-mini` | Cheaper, faster, less capable |
-| `gpt-5.1` | Broad world knowledge, general reasoning |
-
-| Flag | Mapped From | Effect |
-|------|-------------|--------|
-| `--full-auto` | `--auto` | Autonomous sandboxed execution |
-| `--json` | `--quiet` | JSONL output for parsing |
-| `-m <model>` | `--model` | Specify model |
-| `codex exec` | (CI mode) | Non-interactive runs |
-| `-s read-only` | (default) | Read-only sandbox |
-
-**Codex Command Template:**
 ```bash
-codex exec "<prompt>" [--full-auto] [--json] [-m <model>] --skip-git-repo-check
-```
-
-### Perplexity CLI
-
-**Authentication:**
-
-| Auth Method | Setup | Notes |
-|-------------|-------|-------|
-| **Environment Variable** | `export PERPLEXITY_API_KEY="key"` | Standard approach |
-| **Config File** | Add to `~/.claude/conclave.yaml` | Preferred for persistence |
-
-> Get your API key at [perplexity.ai/settings/api](https://www.perplexity.ai/settings/api)
-
-**Available Models:**
-
-| Model | Use Case |
-|-------|----------|
-| `sonar` | Fast, cost-effective for quick facts |
-| `sonar-pro` | Complex queries, more citations (default) |
-| `sonar-reasoning` | Multi-step problem solving |
-| `sonar-reasoning-pro` | Deep reasoning (DeepSeek-R1 based) |
-
-| Flag | Mapped From | Effect |
-|------|-------------|--------|
-| `-m <model>` | `--model` | Specify model |
-| `--no-citations` | (n/a) | Disable citation output |
-| `--json` | `--quiet` | Output raw JSON response |
-| `-s <prompt>` | (n/a) | System prompt |
-
-**Perplexity Command Template:**
-```bash
-# Direct question with citations
-perplexity "What are the latest React 19 breaking changes?"
-
-# Pipe content and analyze
-cat code.py | perplexity -m sonar-pro "Review this code for security issues"
-
-# Use reasoning model for complex analysis
-perplexity -m sonar-reasoning "Explain the tradeoffs of microservices vs monolith"
-```
-
-**Unique Capability:** Every response includes web citations. Use Perplexity when you need:
-- Current information (not in Claude/Gemini training data)
-- Verification with sources
-- Research questions requiring web search
-- Fact-checking claims with citations
-
-### Claude Code CLI
-
-**Authentication:**
-
-| Auth Method | Setup | Notes |
-|-------------|-------|-------|
-| **Claude Code Login** | `claude` (interactive first use) | Uses existing login |
-| **API Key** | `export ANTHROPIC_API_KEY="key"` | Direct API access |
-
-> Claude CLI uses your existing Claude Code authentication. If not logged in, it will prompt on first use.
-
-**Available Models:**
-
-| Model | Capability |
-|-------|------------|
-| `sonnet` | Best balance of speed and capability (default) |
-| `opus` | Most capable - complex reasoning, nuanced analysis |
-| `haiku` | Fastest - quick analysis, simple tasks |
-
-| Flag | Mapped From | Effect |
-|------|-------------|--------|
-| `-p` / `--print` | (required) | Headless mode - print and exit |
-| `--output-format json` | `--quiet` | JSON output for parsing |
-| `--output-format text` | (default) | Plain text output |
-| `--model <model>` | `--model` | Specify model (sonnet/opus/haiku) |
-| `--dangerously-skip-permissions` | `--auto` | Bypass permission checks (sandbox only) |
-| `--system-prompt <prompt>` | (n/a) | Custom system prompt |
-
-**Claude CLI Command Template:**
-```bash
-# Basic headless query
-claude -p "Analyze this code for security issues" --output-format json
-
-# With specific model
-claude -p --model opus "Review this architecture decision"
+# Query multiple LLMs with a judge for synthesis
+conclave gemini,openai,glm "Is this code secure?" --judge claude

-# Pipe content and analyze
-cat src/auth.ts | claude -p "Review this authentication code"
+# Pipe file content
+cat src/auth.ts | conclave gemini,openai "Review this" --judge claude

-# With custom system prompt
-claude -p --system-prompt "You are a security auditor" "Check for vulnerabilities"
-```
-
-**Unique Capability:** Claude Code CLI has full access to Claude's coding expertise in headless mode. Use when you need:
-- Code review with Claude's understanding of best practices
-- Security analysis with nuanced reasoning
-- A different perspective from Gemini/OpenAI on the same problem
-- Fast iteration with `haiku` for simple checks
-
-### Grok CLI
-
-**Authentication:**
-
-| Auth Method | Setup | Notes |
-|-------------|-------|-------|
-| **Environment Variable** | `export GROK_API_KEY="key"` | Standard approach |
-| **User Settings** | `~/.grok/user-settings.json` | Persistent storage |
-
-> Get your API key at [x.ai](https://x.ai)
-
-**Available Models:**
-
-| Model | Use Case |
-|-------|----------|
-| `grok-3-latest` | Latest Grok 3 - best reasoning |
-| `grok-3-fast` | Faster, cheaper Grok 3 |
-| `grok-code-fast-1` | Optimized for code tasks (default) |
-| `grok-4-latest` | Grok 4 when available |
+# Multiple files
+conclave gemini,openai "Compare these" --file a.go --file b.go --judge claude

-| Flag | Mapped From | Effect |
-|------|-------------|--------|
-| `-p, --prompt` | (required) | Headless mode - single prompt and exit |
-| `-m, --model` | `--model` | Specify model |
-| `--max-tool-rounds` | (n/a) | Limit tool execution rounds |
-
-**Grok Command Template:**
-```bash
-# Headless single prompt
-grok -p "Analyze this code for security issues"
-
-# With specific model
-grok -p "Review this architecture" -m grok-3-latest
-
-# Pipe content and analyze
-cat src/auth.ts | grok -p "Review this authentication code"
+# Raw results (no synthesis)
+conclave gemini,openai "Analyze this" --no-judge --json
 ```

-**Unique Capability:** Real-time X/Twitter knowledge, current events awareness. Use when you need:
-- Up-to-date information beyond training cutoffs
-- Social/community context around technologies
-- Current discussions and trends
-
-### GLM CLI (Zhipu AI)
-
-**Authentication:**
-
-| Auth Method | Setup | Notes |
-|-------------|-------|-------|
-| **Environment Variable** | `export GLM_API_KEY="key"` | Standard approach |
-| **Config File** | `~/.glm/config.json` | Persistent storage |
-
-> Get your API key at [open.bigmodel.cn](https://open.bigmodel.cn)
+### Why the change?

+The original `/conclave` command was 877 lines of documentation that Claude had to parse and manually execute every time. The new CLI:

-| Model | Use Case |
-|-------|----------|
-| `glm-4-plus` | Most capable - complex tasks (GLM 4.6) |
-| `glm-4-air` | Balanced speed and capability |
-| `glm-4-flash` | Fast, efficient for simple tasks |
-| `glm-4-long` | Extended context (1M tokens) |
-
-| Flag | Mapped From | Effect |
-|------|-------------|--------|
-| `-p, --prompt` | (required) | Single prompt mode |
-| `-m, --model` | `--model` | Specify model |
-| `--json` | `--quiet` | JSON output |
-
-**GLM Command Template:**
-```bash
-# Basic prompt
-glm -p "Analyze this code"
-
-# With specific model
-glm -p "Review this design" -m glm-4-plus
-
-# Multilingual analysis
-glm -p "分析这段代码的安全性" -m glm-4-plus
-```
-
-**Unique Capability:** Strong multilingual support, especially Chinese. Use when you need:
-- Chinese language analysis or documentation
-- Cross-lingual code review
-- Asian market/ecosystem context
+- **Self-contained** - Handles orchestration and synthesis internally
+- **Universal** - Works from any LLM tool, not just Claude Code
+- **Faster** - Single binary, parallel execution, structured output
+- **Configurable judge** - Any LLM can synthesize the verdict

 ---

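For scripting the new CLI (for example in CI), a thin wrapper can assemble the invocation from a list of changed files. This is a sketch, not part of the commit: only the flags shown in the usage examples (`--judge`, `--no-judge`, `--json`, `--file`) come from the new docs; the helper name `build_conclave_cmd` and everything else here is hypothetical.

```shell
# Hypothetical helper: build a conclave invocation for a set of files.
# Echoes the command rather than running it, so it can be reviewed or logged first.
build_conclave_cmd() {
  providers=$1; shift     # e.g. "gemini,openai"
  prompt=$1; shift        # the question to put to the council
  cmd="conclave $providers \"$prompt\" --no-judge --json"
  for f in "$@"; do       # remaining args are file paths
    cmd="$cmd --file $f"
  done
  printf '%s\n' "$cmd"
}

build_conclave_cmd gemini,openai "Review this diff" src/auth.go src/db.go
# → conclave gemini,openai "Review this diff" --no-judge --json --file src/auth.go --file src/db.go
```

Piping the echoed command to `sh` (or dropping the `printf` indirection) would execute it; keeping `--no-judge --json` makes the raw per-provider output easy to parse downstream.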
-## Consensus Mode (--all)
-
-The `--all` flag dispatches tasks to multiple LLMs and has **Claude arbitrate** the results.
-
-```
-/conclave --all "<query>"
-    │
-    ├─→ Step 1: Dispatch (parallel)
-    │     ├─ Gemini: detailed analysis request
-    │     ├─ Codex:  detailed analysis request
-    │     └─ Claude: detailed analysis request
-    │
-    ├─→ Step 2: Collect Raw Responses
-    │     ├─ Gemini returns: analysis, evidence, reasoning
-    │     ├─ Codex returns:  analysis, evidence, reasoning
-    │     └─ Claude returns: analysis, evidence, reasoning
-    │
-    └─→ Step 3: Synthesize & Arbitrate
-          ├─ Parse all full responses
-          ├─ Identify agreements & disagreements
-          ├─ Evaluate reasoning quality
-          ├─ Weigh evidence strength
-          └─ Synthesize final verdict
-```
-
-### Claude's Arbitration Role
-
-Claude doesn't just pattern-match YES/NO. Claude:
-
-1. **Reads both full responses** - understands reasoning, not just conclusions
-2. **Identifies agreements** - shared facts, common ground
-3. **Identifies disagreements** - conflicting claims, different interpretations
-4. **Evaluates argument quality** - which reasoning is more sound? more evidence-backed?
-5. **Synthesizes a verdict** - may agree with one, take parts from both, or note uncertainty
-
-### Expert Prompt Template
-
-Request rich, detailed output from each model:
-
-```markdown
-Analyze the following and provide a detailed response:
-
-<query>
-
-Structure your response:
-1. **Assessment** - Your conclusion/finding
-2. **Evidence** - Specific facts, code references, documentation
-3. **Reasoning** - How you reached this conclusion
-4. **Confidence** - How certain (high/medium/low) and why
-5. **Caveats** - What could change this assessment
-```
-
-### Conclave Output Format
-
-```markdown
-## Conclave Analysis: <query>
-
-### Expert Responses
-
-**Gemini's Analysis:**
-<full response with assessment, evidence, reasoning>
-
-**OpenAI's Analysis:**
-<full response with assessment, evidence, reasoning>
-
-**Claude's Analysis:**
-<full response with assessment, evidence, reasoning>
-
----
-
-### Synthesis
-
-**Agreements:**
-- All models concur that...
-- Shared evidence: ...
-
-**Disagreements:**
-- Gemini argues: ... (because...)
-- OpenAI argues: ... (because...)
-- Claude argues: ... (because...)
-
-**Evaluation of Reasoning:**
-- Gemini's strength: ... / weakness: ...
-- OpenAI's strength: ... / weakness: ...
-- Claude's strength: ... / weakness: ...
-
----
-
-### Verdict
-
-<Synthesized conclusion drawing on strongest arguments from all models>
-
-**Confidence:** HIGH / MEDIUM / LOW
-**Primary basis:** <which reasoning was most convincing>
-**Caveat:** <remaining uncertainty, if any>
-```
-
-### Usage Examples
-
-```bash
-# Security review with arbitration
-/conclave --all "Is this authentication implementation secure?"
-
-# Architecture decision
-/conclave --all "Should we use microservices or monolith for this project?"
-
-# Code quality assessment
-/conclave --all . --quality "Evaluate the test coverage and error handling"
-
-# Verify a claim with multiple perspectives
-/conclave --all verify "All database queries are properly parameterized"
-```
-
-### When to Use Conclave
-
-| Scenario | Use Conclave? |
-|----------|---------------|
-| High-stakes security review | ✅ Yes |
-| Architecture decisions | ✅ Yes |
-| Resolving ambiguous requirements | ✅ Yes |
-| Routine code analysis | ❌ Single model sufficient |
-| Simple questions | ❌ Overkill |
-
----
-
-## Execution Protocol
-
-### Step 1: Verify CLI Availability
-
-```bash
-# Check Gemini
-which gemini || echo "Install: https://github.com/google-gemini/gemini-cli"
-
-# Check OpenAI Codex
-which codex || echo "Install: https://github.com/openai/codex"
-
-# Check Perplexity (included in claude-mods toolkit)
-which perplexity || echo "Install: Run tools/install-*.sh from claude-mods"
-
-# Check Grok
-which grok || echo "Install: bun add -g @vibe-kit/grok-cli"
-
-# Check GLM
-which glm || echo "Install: pip install zhipuai-cli (or check zhipuai docs)"
-```
-
-### Step 2: Parse Provider & Mode
-
-| Input Pattern | Provider | Mode |
-|---------------|----------|------|
-| `/conclave .` | gemini | analyze |
-| `/conclave openai .` | openai | analyze |
-| `/conclave perplexity "..."` | perplexity | ask |
-| `/conclave grok "..."` | grok | ask |
-| `/conclave glm "..."` | glm | ask |
-| `/conclave ask "..."` | gemini | ask |
-| `/conclave codex ask "..."` | openai | ask |
-| `/conclave pplx ask "..."` | perplexity | ask |
-| `/conclave --all .` | configured | analyze |
-| `/conclave verify "..."` | gemini | verify |
-
-### Step 3: Construct Command
-
-**Always add read-only instruction:**
-> "IMPORTANT: This is a read-only analysis. Do not execute code or modify files."
-
-**Map flags to provider-specific equivalents:**
-
-| Generic Flag | Gemini | OpenAI Codex | Perplexity | Grok | GLM |
-|--------------|--------|--------------|------------|------|-----|
-| `--auto` | `--yolo` | `--full-auto` | (n/a) | (n/a) | (n/a) |
-| `--quiet` | `--output-format json` | `--json` | `--json` | (n/a) | `--json` |
-| `--thinking` | (n/a) | (use `gpt-5.2`) | `-m sonar-reasoning-pro` | `-m grok-3-latest` | `-m glm-4-plus` |
-| `--model X` | `-m X` | `-m X` | `-m X` | `-m X` | `-m X` |
-
-> **Safety note:** Default mode is read-only analysis. `--auto` explicitly opts into tool execution - use with caution.
-
-### Step 4: Execute
-
-**Important:** Claude reads files using Read tool, then passes content via stdin.
-
-```bash
-# Gemini example: pipe content via stdin
-cat src/main.ts | gemini -m gemini-2.5-pro "IMPORTANT: Read-only analysis. Analyze architecture and patterns."
-
-# OpenAI example (ChatGPT subscription)
-codex exec "Analyze @src/ - architecture and patterns. Read-only, no modifications." --skip-git-repo-check
-
-# Perplexity example: web-grounded research
-perplexity -m sonar-pro "What are the security best practices for JWT tokens in 2025?"
-```
-
-> **Notes:**
-> - For Gemini: Use stdin piping with explicit model (`-m gemini-2.5-pro`)
-> - For Codex: The `@` syntax works (Codex handles file references internally)
-> - Claude should use Read tool to fetch file content for Gemini, pass via stdin
-
-### Step 5: Distill Results
-
-**Brief (~500 chars):** Executive summary only
-**Default (~2000 chars):** Architecture, patterns, issues, recommendations
-**Detailed (~5000 chars):** Full breakdown with file references
-
----
-
-## Usage Examples
-
-### Basic Analysis
-
-```bash
-# Analyze with Gemini (default)
-/conclave src/
-
-# Analyze with OpenAI Codex
-/conclave openai src/
-
-# Quick architecture overview
-/conclave . --arch --brief
-```
-
-### Questions & Verification
-
-```bash
-# Ask a question (Gemini)
-/conclave ask "How does the authentication flow work?"
-
-# Ask with deep reasoning (OpenAI)
-/conclave openai ask "What are the security implications of this design?" --thinking
-
-# Ask with web-grounded research (Perplexity)
-/conclave perplexity "What are the latest OWASP Top 10 vulnerabilities for 2025?"
-
-# Verify a claim
-/conclave verify "All database queries use parameterized statements"
-```
-
-### Research & Current Info (Perplexity)
-
-```bash
-# Get current information with sources
-/conclave pplx "Is this npm package actively maintained?"
-
-# Research best practices with citations
-/conclave perplexity "What are the recommended JWT token expiration times in 2025?"
-
-# Fact-check a claim
-/conclave pplx "Does React 19 remove support for class components?"
-```
-
-### Autonomous Mode
-
-```bash
-# Let Gemini run without prompts
-/conclave . --auto --security
-
-# Full autonomous with OpenAI
-/conclave openai . --auto --detailed
-```
-
-### Conclave (Multi-Model)
-
-```bash
-# Get consensus on architecture
-/conclave --all . --arch
-
-# Security verification with multiple opinions
-/conclave --all verify "This code is safe from SQL injection"
-
-# Compare complex analysis
-/conclave --all ask "What's the biggest technical debt in this codebase?"
-```
-
-### Saving Output
-
-```bash
-# Save analysis to file
-/conclave . --detailed --save analysis.md
-
-# Save conclave results
-/conclave --all . --security --save security-audit.md
-```
-
----
-
-## Error Handling
-
-| Error | Action |
-|-------|--------|
-| CLI not found | Provide install instructions with links |
-| API key missing (interactive) | Prompt for key, offer to save to config |
-| API key missing (non-interactive) | Error with config setup instructions |
-| Rate limited | Suggest waiting or reducing scope |
-| Timeout | Suggest narrower target or `--brief` |
-| Provider unavailable | Fall back to an available provider |
-| Config file invalid | Show YAML parse error, suggest fix |
-
-### Missing API Key Flow
-
-**Interactive Mode:**
-```
-Gemini API key not found.
-
-Options:
-  1. Enter key now (will save to ~/.claude/conclave.yaml)
-  2. Set env: export GEMINI_API_KEY="your-key"
-  3. Edit config: ~/.claude/conclave.yaml
-
-Enter key (or 'skip' for OpenAI only): ___
-```
-
-**Non-Interactive Mode (--quiet, CI):**
-```
-ERROR: Gemini API key not found.
-
-Configure in ~/.claude/conclave.yaml:
-  api_keys:
-    gemini: "your-key"
-
-Or set: export GEMINI_API_KEY="your-key"
-```
-
----
-
-## Migration from /g-slave
-
-The `/conclave` command is the successor to `/g-slave` with expanded capabilities:
-
-| Old | New | Notes |
-|-----|-----|-------|
-| `/g-slave .` | `/conclave .` | Same behavior |
-| `/g-slave ask "..."` | `/conclave ask "..."` | Same behavior |
-| `/g-slave --raw` | `/conclave --raw` | Same behavior |
-| (n/a) | `/conclave openai .` | NEW: OpenAI support |
-| (n/a) | `/conclave --all .` | NEW: Multi-model consensus |
-| (n/a) | `/conclave --thinking` | NEW: Extended reasoning |
-
----
-
-## Remember
-
-1. **Claude commands, LLMs execute.** You delegate the heavy lifting and receive distilled intel.
-2. **Read-only always.** Never let conclaves modify files (unless explicitly autonomous).
-3. **Default to strongest.** Use best available model unless user specifies otherwise.
-4. **Distill by default.** Only pass raw output when requested.
-5. **Conclave for confidence.** When stakes are high, get multiple opinions.
-
----
+## Repository
 
 
-## Sources
+**GitHub:** [https://github.com/0xDarkMatter/conclave](https://github.com/0xDarkMatter/conclave)
 
 
-- [Gemini CLI Headless Mode](https://geminicli.com/docs/cli/headless/)
-- [Gemini CLI GitHub](https://github.com/google-gemini/gemini-cli)
-- [OpenAI Codex CLI Reference](https://developers.openai.com/codex/cli/reference)
-- [OpenAI Codex GitHub](https://github.com/openai/codex)
-- [Perplexity API Docs](https://docs.perplexity.ai/)
-- [Perplexity Model Cards](https://docs.perplexity.ai/guides/model-cards)
-- [Grok CLI GitHub](https://github.com/vibe-kit/grok-cli)
-- [xAI API](https://x.ai)
-- [Zhipu AI (GLM) API](https://open.bigmodel.cn)
+See the repository for full documentation, configuration options, and provider support.
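+
+A typical install, assuming the repository root is the Go main module (see the repository README for the authoritative command):
+
+```bash
+go install github.com/0xDarkMatter/conclave@latest
+```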