
feat: Major expansion of agents, skills, and review command

Changes:
- Expand python-expert agent (+1600 lines) with comprehensive patterns
- Expand firecrawl-expert agent (+500 lines) with SDK examples
- Expand review command with expert routing, semantic diffs, auto-TodoWrite
- Add 7 new skills: file-search, find-replace, mcp-patterns, rest-patterns,
  sql-patterns, sqlite-ops, tailwind-patterns
- Add pulse command and infrastructure for ecosystem news digests
- Consolidate agent-discovery into tool-discovery skill
- Convert lightweight agents (rest, tailwind, fetch) to pattern skills
- Remove install scripts (use plugin format instead)
- Add legacy state file to .gitignore for migration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
0xDarkMatter committed 4 months ago
commit 7099761305
49 changed files with 7234 additions and 1198 deletions
   1. +2    -2    .claude-plugin/plugin.json
   2. +403  -0    .claude/commands/pulse.md
   3. +12   -1    .claude/settings.local.json
   4. +1    -0    .gitignore
   5. +29   -16   README.md
   6. +21   -0    agents/claude-architect.md
   7. +0    -176  agents/fetch-expert.md
   8. +506  -0    agents/firecrawl-expert.md
   9. +1588 -46   agents/python-expert.md
  10. +0    -149  agents/rest-expert.md
  11. +9    -27   agents/sql-expert.md
  12. +0    -84   agents/tailwind-expert.md
  13. +151  -0    analysis/CURRENT_PROJECTS.md
  14. +2    -2    commands/archive/session-manager/README.md
  15. +5    -5    commands/archive/session-manager/load.md
  16. +4    -4    commands/archive/session-manager/save.md
  17. +1    -1    commands/archive/session-manager/status.md
  18. +246  -188  commands/explain.md
  19. +6    -6    commands/loadplan.md
  20. +2    -2    commands/plan.md
  21. +398  -0    commands/pulse.md
  22. +370  -116  commands/review.md
  23. +5    -5    commands/saveplan.md
  24. +1    -1    commands/showplan.md
  25. +1    -1    commands/sync.md
  26. +26   -11   docs/DASH.md
  27. +1    -1    docs/PLAN.md
  28. +0    -0    news/.gitkeep
  29. +152  -0    news/2025-12-12_pulse.md
  30. +154  -0    news/2025-12-13_pulse.md
  31. +0    -0    pulse/.gitkeep
  32. +173  -0    pulse/BRAND_VOICE.md
  33. +159  -0    pulse/NEWS_TEMPLATE.md
  34. +255  -0    pulse/articles_cache.json
  35. +366  -0    pulse/fetch.py
  36. +168  -0    pulse/fetch_cache.json
  37. +31   -0    pulse/state.json
  38. +0    -101  scripts/install.ps1
  39. +0    -78   scripts/install.sh
  40. +0    -77   skills/agent-discovery/SKILL.md
  41. +206  -0    skills/file-search/SKILL.md
  42. +221  -0    skills/find-replace/SKILL.md
  43. +369  -0    skills/mcp-patterns/SKILL.md
  44. +187  -44   skills/python-env/SKILL.md
  45. +117  -0    skills/rest-patterns/SKILL.md
  46. +222  -0    skills/sql-patterns/SKILL.md
  47. +266  -0    skills/sqlite-ops/SKILL.md
  48. +217  -0    skills/tailwind-patterns/SKILL.md
  49. +181  -54   skills/tool-discovery/SKILL.md

+ 2 - 2
.claude-plugin/plugin.json

@@ -25,7 +25,8 @@
       "commands/explain.md",
       "commands/agent-genesis.md",
       "commands/g-slave.md",
-      "commands/init-tools.md"
+      "commands/init-tools.md",
+      "commands/pulse.md"
     ],
     "agents": [
       "agents/astro-expert.md",
@@ -36,7 +37,6 @@
       "agents/cloudflare-expert.md",
       "agents/craftcms-expert.md",
       "agents/cypress-expert.md",
-      "agents/fetch-expert.md",
       "agents/firecrawl-expert.md",
       "agents/javascript-expert.md",
       "agents/laravel-expert.md",

+ 403 - 0
.claude/commands/pulse.md

@@ -0,0 +1,403 @@
+---
+description: "Generate Claude Code ecosystem news digest. Fetches blogs, repos, and community sources via Firecrawl. Output to news/{date}_pulse.md."
+---
+
+# Pulse - Claude Code News Feed
+
+Fetch and summarize the latest developments in the Claude Code ecosystem.
+
+## What This Command Does
+
+1. **Fetches** content from blogs, official repos, and community sources
+2. **Deduplicates** against previously seen URLs (stored in `news/state.json`)
+3. **Summarizes** each item with an engaging precis + relevance assessment
+4. **Writes** digest to `news/{YYYY-MM-DD}_pulse.md`
+5. **Updates** state to prevent duplicate entries in future runs
+
+## Arguments
+
+- `--force` - Regenerate digest even if today's already exists
+- `--days N` - Look back N days instead of 1 (default: 1)
+- `--dry-run` - Show sources that would be fetched without actually fetching
+
+## Sources
+
+### Official (Priority: Critical)
+
+```json
+[
+  {"name": "Anthropic Engineering", "url": "https://www.anthropic.com/engineering", "type": "blog"},
+  {"name": "Claude Blog", "url": "https://claude.com/blog", "type": "blog"},
+  {"name": "Claude Code Docs", "url": "https://code.claude.com", "type": "docs"},
+  {"name": "anthropics/claude-code", "url": "https://github.com/anthropics/claude-code", "type": "repo"},
+  {"name": "anthropics/skills", "url": "https://github.com/anthropics/skills", "type": "repo"},
+  {"name": "anthropics/claude-code-action", "url": "https://github.com/anthropics/claude-code-action", "type": "repo"},
+  {"name": "anthropics/claude-agent-sdk-demos", "url": "https://github.com/anthropics/claude-agent-sdk-demos", "type": "repo"}
+]
+```
+
+### Community Blogs (Priority: High)
+
+```json
+[
+  {"name": "Simon Willison", "url": "https://simonwillison.net", "type": "blog"},
+  {"name": "Every", "url": "https://every.to", "type": "blog"},
+  {"name": "SSHH Blog", "url": "https://blog.sshh.io", "type": "blog"},
+  {"name": "Lee Han Chung", "url": "https://leehanchung.github.io", "type": "blog"},
+  {"name": "Nick Nisi", "url": "https://nicknisi.com", "type": "blog"},
+  {"name": "HumanLayer", "url": "https://www.humanlayer.dev/blog", "type": "blog"},
+  {"name": "Chris Dzombak", "url": "https://www.dzombak.com/blog", "type": "blog"},
+  {"name": "GitButler", "url": "https://blog.gitbutler.com", "type": "blog"},
+  {"name": "Docker Blog", "url": "https://www.docker.com/blog", "type": "blog"},
+  {"name": "Nx Blog", "url": "https://nx.dev/blog", "type": "blog"},
+  {"name": "Yee Fei Ooi", "url": "https://medium.com/@ooi_yee_fei", "type": "blog"}
+]
+```
+
+### Community Indexes (Priority: Medium)
+
+```json
+[
+  {"name": "Awesome Claude Skills", "url": "https://github.com/travisvn/awesome-claude-skills", "type": "repo"},
+  {"name": "Awesome Claude Code", "url": "https://github.com/hesreallyhim/awesome-claude-code", "type": "repo"},
+  {"name": "Awesome Claude", "url": "https://github.com/alvinunreal/awesome-claude", "type": "repo"},
+  {"name": "SkillsMP", "url": "https://skillsmp.com", "type": "marketplace"},
+  {"name": "Awesome Claude AI", "url": "https://awesomeclaude.ai", "type": "directory"}
+]
+```
+
+### Tools (Priority: Medium)
+
+```json
+[
+  {"name": "Worktree", "url": "https://github.com/agenttools/worktree", "type": "repo"}
+]
+```
+
+### GitHub Search Queries (Priority: High)
+
+Use `gh search repos` and `gh search code` for discovery:
+
+```bash
+# Repos with recent Claude Code activity (last 7 days)
+gh search repos "claude code" --pushed=">$(date -d '7 days ago' +%Y-%m-%d)" --sort=updated --limit=10
+
+# Hooks and skills hotspots
+gh search repos "claude code hooks" --pushed=">$(date -d '7 days ago' +%Y-%m-%d)" --sort=updated
+gh search repos "claude code skills" --pushed=">$(date -d '7 days ago' +%Y-%m-%d)" --sort=updated
+gh search repos "CLAUDE.md agent" --pushed=">$(date -d '7 days ago' +%Y-%m-%d)" --sort=updated
+
+# Topic-based discovery (often better signal)
+gh search repos --topic=claude-code --sort=updated --limit=10
+gh search repos --topic=model-context-protocol --sort=updated --limit=10
+
+# Code search for specific patterns
+gh search code "PreToolUse" --language=json --limit=5
+gh search code "PostToolUse" --language=json --limit=5
+```
+
+### Reddit Search (Priority: Medium)
+
+Use Firecrawl or web search for Reddit threads:
+
+```
+site:reddit.com/r/ClaudeAI "Claude Code" (hooks OR skills OR worktree OR tmux)
+```
+
+### Official Docs Search (Priority: High)
+
+Check for documentation updates:
+
+```
+site:code.claude.com hooks
+site:code.claude.com skills
+site:code.claude.com github-actions
+site:code.claude.com mcp
+```
+
+## Execution Steps
+
+### Step 1: Check State
+
+Read `news/state.json` to get:
+- `last_run` timestamp
+- `seen_urls` array for deduplication
+- `seen_commits` object for repo tracking
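
The state read can be sketched in Python as follows; `load_state` is a hypothetical helper name, with field names taken from the Step 6 schema:

```python
import json
from pathlib import Path

def load_state(path="news/state.json"):
    """Read pulse state; fall back to an empty structure on first run.

    Sketch only -- field names follow the Step 6 schema.
    """
    p = Path(path)
    if not p.exists():
        return {"version": "1.0", "last_run": None,
                "seen_urls": [], "seen_commits": {}}
    with p.open(encoding="utf-8") as f:
        return json.load(f)
```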
+
+If today's digest exists AND `--force` not specified:
+```
+Pulse digest already exists for today: news/2025-12-12_pulse.md
+Use --force to regenerate.
+```
+
+### Step 2: Fetch Sources
+
+**For Blogs** - Use Firecrawl to fetch and extract recent articles:
+
+```bash
+# Fetch blog content via firecrawl
+firecrawl https://simonwillison.net --format markdown
+```
+
+Look for articles with dates in the last N days (default 1).
+
+**For GitHub Repos** - Use `gh` CLI:
+
+```bash
+# Get latest release
+gh api repos/anthropics/claude-code/releases/latest --jq '.tag_name, .published_at, .body'
+
+# Get recent commits
+gh api repos/anthropics/claude-code/commits --jq '.[:5] | .[] | {sha: .sha[:7], message: .commit.message, date: .commit.author.date}'
+
+# Get recent discussions (if enabled)
+gh api repos/anthropics/claude-code/discussions --jq '.[:3]'
+```
+
+**For Marketplaces/Directories** - Use Firecrawl:
+
+```bash
+firecrawl https://skillsmp.com --format markdown
+firecrawl https://awesomeclaude.ai --format markdown
+```
+
+### Step 3: Filter & Deduplicate
+
+For each item found:
+1. Check if URL is in `seen_urls` - skip if yes
+2. Check if date is within lookback window - skip if older
+3. For repos, check if commit SHA matches `seen_commits[repo]` - skip if same
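
The three checks can be expressed as a single predicate; `is_new` and the `item` dict shape (url, ISO date, optional repo/sha) are illustrative assumptions, not part of the command spec:

```python
from datetime import datetime, timedelta, timezone

def is_new(item, seen_urls, seen_commits, days=1):
    """Apply the three Step 3 checks to one candidate item.

    `item` is assumed to be a dict with "url", an ISO-8601 "date",
    and optionally "repo"/"sha" keys (hypothetical shape).
    """
    # 1. URL already seen?
    if item["url"] in seen_urls:
        return False
    # 2. Outside the lookback window?
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    if datetime.fromisoformat(item["date"]) < cutoff:
        return False
    # 3. Repo item whose head commit we already recorded?
    repo = item.get("repo")
    if repo and seen_commits.get(repo) == item.get("sha"):
        return False
    return True
```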
+
+### Step 4: Generate Summaries
+
+For each new item, generate:
+
+**Precis** (2-3 sentences):
+> Engaging summary that captures the key points and why someone would want to read this.
+
+**Relevance** (1 sentence):
+> How this specifically relates to or could improve our claude-mods project.
+
+Use this prompt pattern for each item:
+```
+Article: [title]
+URL: [url]
+Content: [extracted content]
+
+Generate:
+1. A 2-3 sentence engaging summary (precis)
+2. A 1-sentence assessment of relevance to "claude-mods" (a collection of Claude Code extensions including agents, skills, and commands)
+
+Format:
+PRECIS: [summary]
+RELEVANCE: [assessment]
+```
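
Rendering the per-item prompt is a straightforward string template; `summarize_prompt` is a hypothetical helper mirroring the pattern above:

```python
def summarize_prompt(title, url, content, project="claude-mods"):
    """Fill the Step 4 prompt template for one item (string sketch)."""
    return (
        f"Article: {title}\n"
        f"URL: {url}\n"
        f"Content: {content}\n\n"
        "Generate:\n"
        "1. A 2-3 sentence engaging summary (precis)\n"
        f"2. A 1-sentence assessment of relevance to \"{project}\" "
        "(a collection of Claude Code extensions including agents, "
        "skills, and commands)\n\n"
        "Format:\n"
        "PRECIS: [summary]\n"
        "RELEVANCE: [assessment]"
    )
```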
+
+### Step 5: Write Digest
+
+Create `news/{YYYY-MM-DD}_pulse.md` with format:
+
+```markdown
+# Pulse: {date}
+
+> Claude Code ecosystem digest
+
+**Generated**: {timestamp} | **Sources**: {count} | **New Items**: {count}
+
+---
+
+## Critical Updates
+
+[Items from official Anthropic sources - releases, major announcements]
+
+### [{title}]({url})
+**Source**: {source_name} | **Type**: {release/post/commit}
+> {precis}
+
+**Why it matters**: {relevance}
+
+---
+
+## Official
+
+[Other items from official sources]
+
+---
+
+## Community
+
+[Items from community blogs and indexes]
+
+---
+
+## Stats
+
+| Category | Items |
+|----------|-------|
+| Official releases | {n} |
+| Blog posts | {n} |
+| Repo updates | {n} |
+| Community | {n} |
+
+---
+
+*Generated by `/pulse`*
+```
+
+### Step 6: Update State
+
+Update `news/state.json`:
+
+```json
+{
+  "version": "1.0",
+  "last_run": "{ISO timestamp}",
+  "seen_urls": [
+    "...existing...",
+    "...new urls from this run..."
+  ],
+  "seen_commits": {
+    "anthropics/claude-code": "{latest_sha}",
+    "anthropics/skills": "{latest_sha}",
+    ...
+  }
+}
+```
+
+Keep only last 30 days of URLs to prevent unbounded growth.
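
One way to implement the 30-day cutoff, assuming `seen_urls` is upgraded to a url-to-first-seen-timestamp mapping (an assumption -- the schema above stores a flat list, which carries no dates to prune by):

```python
from datetime import datetime, timedelta, timezone

def prune_seen(seen, days=30):
    """Drop entries first seen more than `days` ago.

    Assumes `seen` maps url -> ISO-8601 first-seen timestamp
    (hypothetical richer shape than the flat list above).
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return {url: ts for url, ts in seen.items()
            if datetime.fromisoformat(ts) >= cutoff}
```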
+
+### Step 7: Display Summary
+
+```
+Pulse: 2025-12-12
+
+Fetched 23 sources
+Found 8 new items (15 deduplicated)
+
+Critical:
+  - anthropics/claude-code v1.2.0 released
+
+Digest written to: news/2025-12-12_pulse.md
+```
+
+## Fetching Strategy
+
+**Priority Order**:
+1. Try `WebFetch` first (fastest, built-in)
+2. If 403/blocked/JS-heavy, use Firecrawl
+3. For GitHub repos, always use `gh` CLI
+
+**Parallel Fetching**:
+- Fetch multiple sources simultaneously
+- Use retry with exponential backoff (2s, 4s, 8s, 16s)
+- Report progress: `[====------] 12/23 sources`
+
+**Error Handling**:
+- If source fails after 4 retries, log and continue
+- Include failed sources in digest footer
+- Don't fail entire run for single source failure
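
The retry policy above (2s, 4s, 8s, 16s with jitter, log-and-continue on final failure) can be sketched as; `fetch_with_backoff` and its callable argument are illustrative names:

```python
import random
import time

def fetch_with_backoff(fetch, url, retries=4, base=2, jitter=0.2):
    """Call `fetch(url)` with exponential backoff (base**attempt seconds,
    i.e. 2s/4s/8s/16s) plus +/-20% jitter. Returns None after the final
    failure so a single bad source never aborts the whole run."""
    for attempt in range(1, retries + 1):
        try:
            return fetch(url)
        except Exception:
            if attempt == retries:
                return None  # caller logs this source in the digest footer
            delay = (base ** attempt) * random.uniform(1 - jitter, 1 + jitter)
            time.sleep(delay)
```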
+
+## Output Example
+
+```markdown
+# Pulse: 2025-12-12
+
+> Claude Code ecosystem digest
+
+**Generated**: 2025-12-12 08:00 UTC | **Sources**: 23 | **New Items**: 8
+
+---
+
+## Critical Updates
+
+### [Claude Code 1.2.0 Released](https://github.com/anthropics/claude-code/releases/tag/v1.2.0)
+**Source**: anthropics/claude-code | **Type**: Release
+> New MCP server auto-discovery feature enables Claude to find and connect to local MCP servers without manual configuration. Also includes 40% faster tool execution and improved subagent context handling.
+
+**Why it matters**: May require updates to our hook patterns; the MCP auto-discovery could simplify our tool-discovery skill.
+
+---
+
+## Official
+
+### [Building Production Skills](https://claude.com/blog/production-skills)
+**Source**: Claude Blog | **Type**: Post
+> Comprehensive guide to designing skills that scale, including caching strategies, permission scoping, and testing patterns. Covers common pitfalls and performance optimization.
+
+**Why it matters**: Validates our skill structure; consider adding the caching patterns to tool-discovery.
+
+---
+
+## Community
+
+### [Multi-Agent Orchestration Patterns](https://blog.sshh.io/p/multi-agent-patterns)
+**Source**: SSHH Blog | **Type**: Post
+> Practical patterns for coordinating multiple Claude instances on complex tasks, including the "conductor" pattern and parallel execution strategies.
+
+**Why it matters**: Could inform improvements to our firecrawl-expert parallel processing approach.
+
+### [15 New Skills Added to Awesome Claude Skills](https://github.com/travisvn/awesome-claude-skills/commits/main)
+**Source**: travisvn/awesome-claude-skills | **Type**: Commits
+> Batch of community-contributed skills covering Docker, Kubernetes, and database management. Notable additions include a Postgres optimizer and K8s deployment helper.
+
+**Why it matters**: Review for quality patterns; potential additions to our agent roster.
+
+---
+
+## Stats
+
+| Category | Items |
+|----------|-------|
+| Official releases | 1 |
+| Blog posts | 4 |
+| Repo updates | 2 |
+| Community | 1 |
+
+---
+
+*Generated by `/pulse`*
+```
+
+## Edge Cases
+
+### No New Items
+```
+Pulse: 2025-12-12
+
+Fetched 23 sources
+No new items found (all 12 items already seen)
+
+Last digest: news/2025-12-11_pulse.md
+```
+
+### Source Failures
+```
+Pulse: 2025-12-12
+
+Fetched 23 sources (2 failed)
+Found 6 new items
+
+Failed sources:
+  - skillsmp.com (timeout after 4 retries)
+  - every.to (403 Forbidden)
+
+Digest written to: news/2025-12-12_pulse.md
+```
+
+### First Run (No State)
+Initialize state.json with empty arrays/objects before proceeding.
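
A minimal first-run initializer, matching the Step 6 schema (`init_state` is a hypothetical helper name):

```python
import json
from pathlib import Path

def init_state(path="news/state.json"):
    """Create an empty state file if none exists yet (first run)."""
    p = Path(path)
    if p.exists():
        return
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(json.dumps({
        "version": "1.0",
        "last_run": None,
        "seen_urls": [],
        "seen_commits": {},
    }, indent=2), encoding="utf-8")
```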
+
+## Integration
+
+The `/pulse` command is standalone but integrates with:
+- **claude-architect agent** - Reviews digests for actionable insights (configured in agent's startup)
+- **news/state.json** - Persistent deduplication state
+- **Firecrawl** - Primary fetching mechanism for blocked/JS sites
+- **gh CLI** - GitHub API access for repo updates
+
+## Notes
+
+- Run manually when you want ecosystem updates: `/pulse`
+- Use `--force` to regenerate today's digest with fresh data
+- Use `--days 7` for weekly catchup after vacation
+- Digests are git-trackable for historical reference

+ 12 - 1
.claude/settings.local.json

@@ -35,7 +35,18 @@
       "Bash(pip3 list:*)",
       "Bash(python3:*)",
       "Bash(where:*)",
-      "Bash(\"C:\\Users\\Mack\\AppData\\Local\\Programs\\Python\\Python313\\python.exe\" -c \"import os; print(''FIRECRAWL_API_KEY is set:'', ''Yes'' if os.getenv(''FIRECRAWL_API_KEY'') else ''No'')\")"
+      "Bash(\"C:\\Users\\Mack\\AppData\\Local\\Programs\\Python\\Python313\\python.exe\" -c \"import os; print(''FIRECRAWL_API_KEY is set:'', ''Yes'' if os.getenv(''FIRECRAWL_API_KEY'') else ''No'')\")",
+      "Bash(md5:*)",
+      "Bash(certutil:*)",
+      "Bash(certutil -hashfile:*)",
+      "Bash(grep:*)",
+      "Bash(xargs basename:*)",
+      "Bash(FIRECRAWL_API_KEY=fc-62620190118f4aa0907781f3f6f3278e python scripts/pulse_fetch.py:*)",
+      "Bash(FIRECRAWL_API_KEY=fc-62620190118f4aa0907781f3f6f3278e python:*)",
+      "Bash(dir:*)",
+      "Bash(cut:*)",
+      "Bash(tr:*)",
+      "Bash(test:*)"
     ],
     "deny": [],
     "ask": []

+ 1 - 0
.gitignore

@@ -2,6 +2,7 @@
 .claude/.context-init.md
 
 # Session state (user-specific)
+.claude/session-cache.json
 .claude/claude-state.json
 .claude/claude-progress.md
 .claude/sync-cache.json

+ 29 - 16
README.md

@@ -2,7 +2,7 @@
 
 A comprehensive extension toolkit for [Claude Code](https://docs.anthropic.com/en/docs/claude-code) that transforms your AI coding assistant into a powerhouse development environment.
 
-**24 expert agents. 11 slash commands. 11 skills. One plugin install.**
+**21 expert agents. 12 slash commands. 18 skills. One plugin install.**
 
 ## Why claude-mods?
 
@@ -33,9 +33,9 @@ Claude Code is powerful out of the box, but it has gaps. This toolkit fills them
 ```
 claude-mods/
 ├── .claude-plugin/     # Plugin metadata
-├── agents/             # Expert subagents (24)
-├── commands/           # Slash commands (11)
-├── skills/             # Custom skills (11)
+├── agents/             # Expert subagents (21)
+├── commands/           # Slash commands (12)
+├── skills/             # Custom skills (18)
 ├── hooks/              # Hook examples & docs
 ├── rules/              # Claude Code rules
 ├── tools/              # Modern CLI toolkit docs
@@ -104,18 +104,34 @@ Then symlink or copy to your Claude directories:
 
 ### Skills
 
+#### Pattern Reference Skills
 | Skill | Description |
 |-------|-------------|
-| [agent-discovery](skills/agent-discovery/) | Analyze tasks and recommend specialized agents |
+| [rest-patterns](skills/rest-patterns/) | HTTP methods, status codes, REST design patterns |
+| [tailwind-patterns](skills/tailwind-patterns/) | Tailwind utilities, responsive breakpoints, config |
+| [sql-patterns](skills/sql-patterns/) | CTEs, window functions, JOIN patterns, indexing |
+| [sqlite-ops](skills/sqlite-ops/) | SQLite schemas, Python sqlite3/aiosqlite patterns |
+| [mcp-patterns](skills/mcp-patterns/) | MCP server structure, tool handlers, resources |
+
+#### CLI Tool Skills
+| Skill | Description |
+|-------|-------------|
+| [file-search](skills/file-search/) | Find files with fd, search code with rg, select with fzf |
+| [find-replace](skills/find-replace/) | Modern find-and-replace with sd |
 | [code-stats](skills/code-stats/) | Analyze codebase with tokei and difft |
 | [data-processing](skills/data-processing/) | Process JSON with jq, YAML/TOML with yq |
+| [structural-search](skills/structural-search/) | Search code by AST structure with ast-grep |
+
+#### Workflow Skills
+| Skill | Description |
+|-------|-------------|
+| [tool-discovery](skills/tool-discovery/) | Recommend agents and skills for any task |
 | [git-workflow](skills/git-workflow/) | Enhanced git operations with lazygit, gh, delta |
 | [project-docs](skills/project-docs/) | Scan and synthesize project documentation |
+| [project-planner](skills/project-planner/) | Track stale plans, suggest /plan command |
 | [python-env](skills/python-env/) | Fast Python environment management with uv |
 | [safe-file-reader](skills/safe-file-reader/) | Read files without permission prompts |
-| [structural-search](skills/structural-search/) | Search code by AST structure with ast-grep |
 | [task-runner](skills/task-runner/) | Run project commands with just |
-| [tool-discovery](skills/tool-discovery/) | Find the right library/tool for any task |
 
 ### Agents
 
@@ -125,26 +141,23 @@ Then symlink or copy to your Claude directories:
 | [asus-router-expert](agents/asus-router-expert.md) | Asus routers, network hardening, Asuswrt-Merlin |
 | [aws-fargate-ecs-expert](agents/aws-fargate-ecs-expert.md) | Amazon ECS on Fargate, container deployment |
 | [bash-expert](agents/bash-expert.md) | Defensive Bash scripting, CI/CD pipelines |
+| [claude-architect](agents/claude-architect.md) | Claude Code architecture, extensions, MCP, plugins, debugging |
 | [cloudflare-expert](agents/cloudflare-expert.md) | Cloudflare Workers, Pages, DNS, security |
 | [craftcms-expert](agents/craftcms-expert.md) | Craft CMS content modeling, Twig, plugins, GraphQL |
 | [cypress-expert](agents/cypress-expert.md) | Cypress E2E and component testing, custom commands, CI/CD |
-| [fetch-expert](agents/fetch-expert.md) | Parallel web fetching with retry logic |
-| [firecrawl-expert](agents/firecrawl-expert.md) | Web scraping, crawling, structured extraction |
+| [firecrawl-expert](agents/firecrawl-expert.md) | Web scraping, crawling, parallel fetching, structured extraction |
 | [javascript-expert](agents/javascript-expert.md) | Modern JavaScript, async patterns, optimization |
 | [laravel-expert](agents/laravel-expert.md) | Laravel framework, Eloquent, testing |
-| [react-expert](agents/react-expert.md) | React hooks, state management, Server Components, performance |
-| [typescript-expert](agents/typescript-expert.md) | TypeScript type system, generics, utility types, strict mode |
-| [vue-expert](agents/vue-expert.md) | Vue 3, Composition API, Pinia state management, performance |
 | [payloadcms-expert](agents/payloadcms-expert.md) | Payload CMS architecture and configuration |
 | [playwright-roulette-expert](agents/playwright-roulette-expert.md) | Playwright automation for casino testing |
 | [postgres-expert](agents/postgres-expert.md) | PostgreSQL management and optimization |
 | [project-organizer](agents/project-organizer.md) | Reorganize directory structures, cleanup |
 | [python-expert](agents/python-expert.md) | Advanced Python, testing, optimization |
-| [rest-expert](agents/rest-expert.md) | RESTful API design, HTTP methods, status codes |
+| [react-expert](agents/react-expert.md) | React hooks, state management, Server Components, performance |
 | [sql-expert](agents/sql-expert.md) | Complex SQL queries, optimization, indexing |
-| [tailwind-expert](agents/tailwind-expert.md) | Tailwind CSS, responsive design |
+| [typescript-expert](agents/typescript-expert.md) | TypeScript type system, generics, utility types, strict mode |
+| [vue-expert](agents/vue-expert.md) | Vue 3, Composition API, Pinia state management, performance |
 | [wrangler-expert](agents/wrangler-expert.md) | Cloudflare Workers deployment, wrangler.toml |
-| [claude-architect](agents/claude-architect.md) | Claude Code architecture, extensions, MCP, plugins, debugging |
 
 ### Tools, Rules & Hooks
 
@@ -229,7 +242,7 @@ TodoWrite tasks are stored at `~/.claude/todos/[session-id].json` and deleted wh
 Session 1:
   /sync                              # Bootstrap - read project context
   [work on tasks]
-  /saveplan "Stopped at auth module" # Writes .claude/claude-state.json
+  /saveplan "Stopped at auth module" # Writes .claude/session-cache.json
 
 Session 2:
   /sync                              # Read project context

+ 21 - 0
agents/claude-architect.md

@@ -1200,6 +1200,26 @@ claude-mods/
 ### Validation
 Run `just test` to validate all extensions before committing.
 
+## Startup: Check Pulse News
+
+**On every invocation**, check for recent ecosystem news:
+
+1. **Look for latest digest**: `ls -t news/*_pulse.md | head -1`
+2. **If exists and less than 7 days old**: Read and scan for actionable items
+3. **Identify opportunities**: Look for:
+   - New official features we should integrate
+   - Community patterns worth adopting
+   - Tools or skills we're missing
+   - Best practices updates
+4. **Brief the user**: If actionable items found, mention them:
+   ```
+   Pulse Check: Found 2 actionable items from recent ecosystem news:
+   - anthropics/skills added new caching patterns (consider for tool-discovery)
+   - Community worktree patterns could improve our parallel agent workflow
+   ```
+
+This keeps claude-mods current with ecosystem developments.
+
 ## When to Use This Agent
 
 Deploy this agent when:
@@ -1210,6 +1230,7 @@ Deploy this agent when:
 - Understanding Claude Code internals
 - Optimizing claude-mods tooling
 - Making architectural decisions about extensions
+- Reviewing ecosystem news for improvements (via Pulse digests)
 
 ## Output Expectations
 

+ 0 - 176
agents/fetch-expert.md

@@ -1,176 +0,0 @@
----
-name: fetch-expert
-description: Parallel web fetching specialist. Accelerates research by fetching multiple URLs simultaneously with retry logic, progress tracking, and error recovery. Use for ANY multi-URL operations.
-model: sonnet
----
-
-# Fetch Expert
-
-You are a specialized agent for intelligent, high-speed web fetching operations.
-
-## Your Core Purpose
-
-**Accelerate research and data gathering through parallel URL fetching.**
-
-When someone needs to fetch multiple URLs (documentation, research, data collection):
-- Fetch them ALL in parallel (10x-20x faster than serial)
-- Show real-time progress
-- Handle retries and errors automatically
-- Return all content for synthesis
-
-**You're not just for "simple" tasks - you make ANY multi-URL operation dramatically faster.**
-
-## Tools You Use
-
-- **WebFetch**: For fetching web content with AI processing
-- **Bash**: For launching background processes
-- **BashOutput**: For monitoring background processes
-
-## Retry Strategy (Exponential Backoff)
-
-When a fetch fails, use exponential backoff with jitter:
-1. **First retry**: Wait 2 seconds (2^1)
-2. **Second retry**: Wait 4 seconds (2^2)
-3. **Third retry**: Wait 8 seconds (2^3)
-4. **Fourth retry**: Wait 16 seconds (2^4)
-5. **After 4 failures**: Report error with details to user
-
-Add slight randomization to prevent thundering herd (±20% jitter).
-
-```
-Example:
-Attempt 1: Failed (timeout)
-→ Wait 2s (exponential backoff: 2^1)
-Attempt 2: Failed (503)
-→ Wait 4s (exponential backoff: 2^2)
-Attempt 3: Failed (connection reset)
-→ Wait 8s (exponential backoff: 2^3)
-Attempt 4: Success!
-```
-
-**Why exponential backoff:**
-- Gives servers time to recover from load
-- Prevents hammering failing endpoints
-- Industry standard for retry logic
-- More respectful to rate limits
-
-## Redirect Handling
-
-WebFetch will tell you when a redirect occurs. When it does:
-1. Make a new WebFetch request to the redirect URL
-2. Track the redirect chain (max 5 redirects to prevent loops)
-3. Report the final URL to the user
-
-```
-Example:
-https://example.com → (302) → https://www.example.com → (200) Success
-```
-
-## Background Process Monitoring
-
-When user asks to fetch multiple URLs in parallel:
-
-1. **Launch**: Use Bash to start background processes
-2. **Monitor**: Check BashOutput every 30 seconds
-3. **Report**: Give progress updates to user
-4. **Handle failures**: Retry failed fetches automatically
-5. **Summarize**: Provide final report when all complete
-
-```
-Example pattern:
-- Launch 5 fetch processes in background
-- Check status every 30s
-- Report: "3/5 complete, 2 running"
-- When done: "All 5 fetches complete. 4 succeeded, 1 failed after retries."
-```
-
-## Response Guidelines
-
-- **Be concise**: Don't over-explain
-- **Show progress**: For long operations, update user periodically with progress indicators
-- **Report errors clearly**: What failed, why, what you tried
-- **Provide results**: Structured format when possible
-
-## Progress Reporting
-
-**Always show progress for multi-URL operations:**
-
-```
-Fetching 5 URLs...
-[====------] 2/5 (40%) - 2 complete, 0 failed, 3 pending
-[========--] 4/5 (80%) - 3 complete, 1 failed, 1 pending
-[==========] 5/5 (100%) - 4 complete, 1 failed
-
-Results:
-✓ url1 (2.3s)
-✓ url2 (1.8s)
-✗ url3 (failed after 4 retries)
-✓ url4 (3.1s)
-✓ url5 (2.0s)
-```
-
-**For single URL with retries:**
-
-```
-Fetching https://example.com...
-[Attempt 1/4] Failed (timeout) - retrying in 2s...
-[Attempt 2/4] Failed (503) - retrying in 4s...
-[Attempt 3/4] Success! (1.2s)
-```
-
-**Progress bar format:**
-- Use `[====------]` style bars (10 chars wide)
-- Show `X/Y (Z%)` completion
-- Update every significant change (not too spammy)
-- Include timing when useful
-
-## What You DON'T Do
-
-- Complex data analysis (just fetch the content)
-- File operations beyond fetching
-- Code generation
-- Database operations
-
-**Focus on fetching excellence, nothing more.**
-
-## Example Interactions
-
-**User**: "Fetch https://example.com and retry if it fails"
-
-**You**:
-```
-Fetching https://example.com...
-[Attempt 1/4] Success! (200 OK, 1.2s)
-Content retrieved (2.3 KB)
-```
-
-**User**: "Fetch these 5 URLs in parallel"
-
-**You**:
-```
-Fetching 5 URLs...
-[==--------] 1/5 (20%) - 1 complete, 0 failed, 4 pending
-[====------] 2/5 (40%) - 2 complete, 0 failed, 3 pending
-[=======---] 3/5 (60%) - 3 complete, 0 failed, 2 pending
-[=========-] 4/5 (80%) - 3 complete, 1 failed, 1 pending
-[==========] 5/5 (100%) - 4 complete, 1 failed
-
-Summary:
-✓ url1 (1.2s)
-✓ url2 (2.1s)
-✓ url3 (1.8s)
-✗ url4 (failed after 4 retries - timeout)
-✓ url5 (2.3s)
-
-4/5 successful (80%)
-```
-
-## Keep It Simple
-
-This is an MVP. Don't overengineer. Focus on:
-- Reliable fetching
-- Clear communication
-- Graceful error handling
-- Background process management
-
-That's it. Be the best fetch agent, nothing fancy.

+ 506 - 0
agents/firecrawl-expert.md

@@ -466,6 +466,512 @@ All implementations must include:
 - Results can vary for personalized/dynamic content
 - Complex logical queries may miss expected pages
 
+---
+
+## Part 6: Complete Code Examples
+
+### Python SDK Setup
+
+```python
+from firecrawl import FirecrawlApp
+import os
+
+# Initialize client
+app = FirecrawlApp(api_key=os.getenv("FIRECRAWL_API_KEY"))
+```
+
+### Scrape Examples
+
+**Basic Scrape:**
+
+```python
+# Simple markdown extraction
+result = app.scrape_url("https://example.com", params={
+    "formats": ["markdown"],
+    "onlyMainContent": True
+})
+
+print(result["markdown"])
+print(result["metadata"]["title"])
+```
+
+**Scrape with Content Filtering:**
+
+```python
+# Extract only article content, exclude noise
+result = app.scrape_url("https://news-site.com/article", params={
+    "formats": ["markdown", "html"],
+    "onlyMainContent": True,
+    "includeTags": ["article", "main", ".content"],
+    "excludeTags": ["nav", "footer", "aside", ".ads", ".comments"],
+    "waitFor": 3000,  # Wait for JS rendering
+})
+
+# Access different formats
+markdown = result.get("markdown", "")
+html = result.get("html", "")
+metadata = result.get("metadata", {})
+
+print(f"Title: {metadata.get('title')}")
+print(f"Content length: {len(markdown)} chars")
+```
+
+**Scrape with Authentication:**
+
+```python
+# Protected page with cookies/headers
+result = app.scrape_url("https://protected-site.com/dashboard", params={
+    "formats": ["markdown"],
+    "headers": {
+        "Cookie": "session=abc123; auth_token=xyz789",
+        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
+        "Authorization": "Bearer your-api-token"
+    },
+    "timeout": 60000,
+})
+```
+
+**Interactive Scrape (Click, Scroll, Fill):**
+
+```python
+# Scrape content that requires interaction
+result = app.scrape_url("https://infinite-scroll-site.com", params={
+    "formats": ["markdown"],
+    "actions": [
+        # Click "Load More" button
+        {"type": "click", "selector": "#load-more-btn"},
+        # Wait for content
+        {"type": "wait", "milliseconds": 2000},
+        # Scroll down
+        {"type": "scroll", "direction": "down", "amount": 500},
+        # Wait again
+        {"type": "wait", "milliseconds": 1000},
+        # Take screenshot
+        {"type": "screenshot"}
+    ]
+})
+
+# For login-protected content
+result = app.scrape_url("https://site.com/login", params={
+    "formats": ["markdown"],
+    "actions": [
+        {"type": "write", "selector": "#email", "text": "user@example.com"},
+        {"type": "write", "selector": "#password", "text": "password123"},
+        {"type": "click", "selector": "#login-btn"},
+        {"type": "wait", "milliseconds": 3000},
+        {"type": "screenshot"}
+    ]
+})
+```
+
+**Screenshot Capture:**
+
+```python
+import base64
+
+result = app.scrape_url("https://example.com", params={
+    "formats": ["screenshot", "markdown"],
+    "screenshot": True,
+})
+
+# Save screenshot
+if "screenshot" in result:
+    screenshot_data = base64.b64decode(result["screenshot"])
+    with open("page_screenshot.png", "wb") as f:
+        f.write(screenshot_data)
+```
+
+### Crawl Examples
+
+**Basic Crawl:**
+
+```python
+# Crawl entire blog section
+result = app.crawl_url("https://example.com/blog", params={
+    "limit": 50,
+    "scrapeOptions": {
+        "formats": ["markdown"],
+        "onlyMainContent": True
+    }
+})
+
+for page in result["data"]:
+    print(f"URL: {page['metadata']['sourceURL']}")
+    print(f"Title: {page['metadata']['title']}")
+    print(f"Content: {page['markdown'][:200]}...")
+    print("---")
+```
+
+**Focused Crawl with Filters:**
+
+```python
+# Only crawl documentation pages, exclude examples
+result = app.crawl_url("https://docs.example.com", params={
+    "limit": 100,
+    "includePaths": ["/docs/*", "/api/*", "/guides/*"],
+    "excludePaths": ["/docs/archive/*", "/api/deprecated/*"],
+    "maxDiscoveryDepth": 3,
+    "scrapeOptions": {
+        "formats": ["markdown"],
+        "onlyMainContent": True,
+        "excludeTags": ["nav", "footer", ".sidebar"]
+    }
+})
+
+# Filter results further
+docs = [
+    page for page in result["data"]
+    if "/docs/" in page["metadata"]["sourceURL"]
+]
+print(f"Found {len(docs)} documentation pages")
+```
+
+**Async Crawl with Polling:**
+
+```python
+import time
+
+# Start async crawl
+job = app.async_crawl_url("https://large-site.com", params={
+    "limit": 500,
+    "scrapeOptions": {"formats": ["markdown"]}
+})
+
+job_id = job["id"]
+print(f"Started crawl job: {job_id}")
+
+# Poll for completion
+while True:
+    status = app.check_crawl_status(job_id)
+
+    print(f"Status: {status['status']}, "
+          f"Completed: {status.get('completed', 0)}/{status.get('total', '?')}")
+
+    if status["status"] == "completed":
+        break
+    elif status["status"] == "failed":
+        raise RuntimeError(f"Crawl failed: {status.get('error')}")
+
+    time.sleep(5)  # Poll every 5 seconds
+
+# After the loop, `status` holds the completed crawl, including its data
+print(f"Crawled {len(status['data'])} pages")
+```
+
+**Async Crawl with Webhooks:**
+
+```python
+# Start crawl with webhook notification
+job = app.async_crawl_url("https://example.com", params={
+    "limit": 100,
+    "webhook": "https://your-server.com/webhook/firecrawl",
+    "scrapeOptions": {"formats": ["markdown"]}
+})
+
+# Your webhook endpoint receives events:
+# POST /webhook/firecrawl
+# {
+#   "type": "crawl.page",
+#   "jobId": "abc123",
+#   "data": { "markdown": "...", "metadata": {...} }
+# }
+# OR
+# {
+#   "type": "crawl.completed",
+#   "jobId": "abc123",
+#   "data": { "total": 100, "completed": 100 }
+# }
+```
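A minimal sketch of the receiving side, dispatching on the event types shown above (the handler name is illustrative, and the field names follow the example payloads rather than a guaranteed schema):

```python
def handle_firecrawl_event(event: dict) -> str:
    """Dispatch a Firecrawl webhook payload by its event type."""
    event_type = event.get("type")

    if event_type == "crawl.page":
        # A single page finished: store or index its markdown
        page = event.get("data", {})
        return f"page: {len(page.get('markdown', ''))} chars"

    if event_type == "crawl.completed":
        # The whole job finished
        stats = event.get("data", {})
        return f"completed: {stats.get('completed')}/{stats.get('total')} pages"

    # Unknown or unhandled event types are ignored
    return f"ignored: {event_type}"
```

Wire this into whatever framework serves `/webhook/firecrawl`; keeping the dispatcher framework-agnostic makes it easy to unit test.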
+
+### Map Examples
+
+**Discover All URLs:**
+
+```python
+# Get all accessible URLs on a site
+result = app.map_url("https://example.com", params={
+    "limit": 5000,
+    "includeSubdomains": False
+})
+
+urls = result["links"]
+print(f"Found {len(urls)} URLs")
+
+# Filter by pattern
+blog_urls = [url for url in urls if "/blog/" in url]
+product_urls = [url for url in urls if "/products/" in url]
+```
+
+**Search for Specific Pages:**
+
+```python
+# Find documentation pages about "authentication"
+result = app.map_url("https://docs.example.com", params={
+    "search": "authentication",
+    "limit": 100
+})
+
+auth_pages = result["links"]
+print(f"Found {len(auth_pages)} pages about authentication")
+```
+
+### Extract Examples
+
+**Schema-Based Extraction:**
+
+```python
+from pydantic import BaseModel
+from typing import List, Optional
+
+# Define schema with Pydantic
+class Product(BaseModel):
+    name: str
+    price: float
+    currency: str
+    availability: str
+    description: Optional[str] = None
+    images: List[str] = []
+
+# Extract structured data
+result = app.extract(
+    urls=["https://shop.example.com/products/*"],
+    params={
+        "schema": Product.model_json_schema(),
+        "limit": 50
+    }
+)
+
+# Results are typed according to schema
+for item in result["data"]:
+    product = Product(**item)
+    print(f"{product.name}: {product.currency}{product.price}")
+```
+
+**Prompt-Based Extraction:**
+
+```python
+# Natural language extraction
+result = app.extract(
+    urls=["https://company.com/about"],
+    params={
+        "prompt": """Extract the following information:
+        - Company name
+        - Founded year
+        - Headquarters location
+        - Number of employees (approximate)
+        - Main products or services
+        - Contact email
+        Return as JSON with these exact field names."""
+    }
+)
+
+company_info = result["data"][0]
+print(f"Company: {company_info.get('Company name')}")
+```
+
+**Multi-Page Extraction:**
+
+```python
+# Extract from multiple product pages
+product_urls = [
+    "https://shop.com/product/1",
+    "https://shop.com/product/2",
+    "https://shop.com/product/3",
+]
+
+result = app.extract(
+    urls=product_urls,
+    params={
+        "schema": {
+            "type": "object",
+            "properties": {
+                "name": {"type": "string"},
+                "price": {"type": "number"},
+                "rating": {"type": "number"},
+                "reviews_count": {"type": "integer"}
+            },
+            "required": ["name", "price"]
+        }
+    }
+)
+
+# Process each product
+for i, product in enumerate(result["data"]):
+    print(f"Product {i+1}: {product['name']} - ${product['price']}")
+```
+
+### Batch Operations
+
+```python
+import time
+
+# Batch scrape multiple URLs
+urls = [
+    "https://example.com/page1",
+    "https://example.com/page2",
+    "https://example.com/page3",
+]
+
+# Start batch scrape
+batch_job = app.batch_scrape_urls(urls, params={
+    "formats": ["markdown"],
+    "onlyMainContent": True
+})
+
+# Poll for completion
+batch_id = batch_job["id"]
+while True:
+    status = app.check_batch_scrape_status(batch_id)
+    if status["status"] == "completed":
+        break
+    time.sleep(2)
+
+# Get results
+results = status["data"]
+for result in results:
+    print(f"Scraped: {result['metadata']['sourceURL']}")
+```
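For URL lists larger than a single batch job should carry, a small chunking helper keeps each submission bounded (the batch size of 100 below is an assumption — check your plan's limits):

```python
def chunked(items: list, size: int) -> list[list]:
    """Split items into consecutive chunks of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Example: submit 250 URLs as batches of 100
# for batch in chunked(urls, 100):
#     app.batch_scrape_urls(batch, params={"formats": ["markdown"]})
```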
+
+### Error Handling Pattern
+
+```python
+import os
+import time
+
+from firecrawl import FirecrawlApp
+from firecrawl.exceptions import FirecrawlError
+
+def scrape_with_retry(url: str, max_retries: int = 3) -> dict | None:
+    """Scrape URL with exponential backoff retry."""
+    app = FirecrawlApp(api_key=os.getenv("FIRECRAWL_API_KEY"))
+
+    for attempt in range(max_retries):
+        try:
+            result = app.scrape_url(url, params={
+                "formats": ["markdown"],
+                "onlyMainContent": True,
+                "timeout": 30000
+            })
+            return result
+
+        except FirecrawlError as e:
+            if e.status_code == 429:  # Rate limited
+                wait_time = 2 ** attempt
+                print(f"Rate limited, waiting {wait_time}s...")
+                time.sleep(wait_time)
+            elif e.status_code == 402:  # Payment required
+                print("Quota exceeded, add credits")
+                return None
+            elif e.status_code >= 500:  # Server error
+                wait_time = 2 ** attempt
+                print(f"Server error, retrying in {wait_time}s...")
+                time.sleep(wait_time)
+            else:
+                print(f"Scrape failed: {e}")
+                return None
+
+        except Exception as e:
+            print(f"Unexpected error: {e}")
+            if attempt < max_retries - 1:
+                time.sleep(2 ** attempt)
+            else:
+                return None
+
+    return None
+```
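When running a retry helper over many URLs, separating successes from failures keeps the caller simple. A sketch — the scraper is injected as a callable, so any of the helpers in this file could be passed in:

```python
from typing import Callable, Optional


def scrape_all(urls: list[str],
               scraper: Callable[[str], Optional[dict]]) -> tuple[dict, list[str]]:
    """Apply scraper to each URL; return {url: result} plus the URLs that failed."""
    results: dict = {}
    failed: list[str] = []
    for url in urls:
        result = scraper(url)
        if result is None:
            failed.append(url)
        else:
            results[url] = result
    return results, failed
```

For example, `results, failed = scrape_all(urls, scrape_with_retry)`; failed URLs can then be logged or re-queued.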
+
+### RAG Pipeline Integration
+
+```python
+import os
+
+from firecrawl import FirecrawlApp
+from langchain.text_splitter import RecursiveCharacterTextSplitter
+from langchain.embeddings import OpenAIEmbeddings
+from langchain.vectorstores import Chroma
+
+def build_rag_index(base_url: str, limit: int = 100):
+    """Build RAG index from crawled content."""
+    app = FirecrawlApp(api_key=os.getenv("FIRECRAWL_API_KEY"))
+
+    # Crawl documentation
+    result = app.crawl_url(base_url, params={
+        "limit": limit,
+        "scrapeOptions": {
+            "formats": ["markdown"],
+            "onlyMainContent": True
+        }
+    })
+
+    # Prepare documents
+    documents = []
+    for page in result["data"]:
+        if page.get("markdown"):
+            documents.append({
+                "content": page["markdown"],
+                "metadata": {
+                    "source": page["metadata"]["sourceURL"],
+                    "title": page["metadata"].get("title", "")
+                }
+            })
+
+    # Split into chunks
+    splitter = RecursiveCharacterTextSplitter(
+        chunk_size=1000,
+        chunk_overlap=200
+    )
+
+    chunks = []
+    for doc in documents:
+        splits = splitter.split_text(doc["content"])
+        for split in splits:
+            chunks.append({
+                "content": split,
+                "metadata": doc["metadata"]
+            })
+
+    # Create embeddings and store
+    embeddings = OpenAIEmbeddings()
+    vectorstore = Chroma.from_texts(
+        texts=[c["content"] for c in chunks],
+        metadatas=[c["metadata"] for c in chunks],
+        embedding=embeddings,
+        persist_directory="./chroma_db"
+    )
+
+    print(f"Indexed {len(chunks)} chunks from {len(documents)} pages")
+    return vectorstore
+```
+
+### CLI Usage
+
+```bash
+# Install CLI
+pip install firecrawl-py
+
+# Scrape single page
+firecrawl scrape https://example.com -o output.md
+
+# Scrape with options
+firecrawl scrape https://example.com \
+    --format markdown \
+    --only-main-content \
+    --timeout 60000 \
+    -o output.md
+
+# Crawl website
+firecrawl crawl https://docs.example.com \
+    --limit 100 \
+    --include-paths "/docs/*" \
+    -o docs_output/
+
+# Map URLs
+firecrawl map https://example.com \
+    --limit 1000 \
+    -o urls.txt
+
+# Extract structured data
+firecrawl extract https://shop.com/products/* \
+    --prompt "Extract product name, price, description" \
+    -o products.json
+```
+
+---
+
 ## Documentation References
 
 When encountering edge cases, new features, or needing the latest API specifications, use WebFetch to retrieve current documentation:

File diff suppressed because it is too large
+ 1588 - 46
agents/python-expert.md


+ 0 - 149
agents/rest-expert.md

@@ -1,149 +0,0 @@
----
-name: rest-expert
-description: Master in designing and implementing RESTful APIs with focus on best practices, HTTP methods, status codes, and resource modeling.
-model: sonnet
----
-
-# REST API Expert Agent
-
-You are a REST API expert specializing in designing and implementing RESTful APIs following industry best practices, proper HTTP semantics, and resource-oriented architecture.
-
-## Focus Areas
-- REST architectural principles and constraints
-- Resource and endpoint design methodology
-- Correct HTTP verb implementation (GET, POST, PUT, DELETE, PATCH)
-- Appropriate HTTP status code application
-- API versioning approaches (URI, header, content negotiation)
-- Resource modeling and URI design patterns
-- Statelessness requirements and implications
-- Content negotiation with various media types (JSON, XML, etc.)
-- Authentication and authorization mechanisms (OAuth 2.0, JWT, API keys)
-- Rate limiting and throttling implementation
-
-## Approach
-- Design resource-oriented APIs with clear noun-based endpoints
-- Apply HATEOAS principles when appropriate
-- Ensure stateless interactions (no server-side sessions)
-- Use standardized endpoint naming conventions
-- Implement query parameters for filtering, sorting, and pagination
-- Document APIs with OpenAPI/Swagger specifications
-- Enforce HTTPS-only for security
-- Provide standardized error responses with meaningful messages
-- Make GET requests cacheable with appropriate headers
-- Monitor API usage and performance metrics
-- Follow semantic versioning for API changes
-- Design for backward compatibility
-
-## Quality Checklist
-All deliverables must meet:
-- Standardized, consistent naming conventions
-- Idempotent HTTP verbs where expected (PUT, DELETE)
-- Appropriate status codes for all responses (2xx, 4xx, 5xx)
-- Robust error handling with detailed error objects
-- Pagination implementation for collections
-- Accurate, up-to-date API documentation
-- Industry-standard security practices
-- Cache control directives in response headers
-- Rate limit information in headers (X-RateLimit-*)
-- Strict REST constraint compliance
-- Input validation on all endpoints
-- Proper use of HTTP methods semantics
-
-## Expected Deliverables
-- Comprehensive API documentation (OpenAPI 3.0+)
-- Clear resource models with schemas
-- Request/response examples for all endpoints
-- Error handling strategies with sample error messages
-- API versioning strategy details
-- Authentication/authorization setup explanations
-- Request/response logging specifications
-- HTTPS/TLS implementation guidelines
-- Sample client code and SDKs
-- API monitoring and analytics setup
-- Developer onboarding guides
-- Changelog and migration guides
-
-## HTTP Methods Semantics
-- **GET**: Retrieve resource(s), safe and idempotent, cacheable
-- **POST**: Create new resource, not idempotent
-- **PUT**: Replace entire resource, idempotent
-- **PATCH**: Partial update, may be idempotent
-- **DELETE**: Remove resource, idempotent
-- **HEAD**: GET without body, retrieve headers only
-- **OPTIONS**: Describe communication options (CORS)
-
-## HTTP Status Codes
-### Success (2xx)
-- **200 OK**: Successful GET, PUT, PATCH, DELETE
-- **201 Created**: Successful POST, include Location header
-- **204 No Content**: Successful request with no response body
-
-### Client Errors (4xx)
-- **400 Bad Request**: Invalid syntax or validation failure
-- **401 Unauthorized**: Authentication required or failed
-- **403 Forbidden**: Authenticated but not authorized
-- **404 Not Found**: Resource doesn't exist
-- **405 Method Not Allowed**: HTTP method not supported
-- **409 Conflict**: Request conflicts with current state
-- **422 Unprocessable Entity**: Semantic validation errors
-- **429 Too Many Requests**: Rate limit exceeded
-
-### Server Errors (5xx)
-- **500 Internal Server Error**: Generic server error
-- **502 Bad Gateway**: Invalid upstream response
-- **503 Service Unavailable**: Temporary unavailability
-- **504 Gateway Timeout**: Upstream timeout
-
-## Resource Design Patterns
-- Use plural nouns for collections: `/users`, `/products`
-- Use nested resources for relationships: `/users/{id}/orders`
-- Avoid deep nesting (max 2-3 levels)
-- Use query params for filtering: `/users?role=admin&status=active`
-- Use query params for pagination: `/users?page=2&limit=20`
-- Use query params for sorting: `/users?sort=created_at&order=desc`
-- Use consistent casing (kebab-case or snake_case for URIs)
-
-## Request/Response Best Practices
-- Accept and return JSON by default
-- Support content negotiation via Accept header
-- Include metadata in responses (pagination, timestamps)
-- Use envelope format sparingly, prefer root-level data
-- Include hypermedia links when using HATEOAS
-- Provide request IDs for tracing
-- Use ETags for caching and conditional requests
-- Include API version in response headers
-
-## Security Best Practices
-- Always use HTTPS/TLS
-- Implement authentication (OAuth 2.0, JWT)
-- Use API keys for service-to-service
-- Validate and sanitize all inputs
-- Implement rate limiting per client
-- Use CORS headers appropriately
-- Don't expose sensitive data in URLs
-- Implement proper authorization checks
-- Log security events
-- Use security headers (HSTS, CSP, etc.)
-
-## Error Response Format
-```json
-{
-  "error": {
-    "code": "VALIDATION_ERROR",
-    "message": "Invalid input data",
-    "details": [
-      {
-        "field": "email",
-        "message": "Invalid email format"
-      }
-    ],
-    "request_id": "abc-123-def"
-  }
-}
-```
-
-## Versioning Strategies
-- URI versioning: `/v1/users`, `/v2/users`
-- Header versioning: `Accept: application/vnd.api.v1+json`
-- Query parameter: `/users?version=1`
-- Content negotiation via media types

+ 9 - 27
agents/sql-expert.md

@@ -63,31 +63,13 @@ All deliverables must meet:
 - Migration scripts with rollback support
 - Data validation rules
 
-## Common Patterns
-- Use CTEs for complex multi-step queries
-- Window functions for analytics (ROW_NUMBER, RANK, LAG, LEAD)
-- Proper JOIN types (INNER, LEFT, RIGHT, FULL, CROSS)
-- EXISTS vs IN for subqueries
-- Batch operations for large datasets
-- Pagination with OFFSET/LIMIT or keyset pagination
-- Handling temporal data effectively
-- Avoiding SELECT * in production code
+## Optimization Focus
+- Execution plan analysis (EXPLAIN ANALYZE)
+- Index strategy design and review
+- Query plan caching behavior
+- Partitioning decisions for large tables
+- Materialized view candidates
+- Connection pooling tuning
 
-## Optimization Techniques
-- Covering indexes to avoid table lookups
-- Partitioning for large tables
-- Query result caching strategies
-- Denormalization when read-heavy justified
-- Materialized views for expensive queries
-- Index-only scans
-- Parallel query execution
-- Connection pooling considerations
-
-## Anti-Patterns to Avoid
-- N+1 query problems
-- Implicit type conversions preventing index usage
-- Functions on indexed columns in WHERE clauses
-- Unnecessary DISTINCT or GROUP BY
-- Correlated subqueries when joins possible
-- Over-normalization causing excessive joins
-- Ignoring NULL handling in comparisons
+## Related Skill
+For pattern reference (CTEs, window functions, JOINs), use the **sql-patterns** skill.

+ 0 - 84
agents/tailwind-expert.md

@@ -1,84 +0,0 @@
----
-name: tailwind-expert
-description: Expert in Tailwind CSS for efficient and responsive styling, utilizing utility-first approaches and responsive design principles.
-model: sonnet
----
-
-# Tailwind CSS Expert Agent
-
-You are a Tailwind CSS expert specializing in utility-first CSS methodology, responsive design, and modern web styling practices.
-
-## Focus Areas
-- Utility-first CSS methodology fundamentals
-- Theme customization for project-specific requirements
-- Responsive design implementation
-- Typography optimization through built-in utilities
-- Custom theme development and configuration
-- PostCSS integration workflows
-- Design token management
-- Rapid prototyping capabilities
-- Large-scale application performance optimization
-- Industry best practices for maintainable styling
-
-## Approach
-- Explore Tailwind's extensive utility class library
-- Customize `tailwind.config.js` for project needs
-- Implement responsive grids and flexbox layouts
-- Simplify styling through composition of utility classes
-- Manage component spacing systematically (margin, padding)
-- Optimize CSS through PurgeCSS/content configuration
-- Enhance component appearance with shadows and effects
-- Optimize typography with font utilities
-- Leverage consistent color palettes from theme
-- Adopt atomic design principles with Tailwind classes
-- Use `@apply` sparingly for component extraction
-- Implement dark mode with class or media strategies
-- Utilize arbitrary values when needed (`w-[137px]`)
-
-## Quality Checklist
-All deliverables must meet:
-- Tailored `tailwind.config.js` to project specifications
-- Cross-device responsive testing (mobile, tablet, desktop)
-- Consistent spacing and typography scale
-- Minimized style conflicts and specificity issues
-- Effective design token management
-- Performance optimization through content purging
-- Clear, organized class structure (avoid class soup)
-- Smooth Tailwind version update integration
-- Cross-browser compatibility verification
-- Tailwind CSS documentation adherence
-- Accessibility considerations (contrast, focus states)
-- Component reusability patterns
-
-## Output Deliverables
-- Utility-class-based styled components
-- Responsive layouts using grid and flexbox
-- Unified design themes with custom colors/fonts
-- Optimized CSS bundles (minimal file size)
-- Style guides with extended theme configuration
-- Project-specific Tailwind documentation
-- Scalable, maintainable component libraries
-- Tested responsive breakpoints
-- Production-ready implementations
-- Preconfigured `tailwind.config.js` and `postcss.config.js`
-- Custom plugin development when needed
-- Component extraction strategies
-
-## Best Practices
-- Use semantic HTML with utility classes
-- Group related utilities logically
-- Leverage Tailwind's constraint-based design system
-- Prefer mobile-first responsive approach
-- Extract components when patterns repeat
-- Use CSS variables for dynamic values
-- Document custom configuration decisions
-- Version control Tailwind config files
-
-## Common Patterns
-- Card components with consistent spacing
-- Navigation bars with responsive hamburger menus
-- Form inputs with consistent styling and states
-- Button variants using utility combinations
-- Grid layouts for content sections
-- Modal and overlay patterns
-- Loading states and animations

+ 151 - 0
analysis/CURRENT_PROJECTS.md

@@ -0,0 +1,151 @@
+# Current Projects Overview
+
+> Context file for claude-architect and Pulse. Last updated: 2025-12-13
+
+## Active Development (Touch Weekly)
+
+### Claude Code Ecosystem
+
+| Project | Purpose | Stack |
+|---------|---------|-------|
+| **claude-mods** | Extension toolkit - 23 agents, 11 skills, 11 commands | Markdown, Python, Bash |
+| **dev-shell-tools** | Modern CLI tools (ripgrep, fd, bat, eza) | Rust/Go binaries |
+| **TerminalBench** | AI benchmark runner with 100+ domain skills | Python, Harbor, Docker |
+| **expert-graph** | 590+ expert knowledge graph | Markdown, YAML |
+
+### MCP Servers (Model Context Protocol)
+
+| Project | Integration | Status |
+|---------|-------------|--------|
+| **AsanaMCP** | Asana project management | Active |
+| **HarvestMCP** | Harvest time tracking | Active |
+| **GmailMCP** | Gmail with AI summaries | Active |
+| **Azimuth** | Raindrop.io bookmarks | Active |
+| **DialpadMCP** | Dialpad phone system | In dev |
+| **PandaMCP** | (Unknown) | In dev |
+
+### Web Scraping & Content
+
+| Project | Purpose | Stack |
+|---------|---------|-------|
+| **Firecrawl** | Python scraping suite with change detection | Python, Firecrawl API |
+| **Roamcrawler** | SEO tools for destination marketing | Python, BeautifulSoup |
+| **Praxis** | AI concierge prompt framework for Chatbase | Markdown, Python |
+
+### Parental Safety Suite
+
+| Project | Purpose | Stack |
+|---------|---------|-------|
+| **Vordr** | Windows monitoring + LLM domain triage | Python, Claude API |
+| **Aegis** | AI-assisted blocklist curation tooling | Python, SQLite |
+| **aegis-blocklist** | Child safety DNS blocklist (900+ domains) | Text, Batch |
+
+---
+
+## Tech Stack Summary
+
+**Primary Languages:**
+- Python (80% of active projects)
+- JavaScript/TypeScript (web frontends, Workers)
+- Markdown (agents, skills, prompts)
+
+**Key APIs & Services:**
+- Claude API (Anthropic)
+- Firecrawl API
+- NextDNS API
+- Asana, Harvest, Gmail, Raindrop.io
+- Cloudflare Workers
+
+**Common Patterns:**
+- MCP servers for tool integration
+- SQLite for local state/caching
+- Async Python (httpx, asyncio)
+- Markdown-based agent definitions
+
+---
+
+## Domain Expertise Needed
+
+Based on active projects, the most valuable expertise areas:
+
+| Domain | Projects Using It |
+|--------|-------------------|
+| **Python async/httpx** | All MCP servers, Firecrawl, Vordr |
+| **Claude API** | Vordr, Aegis, Praxis, claude-mods |
+| **Web scraping** | Firecrawl, Roamcrawler, Vordr |
+| **DNS/Networking** | Vordr, Aegis, aegis-blocklist |
+| **SQLite** | GmailMCP, Aegis, Firecrawl |
+| **Cloudflare Workers** | CFWorker, Elevenlabs-worker |
+| **MCP Protocol** | 6 active MCP servers |
+
+---
+
+## TerminalBench Skills Relevance
+
+TerminalBench has 100+ PhD-level domain skills. Most are **not relevant** to our common work:
+
+### Irrelevant (Benchmark-specific)
+- Bioinformatics (gibson-assembly, protein-folding, dna-*)
+- Scientific computing (sympy, scipy, astropy)
+- Game theory (chess, corewars)
+- Cryptanalysis (feal, password-cracking)
+- Low-level (mips, vm-emulation, gcc-optimization)
+
+### Potentially Useful
+| Skill | Relevance |
+|-------|-----------|
+| `pytorch-*` | If doing ML work |
+| `django-*` | If building Django apps |
+| `sqlite-expert` | Our MCP servers use SQLite |
+| `nginx-expert` | Server configuration |
+| `http-expert` | API debugging |
+| `constraint-preservation` | Could adapt for our rules |
+
+### Verdict
+**Don't import TerminalBench skills.** They're tuned for benchmark tasks (DNA assembly, cryptanalysis, chess engines), not our actual work (MCP servers, web scraping, parental safety).
+
+The skill routing system is overkill at our scale (23 agents vs 100+ skills).
+
+---
+
+## Recommendations for claude-architect
+
+### When user mentions these projects, load context:
+
+```
+"Vordr" or "parental" or "monitoring" → Parental safety domain
+"MCP" or "Asana" or "Harvest" or "Gmail" → MCP server patterns
+"scraping" or "Firecrawl" or "crawl" → Web scraping tools
+"blocklist" or "Aegis" or "NextDNS" → DNS filtering
+"benchmark" or "TerminalBench" → Agent evaluation (separate domain)
+```
+
+### Priority expertise for common tasks:
+
+1. **Python async** - Every active project uses it
+2. **MCP protocol** - 6 servers and growing
+3. **Claude API** - Core to Vordr, Aegis, Praxis
+4. **SQLite** - State management across projects
+5. **Web scraping** - Firecrawl patterns reused everywhere
+
+### Don't prioritize:
+- Heavy ML/PyTorch (no active ML projects)
+- Scientific computing (no active science projects)
+- Game development (ArchMagi is archived)
+- PHP/Laravel (CraftCMS is archived)
+
+---
+
+## Project Health
+
+| Status | Count | Examples |
+|--------|-------|----------|
+| **Active** | 15 | claude-mods, Vordr, MCP servers |
+| **Maintenance** | 20 | CFWorker, DProbe, Asus |
+| **Archived** | 25+ | ArchMagi, CraftCMS, Payload |
+
+**Focus areas for 2025:**
+1. Claude Code tooling (claude-mods, dev-shell-tools)
+2. MCP server ecosystem
+3. Parental safety suite (Vordr + Aegis)
+4. AI concierge (Praxis)

+ 2 - 2
commands/archive/session-manager/README.md

@@ -152,7 +152,7 @@ Suggested: Continue with "Fix callback URL handling"
 
 ## State File Format
 
-`.claude/claude-state.json`:
+`.claude/session-cache.json`:
 
 ```json
 {
@@ -239,7 +239,7 @@ These commands integrate with the `/plan` command:
 │   Strategic (Big Picture)          Tactical (Right Now)    │
 │   ────────────────────────         ───────────────────────  │
 │                                                             │
-│   docs/PLAN.md                     .claude/claude-state.json│
+│   docs/PLAN.md                     .claude/session-cache.json│
 │   ├─ Project goal                  ├─ TodoWrite tasks       │
 │   ├─ Implementation steps          ├─ Current plan step ref │
 │   ├─ Progress markers              ├─ Git context           │

+ 5 - 5
commands/archive/session-manager/load.md

@@ -11,7 +11,7 @@ Restore your session context from a previous save. Shows plan progress, restores
 ```
 /load
-    ├─→ Read .claude/claude-state.json
+    ├─→ Read .claude/session-cache.json
     │     ├─ TodoWrite tasks
     │     ├─ Plan context
     │     └─ Git context at save time
@@ -81,13 +81,13 @@ Restore your session context from a previous save. Shows plan progress, restores
 ### Step 1: Check for State Files
 
 ```bash
-ls -la .claude/claude-state.json 2>/dev/null
+ls -la .claude/session-cache.json 2>/dev/null
 ```
 
 If missing:
 ```
 ┌─ Session ──────────────────────────────────────────────────────────────────────────────────────┐
-│ ⚠  State: No saved state found in .claude/claude-state.json                                    │
+│ ⚠  State: No saved state found in .claude/session-cache.json                                   │
 │                                                                                                │
 │    To create one, use: /save                                                                   │
 │    Or check current status: /status                                                            │
@@ -96,7 +96,7 @@ If missing:
 
 ### Step 2: Read State
 
-Parse `.claude/claude-state.json`:
+Parse `.claude/session-cache.json`:
 - Extract todos (completed, in_progress, pending)
 - Extract plan context (goal, current step, progress)
 - Extract git context (branch, last commit)
@@ -234,7 +234,7 @@ Session Lifecycle
   │              └─────────────────────────────┴─────────────────────────────┘                   │
   │                                            │                                                 │
   │                                            ▼                                                 │
-  │                               .claude/claude-state.json                                      │
+  │                               .claude/session-cache.json                                     │
   │                               docs/PLAN.md                                                   │
   │                                            │                                                 │
   └────────────────────────────────────────────┘                                                 │

+ 4 - 4
commands/archive/session-manager/save.md

@@ -23,7 +23,7 @@ Save your current session state before ending work. Captures TodoWrite tasks, cu
     │     └─ Uncommitted changes
     └─→ Write state files
-          ├─ .claude/claude-state.json (machine)
+          ├─ .claude/session-cache.json (machine)
           └─ .claude/claude-progress.md (human)
 ```
 
@@ -42,7 +42,7 @@ This command bridges the gap by saving what Claude Code doesn't.
 
 ## Output Files
 
-### .claude/claude-state.json (v2.0)
+### .claude/session-cache.json (v2.0)
 
 ```json
 {
@@ -157,7 +157,7 @@ mkdir -p .claude
 ### Step 5: Write State Files
 
 Write both:
-- `.claude/claude-state.json` - machine-readable
+- `.claude/session-cache.json` - machine-readable
 - `.claude/claude-progress.md` - human-readable
 
 ### Step 6: Confirm
@@ -173,7 +173,7 @@ Write both:
 └────────────────────────────────────────────────────────────────────────────────────────────────┘
 
 Files:
-  • .claude/claude-state.json
+  • .claude/session-cache.json
   • .claude/claude-progress.md
 
 Restore with: /load

+ 1 - 1
commands/archive/session-manager/status.md

@@ -88,7 +88,7 @@ If no plan found:
 If no saved state but plan exists:
 ```
 ┌─ Session ──────────────────────────────────────────────────────────────────────────────────────┐
-│ ⚠  State: No saved state found in .claude/claude-state.json                                    │
+│ ⚠  State: No saved state found in .claude/session-cache.json                                   │
 │ ✅  Plan: Project plan found at docs/PLAN.md                                                    │
 └────────────────────────────────────────────────────────────────────────────────────────────────┘
 ```

+ 246 - 188
commands/explain.md

@@ -1,271 +1,329 @@
 ---
-description: "Deep explanation of complex code, files, or concepts. Breaks down architecture, data flow, and design decisions."
+description: "Deep explanation of complex code, files, or concepts. Routes to expert agents, uses structural search, generates mermaid diagrams."
 ---
 
 # Explain - Deep Code Explanation
 
-Get a comprehensive explanation of complex code, files, or architectural concepts.
+Get a comprehensive explanation of code, files, directories, or architectural concepts. Automatically routes to the most relevant expert agent and uses modern CLI tools for analysis.
 
 ## Arguments
 
 $ARGUMENTS
 
-- File path: Explain specific file
-- Function/class name: Explain specific component
-- `--depth <shallow|normal|deep>`: Level of detail
-- `--focus <arch|flow|deps|api>`: Specific focus area
+- `<target>` - File path, function name, class name, directory, or concept
+- `--depth <shallow|normal|deep|trace>` - Level of detail (default: normal)
+- `--focus <arch|flow|deps|api|perf>` - Specific focus area
 
-## What This Command Does
+## Architecture
 
-1. **Identify Target**
-   - Parse file/function/concept
-   - Gather related files if needed
-   - Understand scope of explanation
+```
+/explain <target> [--depth] [--focus]
+    │
+    ├─→ Step 1: Detect & Classify Target
+    │     ├─ File exists? → Read it
+    │     ├─ Function/class? → ast-grep to find definition
+    │     ├─ Directory? → tokei for overview
+    │     └─ Concept? → rg search codebase
+    │
+    ├─→ Step 2: Gather Context (parallel)
+    │     ├─ structural-search skill → find usages
+    │     ├─ code-stats skill → assess scope
+    │     ├─ Find related: tests, types, docs
+    │     └─ Load: AGENTS.md, CLAUDE.md conventions
+    │
+    ├─→ Step 3: Route to Expert Agent
+    │     ├─ .ts/.tsx → typescript-expert or react-expert
+    │     ├─ .py → python-expert
+    │     ├─ .vue → vue-expert
+    │     ├─ .sql/migrations → postgres-expert
+    │     ├─ agents/skills/commands → claude-architect
+    │     └─ Default → general-purpose
+    │
+    ├─→ Step 4: Generate Explanation
+    │     ├─ Structured markdown with sections
+    │     ├─ Mermaid diagrams (flowchart/sequence/class)
+    │     ├─ Related code paths as file:line refs
+    │     └─ Design decisions and rationale
+    │
+    └─→ Step 5: Integrate
+          ├─ Offer to save to ARCHITECTURE.md (if significant)
+          └─ Link to /saveplan if working on related task
+```
 
-2. **Analyze Code**
-   - Parse structure and dependencies
-   - Trace data flow
-   - Identify patterns and design decisions
+## Execution Steps
 
-3. **Generate Explanation**
-   - Architecture overview
-   - Step-by-step breakdown
-   - Visual diagrams (ASCII)
-   - Related concepts
+### Step 1: Detect Target Type
 
-## Execution Steps
+```bash
+# Check if target is a file
+test -f "$TARGET" && echo "FILE" && exit
+
+# Check if target is a directory
+test -d "$TARGET" && echo "DIRECTORY" && exit
+
+# Otherwise, search for it as a symbol
+```
 
-### Step 1: Parse Target
+**For files:** Read directly with bat (syntax highlighted) or Read tool.
 
+**For directories:** Get overview with tokei (if available):
 ```bash
-# If file path
-cat <file>
+command -v tokei >/dev/null 2>&1 && tokei "$TARGET" --compact || echo "ℹ tokei unavailable"
+```
 
-# If function name, search for it
-grep -rn "function <name>\|def <name>\|class <name>" .
+**For symbols (function/class):** Find definition with ast-grep:
+```bash
+# Try ast-grep first (structural); ast-grep takes one pattern per invocation
+if command -v ast-grep >/dev/null 2>&1; then
+    ast-grep -p "function $TARGET"
+    ast-grep -p "class $TARGET"
+    ast-grep -p "def $TARGET"
+fi
 
-# If directory
-ls -la <dir>
-tree <dir> -L 2
+# Fallback to ripgrep
+rg "(?:function|class|def|const|let|var)\s+$TARGET" --type-add 'code:*.{ts,tsx,js,jsx,py,vue}' -t code
 ```
 
 ### Step 2: Gather Context
 
-For the target, collect:
-- Imports/dependencies
-- Exports/public API
-- Related files (tests, types)
-- Usage examples in codebase
-
-### Step 3: Analyze Structure
-
-**For Functions:**
-- Input parameters and types
-- Return value and type
-- Side effects
-- Error handling
-- Algorithm complexity
-
-**For Classes:**
-- Properties and methods
-- Inheritance/composition
-- Lifecycle
-- Public vs private API
-
-**For Files/Modules:**
-- Purpose and responsibility
-- Exports and imports
-- Dependencies
-- Integration points
-
-**For Directories:**
-- Module organization
-- File relationships
-- Naming conventions
-- Architecture pattern
+Run these in parallel where possible:
+
+**Find usages (structural-search skill):**
+```bash
+# With ast-grep ($$$ matches any number of arguments)
+ast-grep -p "$TARGET($$$)" --json 2>/dev/null | head -20
+
+# Fallback
+rg "$TARGET" --type-add 'code:*.{ts,tsx,js,jsx,py,vue}' -t code -l
+```
+
+**Find related files:**
+```bash
+# Tests
+fd -e test.ts -e spec.ts -e test.py -e spec.py | xargs rg -l "$TARGET" 2>/dev/null
+
+# Types/interfaces
+fd -e d.ts -e types.ts | xargs rg -l "$TARGET" 2>/dev/null
+```
+
+**Load project conventions:**
+- Read AGENTS.md if exists
+- Read CLAUDE.md if exists
+- Check for framework-specific patterns
+
+### Step 3: Route to Expert Agent
+
+Determine the best expert based on file extension and content:
+
+| Pattern | Primary Agent | Condition |
+|---------|---------------|-----------|
+| `.ts` | typescript-expert | No JSX/React imports |
+| `.tsx` | react-expert | JSX present |
+| `.js`, `.jsx` | javascript-expert | - |
+| `.py` | python-expert | - |
+| `.vue` | vue-expert | - |
+| `.sql`, `migrations/*` | postgres-expert | - |
+| `agents/*.md`, `skills/*`, `commands/*` | claude-architect | Claude extensions |
+| `*.test.*`, `*.spec.*` | (framework expert) | Route by file type |
+| Other | general-purpose | Fallback |
+
+**Invoke via Task tool:**
+```
+Task tool with subagent_type: "[detected]-expert"
+Prompt includes:
+  - File content
+  - Related files found
+  - Project conventions
+  - Requested depth and focus
+```
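
The routing table above can be sketched as a small shell helper. This is a simplified illustration: content sniffing for JSX and the test-file rule are omitted, and path patterns are checked before extensions so `migrations/` and Claude-extension files win.

```bash
# Simplified sketch of the expert-routing table (extension/path based only).
route_expert() {
  case "$1" in
    agents/*.md|skills/*|commands/*) echo "claude-architect" ;;
    *.sql|migrations/*)              echo "postgres-expert" ;;
    *.tsx)                           echo "react-expert" ;;
    *.ts)                            echo "typescript-expert" ;;
    *.js|*.jsx)                      echo "javascript-expert" ;;
    *.py)                            echo "python-expert" ;;
    *.vue)                           echo "vue-expert" ;;
    *)                               echo "general-purpose" ;;
  esac
}
```

In practice the returned name would feed the Task tool's `subagent_type`, with JSX detection refining the `.ts`/`.tsx` split.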
 
 ### Step 4: Generate Explanation
 
+The expert agent produces a structured explanation:
+
 ```markdown
-# Explanation: <target>
+# Explanation: [target]
 
 ## Overview
-<1-2 sentence summary of what this does and why>
+[1-2 sentence summary of purpose and role in the system]
 
 ## Architecture
-<ASCII diagram if helpful>
 
+[Mermaid diagram - choose appropriate type]
+
+### Flowchart (for control flow)
+```mermaid
+flowchart TD
+    A[Input] --> B{Validate}
+    B -->|Valid| C[Process]
+    B -->|Invalid| D[Error]
+    C --> E[Output]
 ```
-┌─────────────┐    ┌─────────────┐
-│   Input     │───▶│  Processor  │
-└─────────────┘    └──────┬──────┘
-                          │
-                          ▼
-                   ┌─────────────┐
-                   │   Output    │
-                   └─────────────┘
+
+### Sequence (for interactions)
+```mermaid
+sequenceDiagram
+    participant Client
+    participant Server
+    participant Database
+    Client->>Server: Request
+    Server->>Database: Query
+    Database-->>Server: Result
+    Server-->>Client: Response
+```
+
+### Class (for structures)
+```mermaid
+classDiagram
+    class Component {
+        +props: Props
+        +state: State
+        +render(): JSX
+    }
 ```
 
 ## How It Works
 
-### Step 1: <phase name>
-<explanation>
+### Step 1: [Phase Name]
+[Explanation with code references]
+
+See: `src/module.ts:42`
 
-### Step 2: <phase name>
-<explanation>
+### Step 2: [Phase Name]
+[Explanation]
 
 ## Key Concepts
 
-### <Concept 1>
-<explanation>
+### [Concept 1]
+[Explanation]
 
-### <Concept 2>
-<explanation>
+### [Concept 2]
+[Explanation]
 
 ## Dependencies
-- `<dep1>` - <purpose>
-- `<dep2>` - <purpose>
 
-## Usage Examples
-
-```<language>
-// Example usage
-```
+| Import | Purpose |
+|--------|---------|
+| `package` | [why it's used] |
 
 ## Design Decisions
 
-### Why <decision>?
-<rationale>
+### Why [decision]?
+[Rationale and tradeoffs considered]
 
 ## Related Code
-- `<file1>` - <relationship>
-- `<file2>` - <relationship>
-
-## Common Pitfalls
-- <pitfall 1>
-- <pitfall 2>
-```
 
-## Usage Examples
+| File | Relationship |
+|------|--------------|
+| `path/to/file.ts:123` | [how it relates] |
 
-```bash
-# Explain a file
-/explain src/auth/oauth.ts
+## See Also
 
-# Explain a function
-/explain validateToken
+- `/explain path/to/related` - [description]
+- [External docs link] - [description]
+```
 
-# Explain a class
-/explain UserService
+## Depth Modes
 
-# Explain a directory
-/explain src/services/
+| Mode | Output |
+|------|--------|
+| `--shallow` | Overview paragraph, key exports, no diagram |
+| `--normal` | Full explanation with 1 diagram, main concepts (default) |
+| `--deep` | Exhaustive: all internals, edge cases, history, multiple diagrams |
+| `--trace` | Data flow tracing through entire system, sequence diagrams |
 
-# Explain with deep detail
-/explain src/core/engine.ts --depth deep
+### Shallow Example
+```bash
+/explain src/auth/token.ts --shallow
+```
+Output: Single paragraph + exports list.
 
-# Focus on data flow
-/explain src/api/routes.ts --focus flow
+### Deep Example
+```bash
+/explain src/core/engine.ts --deep
+```
+Output: Full internals, algorithm analysis, performance notes, edge cases.
 
-# Architecture overview
-/explain src/services/ --focus arch
+### Trace Example
+```bash
+/explain handleLogin --trace
 ```
+Output: Traces data flow from entry to database to response.
+
+## Focus Modes
 
-## Depth Levels
+| Mode | What It Analyzes |
+|------|------------------|
+| `--focus arch` | Module boundaries, layer separation, dependencies |
+| `--focus flow` | Data flow, control flow, state changes |
+| `--focus deps` | Imports, external dependencies, integrations |
+| `--focus api` | Public interface, inputs/outputs, contracts |
+| `--focus perf` | Complexity, bottlenecks, optimization opportunities |
 
-| Level | Output |
-|-------|--------|
-| `shallow` | Quick overview, main purpose, key exports |
-| `normal` | Full explanation with examples (default) |
-| `deep` | Exhaustive breakdown, edge cases, internals |
+## CLI Tool Integration
 
-## Focus Areas
+Commands use modern CLI tools with graceful fallbacks:
 
-| Focus | Explains |
-|-------|----------|
-| `arch` | Architecture, structure, patterns |
-| `flow` | Data flow, control flow, sequence |
-| `deps` | Dependencies, imports, integrations |
-| `api` | Public API, inputs, outputs, contracts |
+| Tool | Purpose | Fallback |
+|------|---------|----------|
+| `tokei` | Code statistics | Skip stats |
+| `ast-grep` | Structural search | `rg` with patterns |
+| `bat` | Syntax highlighting | Read tool |
+| `rg` | Content search | Grep tool |
+| `fd` | File finding | Glob tool |
+
+**Check availability:**
+```bash
+command -v tokei >/dev/null 2>&1 || echo "ℹ tokei not installed - skipping stats"
+```
 
-## Explanation Styles by Target
+## Usage Examples
 
-### Functions
-- **Input/Output**: What goes in, what comes out
-- **Algorithm**: Step-by-step logic
-- **Edge Cases**: Boundary conditions
-- **Performance**: Time/space complexity
+```bash
+# Explain a file
+/explain src/auth/oauth.ts
 
-### Classes
-- **Purpose**: Why this class exists
-- **State**: What data it manages
-- **Behavior**: What it can do
-- **Relationships**: How it connects to others
+# Explain a function (finds it automatically)
+/explain validateToken
 
-### Files
-- **Role**: Where it fits in the system
-- **Exports**: What it provides
-- **Imports**: What it needs
-- **Patterns**: Design patterns used
+# Explain a directory
+/explain src/services/
 
-### Directories
-- **Organization**: How files are structured
-- **Conventions**: Naming and patterns
-- **Boundaries**: Module responsibilities
-- **Dependencies**: Inter-module relationships
+# Deep dive with architecture focus
+/explain src/core/engine.ts --deep --focus arch
 
-## ASCII Diagrams
+# Trace data flow
+/explain handleUserLogin --trace
 
-For complex systems, include ASCII diagrams:
+# Quick overview
+/explain src/utils/helpers.ts --shallow
 
-### Sequence Diagram
-```
-User          Service         Database
-  │              │               │
-  │──request───▶│               │
-  │              │───query─────▶│
-  │              │◀──result─────│
-  │◀─response───│               │
+# Focus on dependencies
+/explain package.json --focus deps
 ```
 
-### Data Flow
-```
-[Input] → [Validate] → [Transform] → [Store] → [Output]
-              │
-              └──[Error]──▶ [Log]
-```
+## Integration
 
-### Component Diagram
-```
-┌────────────────────────────────────┐
-│            Application             │
-├──────────┬──────────┬─────────────┤
-│  Routes  │ Services │   Models    │
-├──────────┴──────────┴─────────────┤
-│            Database               │
-└────────────────────────────────────┘
-```
+| Command | Relationship |
+|---------|--------------|
+| `/review` | Review after understanding |
+| `/test` | Generate tests for explained code |
+| `/saveplan` | Save progress if working on related task |
+| `/plan` | Add architectural insights to project plan |
 
-## Flags
+## Persistence
 
-| Flag | Effect |
-|------|--------|
-| `--depth <level>` | Set detail level (shallow/normal/deep) |
-| `--focus <area>` | Focus on specific aspect |
-| `--no-examples` | Skip usage examples |
-| `--no-diagrams` | Skip ASCII diagrams |
-| `--json` | Output as structured JSON |
+After significant explanations, you may be offered:
 
-## Integration
+```
+Would you like to save this explanation?
+  1. Append to ARCHITECTURE.md
+  2. Append to AGENTS.md (if conventions-related)
+  3. Don't save (output only)
+```
 
-Works well with:
-- `/review` - Review after understanding
-- `/test` - Generate tests for explained code
-- `/checkpoint` - Save progress after learning
+This keeps valuable architectural knowledge in git-tracked documentation.
 
 ## Notes
 
 - Explanations are based on code analysis, not documentation
-- Complex systems may need multiple explanations
-- Use `--depth deep` for unfamiliar codebases
-- Diagrams help visualize relationships
+- Complex systems may need multiple `/explain` calls
+- Use `--deep` for unfamiliar codebases
+- Mermaid diagrams render in GitHub, GitLab, VSCode, and most markdown viewers
+- Expert agents provide framework-specific insights

+ 6 - 6
commands/loadplan.md

@@ -1,5 +1,5 @@
 ---
-description: "Restore plan session state. Loads TodoWrite tasks, plan progress, and notes from saved .claude/claude-state.json."
+description: "Restore plan session state. Loads TodoWrite tasks, plan progress, and notes from saved .claude/session-cache.json."
 ---
 
 # LoadPlan - Restore Plan Session State
@@ -11,7 +11,7 @@ Restore your session context from a previous save. Shows plan progress, restores
 ```
 /loadplan
-    ├─→ Read .claude/claude-state.json
+    ├─→ Read .claude/session-cache.json
     │     ├─ TodoWrite tasks
     │     ├─ Plan context
     │     └─ Git context at save time
@@ -81,13 +81,13 @@ Restore your session context from a previous save. Shows plan progress, restores
 ### Step 1: Check for State Files
 
 ```bash
-ls -la .claude/claude-state.json 2>/dev/null
+ls -la .claude/session-cache.json 2>/dev/null
 ```
 
 If missing:
 ```
 ┌─ Session ──────────────────────────────────────────────────────────────────────────────────────┐
-│ ⚠  State: No saved state found in .claude/claude-state.json                                    │
+│ ⚠  State: No saved state found in .claude/session-cache.json                                   │
 │                                                                                                │
 │    To create one, use: /saveplan                                                               │
 │    Or check current status: /dash                                                              │
@@ -96,7 +96,7 @@ If missing:
 
 ### Step 2: Read State
 
-Parse `.claude/claude-state.json`:
+Parse `.claude/session-cache.json`:
 - Extract todos (completed, in_progress, pending)
 - Extract plan context (goal, current step, progress)
 - Extract git context (branch, last commit)
@@ -236,7 +236,7 @@ Session Lifecycle
   │           └─────────────────────────────────────────────────────────────┘                   │
   │                                            │                                                │
   │                                            ▼                                                │
-  │                               .claude/claude-state.json                                     │
+  │                               .claude/session-cache.json                                    │
   │                               docs/PLAN.md                                                  │
   │                                            │                                                │
   └────────────────────────────────────────────┘                                                │

+ 2 - 2
commands/plan.md

@@ -277,7 +277,7 @@ Effort is relative to the project, not absolute time. Avoid time estimates.
 Session 1:
   /plan "Feature X"             # Strategic planning → docs/PLAN.md
   [work on implementation]
-  /save "Completed step 2"      # Tactical state → claude-state.json
+  /save "Completed step 2"      # Tactical state → session-cache.json
 
 Session 2:
   /load                         # Restore TodoWrite tasks
@@ -292,7 +292,7 @@ Session 2:
 | Command | Captures | Persists To |
 |---------|----------|-------------|
 | `/plan` | Strategic thinking, decisions | `docs/PLAN.md` |
-| `/save` | TodoWrite tasks, git context | `.claude/claude-state.json` |
+| `/save` | TodoWrite tasks, git context | `.claude/session-cache.json` |
 | `/load` | - | Restores from `.claude/` |
 
 ## Flags

+ 398 - 0
commands/pulse.md

@@ -0,0 +1,398 @@
+---
+description: "Generate Claude Code ecosystem news digest. Fetches blogs, repos, and community sources via Firecrawl. Output to news/{date}_pulse.md."
+---
+
+# Pulse - Claude Code News Feed
+
+Fetch and summarize the latest developments in the Claude Code ecosystem.
+
+## What This Command Does
+
+1. **Fetches** content from blogs, official repos, and community sources
+2. **Deduplicates** against previously seen URLs (stored in `news/state.json`)
+3. **Summarizes** each item with an engaging precis + relevance assessment
+4. **Writes** digest to `news/{YYYY-MM-DD}_pulse.md`
+5. **Updates** state to prevent duplicate entries in future runs
+
+## Arguments
+
+- `--force` - Regenerate digest even if today's already exists
+- `--days N` - Look back N days instead of 1 (default: 1)
+- `--dry-run` - Show sources that would be fetched without actually fetching
+
+## Sources
+
+### Official (Priority: Critical)
+
+```json
+[
+  {"name": "Anthropic Engineering", "url": "https://www.anthropic.com/engineering", "type": "blog"},
+  {"name": "Claude Blog", "url": "https://claude.com/blog", "type": "blog"},
+  {"name": "Claude Code Docs", "url": "https://code.claude.com", "type": "docs"},
+  {"name": "anthropics/claude-code", "url": "https://github.com/anthropics/claude-code", "type": "repo"},
+  {"name": "anthropics/skills", "url": "https://github.com/anthropics/skills", "type": "repo"},
+  {"name": "anthropics/claude-code-action", "url": "https://github.com/anthropics/claude-code-action", "type": "repo"},
+  {"name": "anthropics/claude-agent-sdk-demos", "url": "https://github.com/anthropics/claude-agent-sdk-demos", "type": "repo"}
+]
+```
+
+### Community Blogs (Priority: High)
+
+```json
+[
+  {"name": "Simon Willison", "url": "https://simonwillison.net", "type": "blog"},
+  {"name": "Every", "url": "https://every.to", "type": "blog"},
+  {"name": "SSHH Blog", "url": "https://blog.sshh.io", "type": "blog"},
+  {"name": "Lee Han Chung", "url": "https://leehanchung.github.io", "type": "blog"},
+  {"name": "Nick Nisi", "url": "https://nicknisi.com", "type": "blog"},
+  {"name": "HumanLayer", "url": "https://www.humanlayer.dev/blog", "type": "blog"},
+  {"name": "Chris Dzombak", "url": "https://www.dzombak.com/blog", "type": "blog"},
+  {"name": "GitButler", "url": "https://blog.gitbutler.com", "type": "blog"},
+  {"name": "Docker Blog", "url": "https://www.docker.com/blog", "type": "blog"},
+  {"name": "Nx Blog", "url": "https://nx.dev/blog", "type": "blog"},
+  {"name": "Yee Fei Ooi", "url": "https://medium.com/@ooi_yee_fei", "type": "blog"}
+]
+```
+
+### Community Indexes (Priority: Medium)
+
+```json
+[
+  {"name": "Awesome Claude Skills", "url": "https://github.com/travisvn/awesome-claude-skills", "type": "repo"},
+  {"name": "Awesome Claude Code", "url": "https://github.com/hesreallyhim/awesome-claude-code", "type": "repo"},
+  {"name": "Awesome Claude", "url": "https://github.com/alvinunreal/awesome-claude", "type": "repo"},
+  {"name": "SkillsMP", "url": "https://skillsmp.com", "type": "marketplace"},
+  {"name": "Awesome Claude AI", "url": "https://awesomeclaude.ai", "type": "directory"}
+]
+```
+
+### Tools (Priority: Medium)
+
+```json
+[
+  {"name": "Worktree", "url": "https://github.com/agenttools/worktree", "type": "repo"}
+]
+```
+
+### GitHub Search Queries (Priority: High)
+
+Use `gh search repos` and `gh search code` for discovery:
+
+```bash
+# Repos with recent Claude Code activity (last 7 days).
+# The pushed: qualifier goes in the query string; date -d requires GNU date.
+gh search repos "claude code pushed:>$(date -d '7 days ago' +%Y-%m-%d)" --sort=updated --limit=10
+
+# Hooks and skills hotspots
+gh search repos "claude code hooks pushed:>$(date -d '7 days ago' +%Y-%m-%d)" --sort=updated
+gh search repos "claude code skills pushed:>$(date -d '7 days ago' +%Y-%m-%d)" --sort=updated
+gh search repos "CLAUDE.md agent pushed:>$(date -d '7 days ago' +%Y-%m-%d)" --sort=updated
+
+# Topic-based discovery (often better signal)
+gh search repos --topic=claude-code --sort=updated --limit=10
+gh search repos --topic=model-context-protocol --sort=updated --limit=10
+
+# Code search for specific patterns
+gh search code "PreToolUse" --language=json --limit=5
+gh search code "PostToolUse" --language=json --limit=5
+```
+
+### Reddit Search (Priority: Medium)
+
+Use Firecrawl or web search for Reddit threads:
+
+```
+site:reddit.com/r/ClaudeAI "Claude Code" (hooks OR skills OR worktree OR tmux)
+```
+
+### Official Docs Search (Priority: High)
+
+Check for documentation updates:
+
+```
+site:code.claude.com hooks
+site:code.claude.com skills
+site:code.claude.com github-actions
+site:code.claude.com mcp
+```
+
+## Execution Steps
+
+### Step 1: Check State
+
+Read `news/state.json` to get:
+- `last_run` timestamp
+- `seen_urls` array for deduplication
+- `seen_commits` object for repo tracking
+
+If today's digest exists AND `--force` not specified:
+```
+Pulse digest already exists for today: news/{date}_pulse.md
+Use --force to regenerate.
+```
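
The check can be sketched in shell; `FORCE` is an illustrative stand-in for the parsed `--force` flag, not an existing variable:

```bash
# Skip regeneration if today's digest already exists and --force wasn't given.
TODAY=$(date +%Y-%m-%d)
DIGEST="news/${TODAY}_pulse.md"
if [ -f "$DIGEST" ] && [ "${FORCE:-0}" != "1" ]; then
  echo "Pulse digest already exists for today: $DIGEST"
  echo "Use --force to regenerate."
fi
```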
+
+### Step 2: Fetch Sources
+
+If the parallel fetch script is available, run it first:
+
+```bash
+python pulse/fetch.py --sources all --max-workers 15 --output pulse/fetch_cache.json
+```
+
+Otherwise fetch per-source as follows.
+
+**For Blogs** - Use Firecrawl to fetch and extract recent articles:
+
+```bash
+# Fetch blog content via firecrawl
+firecrawl https://simonwillison.net --format markdown
+```
+
+Look for articles with dates in the last N days (default 1).
+
+**For GitHub Repos** - Use `gh` CLI:
+
+```bash
+# Get latest release
+gh api repos/anthropics/claude-code/releases/latest --jq '.tag_name, .published_at, .body'
+
+# Get recent commits
+gh api repos/anthropics/claude-code/commits --jq '.[:5] | .[] | {sha: .sha[:7], message: .commit.message, date: .commit.author.date}'
+
+# Get recent discussions (if enabled; may not be exposed via the REST API)
+gh api repos/anthropics/claude-code/discussions --jq '.[:3]' 2>/dev/null || true
+```
+
+**For Marketplaces/Directories** - Use Firecrawl:
+
+```bash
+firecrawl https://skillsmp.com --format markdown
+firecrawl https://awesomeclaude.ai --format markdown
+```
+
+### Step 3: Filter & Deduplicate
+
+For each item found:
+1. Check if URL is in `seen_urls` - skip if yes
+2. Check if date is within lookback window - skip if older
+3. For repos, check if commit SHA matches `seen_commits[repo]` - skip if same
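
A minimal sketch of the URL check, using a flat text list in place of the JSON state (against the real state file, a `jq` lookup on `seen_urls` would replace the `grep`):

```bash
# Simplified dedup sketch: a URL is "new" if it isn't an exact line in the seen list.
SEEN_FILE=$(mktemp)
printf '%s\n' "https://example.com/old-post" > "$SEEN_FILE"

is_new_url() {
  # -x: whole-line match, -F: fixed string (URLs contain regex metacharacters)
  ! grep -qxF "$1" "$SEEN_FILE"
}
```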
+
+### Step 4: Generate Summaries
+
+For each new item, generate:
+
+**Precis** (2-3 sentences):
+> Engaging summary that captures the key points and why someone would want to read this.
+
+**Relevance** (1 sentence):
+> How this specifically relates to or could improve our claude-mods project.
+
+Use this prompt pattern for each item:
+```
+Article: [title]
+URL: [url]
+Content: [extracted content]
+
+Generate:
+1. A 2-3 sentence engaging summary (precis)
+2. A 1-sentence assessment of relevance to "claude-mods" (a collection of Claude Code extensions including agents, skills, and commands)
+
+Format:
+PRECIS: [summary]
+RELEVANCE: [assessment]
+```
+
+### Step 5: Write Digest
+
+**IMPORTANT**: Read `pulse/BRAND_VOICE.md` before writing. Follow the voice guidelines.
+
+Create `news/{YYYY-MM-DD}_pulse.md` with format:
+
+```markdown
+# Pulse · {date in words}
+
+{Opening paragraph: Set the scene. What's the throughline this week? What should readers care about? Write conversationally, as if explaining to a smart friend.}
+
+---
+
+## The Signal
+
+{1-3 most important/newsworthy items. These get:}
+- 2-paragraph summaries (150-200 words)
+- Extended "Pulse insights:" (2-3 sentences)
+- Source name linked to parent site
+
+### [{title}]({url})
+
+**[{source_name}]({source_parent_url})** · {date}
+
+{Paragraph 1: Hook + context. Start with something interesting—a question, surprising fact, or reframing.}
+
+{Paragraph 2: Substance + implications. What's actually in it and why it matters beyond the obvious.}
+
+**Pulse insights:** {Opinionated take on relevance to Claude Code practitioners. Be direct, take a stance.}
+
+---
+
+## Official Updates
+
+{Other items from Anthropic sources}
+
+### [{title}]({url})
+
+**[{source_name}]({source_parent_url})** · {date}
+
+{1 paragraph summary (60-100 words). Hook + substance + implication in flowing narrative.}
+
+**Pulse insights:** {1-2 sentences. Practical, specific.}
+
+---
+
+## GitHub Discoveries
+
+{New repos from topic/keyword searches}
+
+### [{repo_name}]({url})
+
+**{author}** · {one-line description}
+
+{1 paragraph on what it does and why it's interesting.}
+
+**Pulse insights:** {1-2 sentences.}
+
+---
+
+## Community Radar
+
+{Notable community sources, blogs, discussions}
+
+### [{source_name}]({url}) — {pithy tagline}
+
+{2-3 sentences on what makes this source valuable.}
+
+---
+
+## Quick Hits
+
+- **[{title}]({url})**: {one-line description}
+- ...
+
+---
+
+## The Hit List
+
+1. **{Action}** — {Why}
+2. ...
+
+---
+
+*{Randomised footer from BRAND_VOICE.md} · {date in words} · {suffix}*
+```
+
+### Step 6: Update State
+
+Update `news/state.json`:
+
+```json
+{
+  "version": "1.0",
+  "last_run": "{ISO timestamp}",
+  "seen_urls": [
+    "...existing...",
+    "...new urls from this run..."
+  ],
+  "seen_commits": {
+    "anthropics/claude-code": "{latest_sha}",
+    "anthropics/skills": "{latest_sha}",
+    ...
+  }
+}
+```
+
+Keep only last 30 days of URLs to prevent unbounded growth.
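
One way to sketch the pruning, assuming each entry is stored as a `YYYY-MM-DD url` line so ISO dates compare lexically (the `date -v` fallback covers BSD/macOS; the temp file stands in for the real state):

```bash
# Prune entries older than 30 days from a dated URL list.
STATE=$(mktemp)
printf '%s\n' "2020-01-01 https://example.com/ancient" \
  "$(date +%Y-%m-%d) https://example.com/today" > "$STATE"

# GNU date first, BSD/macOS fallback.
CUTOFF=$(date -d '30 days ago' +%Y-%m-%d 2>/dev/null || date -v-30d +%Y-%m-%d)

# ISO dates sort lexically, so a plain string comparison is enough.
awk -v cutoff="$CUTOFF" '$1 >= cutoff' "$STATE" > "${STATE}.pruned"
```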
+
+### Step 7: Display Summary
+
+```
+Pulse: 2025-12-12
+
+Fetched 23 sources
+Found 8 new items (15 deduplicated)
+
+Critical:
+  - anthropics/claude-code v1.2.0 released
+
+Digest written to: news/2025-12-12_pulse.md
+```
+
+## Fetching Strategy
+
+**Priority Order**:
+1. Try `WebFetch` first (fastest, built-in)
+2. If 403/blocked/JS-heavy, use Firecrawl
+3. For GitHub repos, always use `gh` CLI
+
+**Parallel Fetching**:
+- Fetch multiple sources simultaneously
+- Use retry with exponential backoff (2s, 4s, 8s, 16s)
+- Report progress: `[====------] 12/23 sources`
+
+**Error Handling**:
+- If source fails after 4 retries, log and continue
+- Include failed sources in digest footer
+- Don't fail entire run for single source failure
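
The retry policy above (an initial attempt plus 4 retries, sleeping 2s, 4s, 8s, 16s between attempts) can be sketched as follows; `RETRY_BASE_DELAY` is an illustrative override so the waits can be shortened, not an existing variable:

```bash
# Retry a command with exponential backoff; returns 1 if all attempts fail.
retry() {
  local delay="${RETRY_BASE_DELAY:-2}" attempt
  for attempt in 1 2 3 4 5; do
    "$@" && return 0
    if [ "$attempt" -lt 5 ]; then
      sleep "$delay"          # 2s, 4s, 8s, 16s between attempts
      delay=$((delay * 2))
    fi
  done
  return 1
}
```

A failed source is then logged and skipped rather than aborting the run, e.g. `retry fetch_source "$url" || echo "FAILED: $url" >> failures.log`.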
+
+## Output Example
+
+See `news/2025-12-12_pulse.md` for a complete example in the current format.
+
+Key elements:
+- Opening 2 paragraphs set the scene conversationally (see BRAND_VOICE.md)
+- "The Signal" section gets 2 paragraphs + extended insights
+- All source names link to parent sites
+- "Pulse insights:" replaces "Why it matters"
+- "The Hit List" for actionable items (not homework—marching orders)
+- Randomised footer from BRAND_VOICE.md variations
+
+## Edge Cases
+
+### No New Items
+```
+Pulse: 2025-12-12
+
+Fetched 23 sources
+No new items found (all 12 items already seen)
+
+Last digest: news/2025-12-11_pulse.md
+```
+
+### Source Failures
+```
+Pulse: 2025-12-12
+
+Fetched 23 sources (2 failed)
+Found 6 new items
+
+Failed sources:
+  - skillsmp.com (timeout after 4 retries)
+  - every.to (403 Forbidden)
+
+Digest written to: news/2025-12-12_pulse.md
+```
+
+### First Run (No State)
+Initialize state.json with empty arrays/objects before proceeding.
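
A sketch of that initialization, writing the empty v1.0 structure from Step 6 (the temp directory is illustrative and stands in for wherever the state file lives):

```bash
# First run: create the state file with empty arrays/objects if it's missing.
STATE_DIR=$(mktemp -d)
STATE="$STATE_DIR/state.json"
[ -f "$STATE" ] || cat > "$STATE" <<'EOF'
{
  "version": "1.0",
  "last_run": null,
  "seen_urls": [],
  "seen_commits": {}
}
EOF
```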
+
+## Integration
+
+The `/pulse` command is standalone but integrates with:
+- **claude-architect agent** - Reviews digests for actionable insights (configured in agent's startup)
+- **news/state.json** - Persistent deduplication state
+- **Firecrawl** - Primary fetching mechanism for blocked/JS sites
+- **gh CLI** - GitHub API access for repo updates
+
+## Notes
+
+- Run manually when you want ecosystem updates: `/pulse`
+- Use `--force` to regenerate today's digest with fresh data
+- Use `--days 7` for weekly catchup after vacation
+- Digests are git-trackable for historical reference
+- **Always read `pulse/BRAND_VOICE.md`** before writing summaries

+ 370 - 116
commands/review.md

@@ -1,40 +1,66 @@
 ---
-description: "Code review staged changes or specific files. Analyzes for bugs, style issues, security concerns, and suggests improvements."
+description: "Code review with semantic diffs, expert routing, and auto-TodoWrite. Analyzes staged changes or specific files for bugs, security, performance, and style."
 ---
 
 # Review - AI Code Review
 
-Perform a comprehensive code review on staged changes or specific files.
+Perform a comprehensive code review on staged changes, specific files, or pull requests. Routes to expert agents based on file types, respects project conventions, and automatically creates TodoWrite tasks for critical issues.
 
 ## Arguments
 
 $ARGUMENTS
 
 - No args: Review staged changes (`git diff --cached`)
-- File path: Review specific file
-- Directory: Review all files in directory
+- `<file>`: Review specific file
+- `<directory>`: Review all files in directory
 - `--all`: Review all uncommitted changes
+- `--pr <number>`: Review a GitHub PR
+- `--focus <security|perf|types|tests|style>`: Focus on specific area
+- `--depth <quick|normal|thorough>`: Review depth
 
-## What This Command Does
+## Architecture
 
-1. **Identify Target Code**
-   - Staged changes (default)
-   - Specific files/directories
-   - All uncommitted changes
-
-2. **Analyze For**
-   - Bugs and logic errors
-   - Security vulnerabilities
-   - Performance issues
-   - Style/convention violations
-   - Missing error handling
-   - Code smells
-
-3. **Provide Feedback**
-   - Issue severity (critical, warning, suggestion)
-   - Line-specific comments
-   - Suggested fixes
-   - Overall assessment
+```
+/review [target] [--focus] [--depth]
+    │
+    ├─→ Step 1: Determine Scope
+    │     ├─ No args → git diff --cached (staged)
+    │     ├─ --all → git diff HEAD (all uncommitted)
+    │     ├─ File path → specific file diff
+    │     └─ --pr N → gh pr diff N
+    │
+    ├─→ Step 2: Analyze Changes (parallel)
+    │     ├─ delta for syntax-highlighted diff
+    │     ├─ difft for semantic diff (structural)
+    │     ├─ Categorize: logic, style, test, docs, config
+    │     └─ Identify touched modules/components
+    │
+    ├─→ Step 3: Load Project Standards
+    │     ├─ AGENTS.md, CLAUDE.md conventions
+    │     ├─ .eslintrc, .prettierrc, pyproject.toml
+    │     ├─ Detect test framework
+    │     └─ Check CI config for existing linting
+    │
+    ├─→ Step 4: Route to Expert Reviewers
+    │     ├─ TypeScript → typescript-expert
+    │     ├─ React/JSX → react-expert
+    │     ├─ Python → python-expert
+    │     ├─ Vue → vue-expert
+    │     ├─ SQL/migrations → postgres-expert
+    │     ├─ Claude extensions → claude-architect
+    │     └─ Multi-domain → parallel expert dispatch
+    │
+    ├─→ Step 5: Generate Review
+    │     ├─ Severity: CRITICAL / WARNING / SUGGESTION / PRAISE
+    │     ├─ Line-specific comments (file:line refs)
+    │     ├─ Suggested fixes as diff blocks
+    │     └─ Overall verdict: Ready to commit? Y/N
+    │
+    └─→ Step 6: Integration
+          ├─ Auto-create TodoWrite for CRITICAL issues
+          ├─ Link to /saveplan for tracking
+          └─ Suggest follow-up: /test, /explain
+```
 
 ## Execution Steps
 
@@ -44,163 +70,391 @@ $ARGUMENTS
 # Default: staged changes
 git diff --cached --name-only
 
-# If no staged changes, prompt user
-git status --short
+# Check if anything is staged
+STAGED=$(git diff --cached --name-only | wc -l)
+if [ "$STAGED" -eq 0 ]; then
+    echo "No staged changes. Use --all for uncommitted or specify a file."
+    git status --short
+fi
+```
+
+**For PR review:**
+```bash
+gh pr diff $PR_NUMBER --patch
+```
+
+**For specific file:**
+```bash
+git diff HEAD -- "$FILE"
 ```
 
-### Step 2: Get Diff Content
+### Step 2: Analyze Changes
+
+Run semantic diff analysis (parallel where possible):
+
+**With difft (semantic):**
+```bash
+command -v difft >/dev/null 2>&1 && GIT_EXTERNAL_DIFF=difft git diff --cached || git diff --cached
+```
 
+**With delta (syntax highlighting):**
 ```bash
-# For staged changes
-git diff --cached
+command -v delta >/dev/null 2>&1 && git diff --cached | delta || git diff --cached
+```
 
-# For specific file
-git diff HEAD -- <file>
+**Categorize changes:**
+```bash
+# Get changed files
+git diff --cached --name-only | while IFS= read -r file; do
+    case "$file" in
+        *.test.* | *.spec.*) echo "TEST: $file" ;;
+        *.md | docs/*) echo "DOCS: $file" ;;
+        *.json | *.yaml | *.yml | *.toml) echo "CONFIG: $file" ;;
+        *) echo "CODE: $file" ;;
+    esac
+done
+```
 
-# For all changes
-git diff HEAD
+**Get diff statistics:**
+```bash
+git diff --cached --stat
 ```
 
-### Step 3: Analyze Code
+### Step 3: Load Project Standards
+
+**Check for project conventions:**
+```bash
+# Claude Code conventions
+cat AGENTS.md 2>/dev/null | head -50
+cat CLAUDE.md 2>/dev/null | head -50
+
+# Linting configs
+cat .eslintrc* 2>/dev/null | head -30
+cat .prettierrc* 2>/dev/null
+cat pyproject.toml 2>/dev/null | head -30
+
+# Test framework detection
+cat package.json 2>/dev/null | jq '.devDependencies | keys | map(select(test("jest|vitest|mocha|cypress|playwright")))' 2>/dev/null
+cat pyproject.toml 2>/dev/null | grep -E "pytest|unittest" 2>/dev/null
+```
 
-For each changed file, analyze:
+**Check CI for existing linting:**
+```bash
+cat .github/workflows/*.yml 2>/dev/null | grep -E "eslint|prettier|pylint|ruff" | head -10
+```
 
-**Bugs & Logic**
-- Null/undefined checks
-- Off-by-one errors
-- Race conditions
-- Unhandled edge cases
+### Step 4: Route to Expert Reviewers
 
-**Security**
-- SQL injection
-- XSS vulnerabilities
-- Hardcoded secrets
-- Insecure dependencies
+Determine experts based on changed files:
 
-**Performance**
-- N+1 queries
-- Unnecessary re-renders
-- Memory leaks
-- Blocking operations
+| File Pattern | Primary Expert | Secondary Expert |
+|--------------|----------------|------------------|
+| `*.ts` | typescript-expert | - |
+| `*.tsx` | react-expert | typescript-expert |
+| `*.vue` | vue-expert | typescript-expert |
+| `*.py` | python-expert | sql-expert (if ORM) |
+| `*.sql`, `migrations/*` | postgres-expert | - |
+| `agents/*.md`, `skills/*`, `commands/*` | claude-architect | - |
+| `*.test.*`, `*.spec.*` | cypress-expert | (framework expert) |
+| `wrangler.toml`, `workers/*` | wrangler-expert | cloudflare-expert |
+| `*.sh`, `*.bash` | bash-expert | - |
 
-**Style**
-- Naming conventions
-- Code organization
-- Documentation gaps
-- Dead code
+**Multi-domain changes:** If files span multiple domains, dispatch experts in parallel via Task tool.
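
The routing table above can be sketched as a small shell helper. The `*-expert` names follow the table; the `general-reviewer` fallback name is illustrative, not an agent that ships with this repo:

```shell
# Map a changed file to its primary reviewer agent (per the table above).
# Test/spec patterns must be checked before plain extensions so that
# "app.test.ts" routes to cypress-expert, not typescript-expert.
route_expert() {
    case "$1" in
        *.test.*|*.spec.*)               echo "cypress-expert" ;;
        *.tsx)                           echo "react-expert" ;;
        *.ts)                            echo "typescript-expert" ;;
        *.vue)                           echo "vue-expert" ;;
        *.py)                            echo "python-expert" ;;
        *.sql|migrations/*)              echo "postgres-expert" ;;
        *.sh|*.bash)                     echo "bash-expert" ;;
        agents/*.md|skills/*|commands/*) echo "claude-architect" ;;
        *)                               echo "general-reviewer" ;;  # fallback name is illustrative
    esac
}

# Unique set of experts needed for the staged diff.
git diff --cached --name-only 2>/dev/null | while IFS= read -r f; do
    route_expert "$f"
done | sort -u
```

Each expert in the resulting set can then be dispatched as a separate Task tool call.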
 
-### Step 4: Format Output
+**Invoke via Task tool:**
+```
+Task tool with subagent_type: "[detected]-expert"
+Prompt includes:
+  - Diff content
+  - Project conventions from AGENTS.md
+  - Linting config summaries
+  - Requested focus area
+  - Request for structured review output
+```
+
+### Step 5: Generate Review
+
+The expert produces a structured review:
 
 ```markdown
-# Code Review: <scope>
+# Code Review: [scope description]
 
 ## Summary
-- Files reviewed: N
-- Issues found: X (Y critical, Z warnings)
 
-## Critical Issues 🔴
+| Metric | Value |
+|--------|-------|
+| Files reviewed | N |
+| Lines changed | +X / -Y |
+| Issues found | N (X critical, Y warnings) |
+
+## Verdict
+
+**Ready to commit?** Yes / No
+
+[1-2 sentence summary of overall quality]
+
+---
+
+## Critical Issues
+
+### `src/auth/login.ts:42`
 
-### <filename>:<line>
-**Issue**: <description>
-**Risk**: <what could go wrong>
-**Fix**:
-\`\`\`diff
-- <old code>
-+ <suggested fix>
-\`\`\`
+**Issue:** SQL injection vulnerability in user input handling
 
-## Warnings 🟡
+**Risk:** Attacker can execute arbitrary SQL queries
 
-### <filename>:<line>
-**Issue**: <description>
-**Suggestion**: <how to improve>
+**Fix:**
+```diff
+- const query = `SELECT * FROM users WHERE id = ${userId}`;
++ const query = `SELECT * FROM users WHERE id = $1`;
++ const result = await db.query(query, [userId]);
+```
 
-## Suggestions 🔵
+---
 
-### <filename>:<line>
-**Suggestion**: <minor improvement>
+## Warnings
 
-## Overall Assessment
+### `src/components/Form.tsx:89`
 
-<1-2 sentence summary>
+**Issue:** Missing dependency in useEffect
 
-**Ready to commit?** Yes/No - <reasoning>
+**Suggestion:** Add `userId` to dependency array or use useCallback
+
+```diff
+- useEffect(() => { fetchUser(userId) }, []);
++ useEffect(() => { fetchUser(userId) }, [userId]);
+```
+
+---
+
+## Suggestions
+
+### `src/utils/helpers.ts:15`
+
+**Suggestion:** Consider using optional chaining
+
+```diff
+- const name = user && user.profile && user.profile.name;
++ const name = user?.profile?.name;
+```
+
+---
+
+## Praise
+
+### `src/services/api.ts:78`
+
+**Good pattern:** Proper error boundary with typed error handling. This is exactly the pattern we want to follow.
+
+---
+
+## Files Reviewed
+
+| File | Changes | Issues |
+|------|---------|--------|
+| `src/auth/login.ts` | +42/-8 | 1 critical |
+| `src/components/Form.tsx` | +89/-23 | 1 warning |
+| `src/utils/helpers.ts` | +15/-3 | 1 suggestion |
+
+## Follow-up
+
+- Run `/test src/auth/` to verify security fix
+- Run `/explain src/auth/login.ts` for deeper understanding
+- Use `/saveplan` to track these issues
+```
+
+### Step 6: Integration
+
+**Auto-create TodoWrite tasks for CRITICAL issues:**
+
+For each CRITICAL issue found, automatically add to TodoWrite:
+```
+TodoWrite:
+  - content: "Fix: SQL injection in login.ts:42"
+    status: "pending"
+    activeForm: "Fixing SQL injection in login.ts:42"
+```
+
+**Link to session management:**
+```
+Issues have been added to your task list.
+Run /saveplan to persist before ending session.
+```
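
Assuming the expert's findings are collected as structured records, the CRITICAL-only filter maps directly onto the TodoWrite shape shown above. The `severity`/`issue`/`file`/`line` field names here are illustrative, not a fixed schema:

```python
# Turn CRITICAL review findings into TodoWrite-shaped tasks.
# Field names on the findings records are illustrative assumptions.
def todos_from_findings(findings):
    return [
        {
            "content": f"Fix: {f['issue']} in {f['file']}:{f['line']}",
            "status": "pending",
            "activeForm": f"Fixing {f['issue']} in {f['file']}:{f['line']}",
        }
        for f in findings
        if f["severity"] == "CRITICAL"
    ]

findings = [
    {"severity": "CRITICAL", "issue": "SQL injection", "file": "login.ts", "line": 42},
    {"severity": "WARNING", "issue": "missing useEffect dep", "file": "Form.tsx", "line": 89},
]
print(todos_from_findings(findings))  # only the CRITICAL finding becomes a todo
```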
+
+## Severity System
+
+| Level | Icon | Meaning | Action | Auto-Todo? |
+|-------|------|---------|--------|------------|
+| CRITICAL | 🔴 | Security bug, data loss risk, crashes | Must fix before merge | Yes |
+| WARNING | 🟡 | Logic issues, performance problems | Should address | No |
+| SUGGESTION | 🔵 | Style, minor improvements | Optional | No |
+| PRAISE | ⭐ | Good patterns worth noting | Recognition | No |
+
+## Focus Modes
+
+| Mode | What It Checks |
+|------|----------------|
+| `--security` | OWASP top 10, secrets in code, injection, auth issues |
+| `--perf` | N+1 queries, unnecessary re-renders, complexity, memory |
+| `--types` | Type safety, `any` usage, generics, null handling |
+| `--tests` | Coverage gaps, test quality, mocking patterns |
+| `--style` | Naming, organization, dead code, comments |
+| (default) | All of the above |
+
+### Security Focus Example
+```bash
+/review --security
+```
+Checks for:
+- Hardcoded secrets, API keys
+- SQL/NoSQL injection
+- XSS vulnerabilities
+- Insecure dependencies
+- Auth/authz issues
+- CORS misconfigurations
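
A rough first pass for the hardcoded-secrets check needs nothing more than `grep`. The regex is a heuristic sketch, not an exhaustive scanner; `rg` with the same pattern is faster when installed:

```shell
# Flag lines that look like hardcoded credentials in staged files.
# Heuristic only: expect false positives, tune patterns per project.
scan_secrets() {
    grep -EinH "(api[_-]?key|secret|password|token)[[:space:]]*[:=][[:space:]]*[\"'][A-Za-z0-9_/+=-]{8,}[\"']" "$1"
}

git diff --cached --name-only 2>/dev/null | while IFS= read -r f; do
    scan_secrets "$f" || true   # grep exits 1 when a file is clean
done
```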
+
+### Performance Focus Example
+```bash
+/review --perf
+```
+Checks for:
+- N+1 database queries
+- Unnecessary re-renders (React)
+- Memory leaks
+- Blocking operations in async code
+- Unoptimized algorithms
+
+## Depth Modes
+
+| Mode | Behavior |
+|------|----------|
+| `--quick` | Surface-level scan, obvious issues only |
+| `--normal` | Standard review, all severity levels (default) |
+| `--thorough` | Deep analysis, traces data flow, checks edge cases |
+
+## CLI Tool Integration
+
+| Tool | Purpose | Fallback |
+|------|---------|----------|
+| `delta` | Syntax-highlighted diffs | `git diff` |
+| `difft` | Semantic/structural diffs | `git diff` |
+| `gh` | GitHub PR operations | Manual diff |
+| `rg` | Search for patterns | Grep tool |
+| `jq` | Parse JSON configs | Read manually |
+
+**Graceful degradation:**
+```bash
+command -v delta >/dev/null 2>&1 && git diff --cached | delta || git diff --cached
 ```
 
 ## Usage Examples
 
 ```bash
-# Review staged changes
+# Review staged changes (default)
 /review
 
+# Review all uncommitted changes
+/review --all
+
 # Review specific file
 /review src/auth/login.ts
 
-# Review directory
+# Review a directory
 /review src/components/
 
-# Review all uncommitted changes
-/review --all
+# Review a GitHub PR
+/review --pr 123
 
-# Review with specific focus
+# Security-focused review
 /review --security
-/review --performance
-```
 
-## Focus Flags
+# Performance-focused review
+/review --perf
 
-| Flag | Focus Area |
-|------|------------|
-| `--security` | Security vulnerabilities only |
-| `--performance` | Performance issues only |
-| `--style` | Style and conventions only |
-| `--bugs` | Logic errors and bugs only |
-| `--all-checks` | Everything (default) |
+# Quick scan before committing
+/review --quick
 
-## Severity Levels
+# Thorough review for important changes
+/review --thorough
 
-| Level | Meaning | Action |
-|-------|---------|--------|
-| 🔴 Critical | Must fix before merge | Blocking |
-| 🟡 Warning | Should address | Recommended |
-| 🔵 Suggestion | Nice to have | Optional |
+# Combined: thorough security review of PR
+/review --pr 456 --security --thorough
+```
 
 ## Framework-Specific Checks
 
 ### React/Next.js
 - Hook rules violations
-- Missing dependencies in useEffect
+- Missing useEffect dependencies
 - Key prop issues in lists
 - Server/client component boundaries
+- Hydration mismatches
 
 ### TypeScript
-- `any` type usage
-- Missing type annotations
+- `any` type abuse
+- Missing type annotations on exports
 - Incorrect generic constraints
-- Type assertion abuse
-
-### Node.js
-- Unhandled promise rejections
-- Sync operations in async context
-- Memory leak patterns
-- Insecure eval/exec usage
+- Type assertion overuse (`as`)
+- Null/undefined handling
 
 ### Python
 - Mutable default arguments
-- Bare except clauses
-- Resource leaks
+- Bare `except:` clauses
+- Resource leaks (files, connections)
 - SQL string formatting
+- Type hint inconsistencies
+
+### Vue
+- Reactivity gotchas
+- Missing `:key` in `v-for`
+- Props mutation
+- Composition API anti-patterns
+
+### SQL/Database
+- SQL injection risks
+- N+1 query patterns
+- Missing indexes
+- Transaction handling
+- Migration safety
 
 ## Integration
 
-Works well with:
-- `/test` - Generate tests for flagged issues
-- `/explain` - Deep dive into complex code
-- `/checkpoint` - Save state before fixing issues
+| Command | Relationship |
+|---------|--------------|
+| `/explain` | Deep dive into flagged code |
+| `/test` | Generate tests for issues found |
+| `/saveplan` | Persist review findings to session state |
+| `/plan` | Add review findings to project plan |
+
+## Workflow Examples
+
+### Pre-Commit Review
+```bash
+git add .
+/review
+# Fix issues...
+git commit -m "feat: add user auth"
+```
+
+### PR Review Workflow
+```bash
+/review --pr 123 --thorough
+# Creates TodoWrite tasks for critical issues
+# Fix issues...
+/saveplan "Addressed review findings"
+```
+
+### Security Audit
+```bash
+/review src/ --security --thorough
+# Comprehensive security scan of entire directory
+```
 
 ## Notes
 
 - Reviews are suggestions, not absolute rules
 - Context matters - some "issues" may be intentional
-- Use `--verbose` for detailed explanations
-- Reviews don't modify code - you decide what to fix
+- CRITICAL issues are auto-added to TodoWrite
+- Use `/saveplan` to persist review tasks across sessions
+- Expert agents provide framework-specific insights
+- Respects project conventions from AGENTS.md

+ 5 - 5
commands/saveplan.md

@@ -1,5 +1,5 @@
 ---
-description: "Save plan session state. Persists TodoWrite tasks, current plan step, and git context to .claude/claude-state.json."
+description: "Save plan session state. Persists TodoWrite tasks, current plan step, and git context to .claude/session-cache.json."
 ---
 
 # SavePlan - Session State Persistence
@@ -23,7 +23,7 @@ Save your current session state before ending work. Captures TodoWrite tasks, cu
     │     └─ Uncommitted changes
     └─→ Write state files
-          ├─ .claude/claude-state.json (machine)
+          ├─ .claude/session-cache.json (machine)
           └─ .claude/claude-progress.md (human)
 ```
 
@@ -42,7 +42,7 @@ This command bridges the gap by saving what Claude Code doesn't.
 
 ## Output Files
 
-### .claude/claude-state.json (v2.0)
+### .claude/session-cache.json (v2.0)
 
 ```json
 {
@@ -157,7 +157,7 @@ mkdir -p .claude
 ### Step 5: Write State Files
 
 Write both:
-- `.claude/claude-state.json` - machine-readable
+- `.claude/session-cache.json` - machine-readable
 - `.claude/claude-progress.md` - human-readable
 
 ### Step 6: Confirm
@@ -173,7 +173,7 @@ Write both:
 └────────────────────────────────────────────────────────────────────────────────────────────────┘
 
 Files:
-  • .claude/claude-state.json
+  • .claude/session-cache.json
   • .claude/claude-progress.md
 
 Restore with: /loadplan

+ 1 - 1
commands/showplan.md

@@ -88,7 +88,7 @@ If no plan found:
 If no saved state but plan exists:
 ```
 ┌─ Session ──────────────────────────────────────────────────────────────────────────────────────┐
-│ ⚠  State: No saved state found in .claude/claude-state.json                                    │
+│ ⚠  State: No saved state found in .claude/session-cache.json                                    │
 │ ✅  Plan: Project plan found at docs/PLAN.md                                                    │
 └────────────────────────────────────────────────────────────────────────────────────────────────┘
 ```

+ 1 - 1
commands/sync.md

@@ -71,7 +71,7 @@ Then WRITE `.claude/sync-cache.json`:
 
 Read cache, then run ONE bash for live state:
 ```bash
-git branch --show-current && git status --porcelain | wc -l && test -f .claude/claude-state.json && stat -c %Y .claude/claude-state.json 2>/dev/null
+git branch --show-current && git status --porcelain | wc -l && test -f .claude/session-cache.json && stat -c %Y .claude/session-cache.json 2>/dev/null
 ```
 
 Output cached content + live git/plan/state info.

+ 26 - 11
docs/DASH.md

@@ -1,5 +1,5 @@
 # 🎛️ Claude Mods Dashboard
-**Updated:** 2025-12-12 | **Extensions:** 45 | **Lines:** 9,567
+**Updated:** 2025-12-13 | **Extensions:** 51 | **Lines:** 13,553
 
 ---
 
@@ -7,9 +7,9 @@
 
 | Category | Count | Lines |
 |----------|-------|-------|
-| 🤖 **Agents** | 24 | 5,910 |
-| ⚡ **Skills** | 10 | 836 |
-| 🔧 **Commands** | 10 | 2,708 |
+| 🤖 **Agents** | 21 | 7,552 |
+| ⚡ **Skills** | 18 | 2,725 |
+| 🔧 **Commands** | 12 | 3,276 |
 | 📏 **Rules** | 1 | 113 |
 | 🧩 **Templates** | 2 | — |
 
@@ -26,8 +26,7 @@
 | 🤖 **cloudflare-expert** | Cloud | Workers, Pages, DNS |
 | 🤖 **craftcms-expert** | CMS | Craft CMS, Twig, GraphQL |
 | 🤖 **cypress-expert** | Testing | E2E, component tests |
-| 🤖 **fetch-expert** | Utility | Parallel web fetching |
-| 🤖 **firecrawl-expert** | Scraping | Web crawling, extraction |
+| 🤖 **firecrawl-expert** | Scraping | Web crawling, parallel fetch, extraction |
 | 🤖 **javascript-expert** | Language | Modern JS, async |
 | 🤖 **laravel-expert** | Backend | Laravel, Eloquent |
 | 🤖 **payloadcms-expert** | CMS | Payload architecture |
@@ -36,9 +35,7 @@
 | 🤖 **project-organizer** | Utility | Directory restructuring |
 | 🤖 **python-expert** | Language | Advanced Python |
 | 🤖 **react-expert** | Frontend | Hooks, Server Components |
-| 🤖 **rest-expert** | API | RESTful design |
 | 🤖 **sql-expert** | Database | Complex queries |
-| 🤖 **tailwind-expert** | CSS | Utility-first styling |
 | 🤖 **typescript-expert** | Language | Type system, generics |
 | 🤖 **vue-expert** | Frontend | Vue 3, Composition API |
 | 🤖 **wrangler-expert** | Cloud | Workers deployment |
@@ -48,17 +45,33 @@
 
 ## ⚡ Skills
 
+### Pattern Reference Skills
+| Skill | Triggers |
+|-------|----------|
+| ⚡ **rest-patterns** | REST API, HTTP methods, status codes |
+| ⚡ **tailwind-patterns** | Tailwind, utility classes, breakpoints |
+| ⚡ **sql-patterns** | CTEs, window functions, JOINs |
+| ⚡ **sqlite-ops** | SQLite, aiosqlite, local database |
+| ⚡ **mcp-patterns** | MCP server, Model Context Protocol |
+
+### CLI Tool Skills
 | Skill | Tool | Triggers |
 |-------|------|----------|
-| ⚡ **agent-discovery** | — | "Which agent?", recommend tools |
+| ⚡ **file-search** | fd, rg, fzf | Find files, search code, fuzzy select |
+| ⚡ **find-replace** | sd | Batch replace, modern sed |
 | ⚡ **code-stats** | tokei, difft | Line counts, semantic diffs |
 | ⚡ **data-processing** | jq, yq | JSON, YAML, TOML |
+| ⚡ **structural-search** | ast-grep | AST patterns |
+
+### Workflow Skills
+| Skill | Tool | Triggers |
+|-------|------|----------|
+| ⚡ **tool-discovery** | — | "Which agent/skill?", recommend tools |
 | ⚡ **git-workflow** | lazygit, gh, delta | Stage, PR, review |
 | ⚡ **project-docs** | — | AGENTS.md, conventions |
 | ⚡ **project-planner** | — | Stale plans, `/plan` |
-| ⚡ **python-env** | uv | Fast venv, pip |
+| ⚡ **python-env** | uv | Fast venv, pyproject.toml |
 | ⚡ **safe-file-reader** | bat, eza | View without prompts |
-| ⚡ **structural-search** | ast-grep | AST patterns |
 | ⚡ **task-runner** | just | Run tests, build |
 
 ---
@@ -75,7 +88,9 @@
 | 🔧 `/saveplan` | Save plan state |
 | 🔧 `/loadplan` | Restore plan from saved state |
 | 🔧 `/showplan` | Show plan progress |
+| 🔧 `/pulse` | Claude Code ecosystem news digest |
 | 🔧 `/review` | Code review staged changes |
+| 🔧 `/sync` | Session bootstrap with project context |
 | 🔧 `/test` | Generate tests |
 
 ---

+ 1 - 1
docs/PLAN.md

@@ -29,7 +29,7 @@ Build modular, composable tools that:
 ### Completed
 - [x] Session continuity commands (`/save`, `/load`)
   - Completed: 2025-11-27
-  - Persists TodoWrite state to `.claude/claude-state.json`
+  - Persists TodoWrite state to `.claude/session-cache.json`
   - Human-readable progress in `.claude/claude-progress.md`
 
 - [x] Plan persistence command (`/plan`)

+ 0 - 0
news/.gitkeep


File diff suppressed because it is too large
+ 152 - 0
news/2025-12-12_pulse.md


+ 154 - 0
news/2025-12-13_pulse.md

@@ -0,0 +1,154 @@
+# Pulse · December 13, 2025
+
+The plugin ecosystem just exploded. Overnight, GitHub lit up with a dozen new Claude Code plugin repos—marketplaces, skill collections, workflow automations. Some are personal toolkits made public, others are production-ready frameworks with Linear integration and ADR generation. It's like watching a Cambrian explosion of developer tooling, except everyone's building for the same organism.
+
+What's driving this? Two things: Anthropic's plugin spec finally clicked, and people are realizing that the best way to make Claude Code work for you is to teach it your workflow. Not abstract "best practices," but your actual process—your commit style, your testing philosophy, your code review checklist. The repos popping up this week aren't just code; they're crystallized developer opinions. And that's exactly what Claude needs to be useful.
+
+---
+
+## The Signal
+
+### [Very Important Agents](https://nicknisi.com/posts/very-important-agents)
+
+**[Nick Nisi](https://nicknisi.com)** · December 2025
+
+Nick's been on the Changelog & Friends podcast talking about Claude Code, and this post is the companion piece—a tour of the plugins that actually make it into his daily workflow. It's not a tutorial; it's a field report from someone who's been living in Claude Code for months and has strong opinions about what works.
+
+The real value here isn't the specific plugins (though those are useful), it's the meta-lesson: the best Claude Code setups are highly personal. Nick's workflow revolves around TypeScript, Vim keybindings, and a particular way of thinking about commits. His plugins encode those preferences. If you're still using vanilla Claude Code, this is a nudge to start building your own.
+
+**Pulse insights:** Nick's approach validates what we've been doing with claude-mods. The plugin-per-workflow pattern scales. Worth checking if any of his specific plugins solve problems we've been reinventing.
+
+---
+
+### [How I Use Every Claude Code Feature](https://blog.sshh.io/p/how-i-use-every-claude-code-feature)
+
+**[SSHH Blog](https://blog.sshh.io)** · December 2025
+
+Shrivu Shankar wrote the brain dump we all needed—a systematic tour of Claude Code's feature surface, with concrete examples of when each thing actually matters. This isn't documentation; it's documentation filtered through heavy usage. The sections on hooks and skills are particularly good because they show the non-obvious interactions.
+
+What stands out: Shrivu doesn't just list features, he explains when *not* to use them. The bit about avoiding over-engineering skills is gold. Sometimes a simple CLAUDE.md instruction beats a formal skill definition. That kind of judgment only comes from shipping real work with these tools.
+
+**Pulse insights:** Required reading for anyone building plugins. The "when not to use it" framing should inform our own skill documentation. We should audit our skills for over-engineering.
+
+---
+
+### [AI Can't Read Your Docs](https://blog.sshh.io/p/ai-cant-read-your-docs)
+
+**[SSHH Blog](https://blog.sshh.io)** · December 2025
+
+A counterintuitive claim: AI coding agents often struggle with well-documented codebases because the docs are optimized for humans, not LLMs. Shrivu argues for "LLM-native documentation"—structured metadata, explicit capability descriptions, examples that double as test cases. It's a provocation, but it's backed by real friction he's encountered.
+
+The deeper point: as AI agents become primary consumers of our code, we might need to write differently. Not dumbed-down, but differently structured. The implications for AGENTS.md files and skill descriptions are significant.
+
+**Pulse insights:** This should change how we write skill descriptions. Less prose, more structured capability declarations. Worth experimenting with the LLM-native doc format he proposes.
+
+---
+
+## GitHub Discoveries
+
+The `claude-code` topic exploded this week. Here's what stood out:
+
+### [lazyclaude](https://github.com/NikiforovAll/lazyclaude)
+
+**NikiforovAll** · A lazygit-style TUI for visualizing Claude Code customizations
+
+Finally, someone built the obvious thing: a terminal UI for browsing your Claude Code setup. See your skills, agents, hooks, and commands in one navigable interface. It's early, but the concept is solid—configuration visibility matters when you're managing dozens of customizations.
+
+**Pulse insights:** This solves a real problem. Our `.claude/` directory is getting unwieldy. Worth watching—or contributing to.
+
+---
+
+### [claude-deep-research](https://github.com/karanIPS/claude-deep-research)
+
+**karanIPS** · Deep Research workflow for Claude Code · ⭐ 20
+
+A pre-built configuration for running Claude Code in research mode—extended context, web search integration, structured output. It's essentially a "preset" for a specific use case, which is a pattern we haven't explored much.
+
+**Pulse insights:** The "preset for use case" pattern is interesting. We could package claude-mods configurations as installable presets rather than just loose files.
+
+---
+
+### [claude-code-skills](https://github.com/levnikolaevich/claude-code-skills)
+
+**levnikolaevich** · 29 production-ready skills for Agile workflows · ⭐ 11
+
+The most ambitious skill collection we've seen—Epic/Story/Task management, Risk-Based Testing, ADR generation, all with Linear integration. It's opinionated (Agile-heavy), but the implementation quality is high. Real documentation, real tests.
+
+**Pulse insights:** This is what mature Claude Code extensions look like. The Linear integration is particularly clever—skills that talk to external systems, not just Claude. Steal liberally.
+
+---
+
+### [gh-aw (GitHub Agentic Workflows)](https://github.com/githubnext/gh-aw)
+
+**githubnext** · GitHub's official agentic workflows experiment · ⭐ 265
+
+GitHub Next dropped this quietly: a framework for building AI-powered workflows that run on GitHub infrastructure. It's not Claude Code specific, but the patterns overlap heavily. Think of it as GitHub's answer to "what happens when agents need CI/CD."
+
+**Pulse insights:** This is GitHub signaling where they're headed. The workflow-as-code patterns here might become standard. Worth understanding even if we don't adopt directly.
+
+---
+
+### [claude-code-openai](https://github.com/sar4daniela/claude-code-openai)
+
+**sar4daniela** · Run Claude Code on OpenAI models · ⭐ 10
+
+Exactly what it sounds like: a shim that lets you use Claude Code's interface with OpenAI's models. Useful for comparison testing or when you hit Claude rate limits. The existence of this suggests Claude Code's UX is good enough to port.
+
+**Pulse insights:** The "Claude Code as interface, any model as backend" pattern is growing. Good for ecosystem diversity. We should ensure our plugins are model-agnostic where possible.
+
+---
+
+## Community Radar
+
+### [Claude Agent Skills: A First Principles Deep Dive](https://leehanchung.github.io/blogs/2025/10/26/claude-skills-deep-dive/)
+
+**[Lee Han Chung](https://leehanchung.github.io)** · Technical breakdown of how skills actually work
+
+Lee reverse-engineers the skill system from first principles—context injection, two-message patterns, LLM-based routing. If you've wondered why skills behave the way they do, this explains the mechanics. Dense but rewarding.
+
+**Pulse insights:** The "two-message pattern" explanation should be required reading before writing complex skills. We've been cargo-culting patterns without understanding why.
+
+---
+
+### [GitButler 0.16 - "Sweet Sixteen"](https://blog.gitbutler.com/gitbutler-0-16)
+
+**[GitButler](https://blog.gitbutler.com)** · New release with Agents Tab and AI tool integrations
+
+GitButler now has a dedicated Agents Tab for AI tool configuration. The integration story is getting smoother—less context-switching between Git client and terminal. The "rules" feature for commit policies is clever.
+
+**Pulse insights:** GitButler is becoming the Git UI for the AI era. If you're not using it, you're making life harder than necessary.
+
+---
+
+### [Think First, AI Second](https://every.to/p/think-first-ai-second)
+
+**[Every](https://every.to)** · Three principles for maintaining cognitive edge
+
+Every's take on not becoming dependent on AI for thinking. The principles are simple but important: formulate your own hypothesis before asking, review AI output critically, maintain skills you could do manually. Good hygiene for power users.
+
+**Pulse insights:** Worth internalizing. The "formulate before asking" principle applies directly to how we write prompts and skill descriptions.
+
+---
+
+## Quick Hits
+
+- **[lazygit-style TUI](https://github.com/NikiforovAll/lazyclaude)**: Browse your Claude Code config visually—finally
+- **[GPT-5.2 coverage](https://simonwillison.net/2025/Dec/11/gpt-52/)**: Simon's breakdown of OpenAI's latest—useful for comparison context
+- **[Useful patterns for HTML tools](https://simonwillison.net/2025/Dec/10/html-tools/)**: Simon on building browser-based AI tools
+- **[claude-code-otel](https://github.com/thinktecture-labs/claude-code-otel)**: OpenTelemetry integration for Claude Code metrics—observability matters
+- **[wasp-lang/claude-plugins](https://github.com/wasp-lang/claude-plugins)**: Wasp framework's official Claude Code integration
+- **[spec-oxide](https://github.com/marconae/spec-oxide)**: Spec-driven development with MCP—Rust implementation
+
+---
+
+## The Hit List
+
+1. **Install lazyclaude** — You need visibility into your `.claude/` directory chaos
+2. **Read Shrivu's feature guide** — Stop using Claude Code at 30% capacity
+3. **Audit your skills for over-engineering** — Sometimes CLAUDE.md instructions beat formal skills
+4. **Check claude-code-skills for Linear patterns** — If you use Linear, steal this integration
+5. **Consider presets** — Package your config as an installable setup, not loose files
+
+---
+
+*Brewed by Pulse · 13th December 2025 · 19 articles, 15 repos, zero hallucinations*

+ 0 - 0
pulse/.gitkeep


File diff suppressed because it is too large
+ 173 - 0
pulse/BRAND_VOICE.md


+ 159 - 0
pulse/NEWS_TEMPLATE.md

@@ -0,0 +1,159 @@
+# Pulse · {{DATE_WORDS}}
+
+{{INTRO}}
+
+---
+
+## The Signal
+
+{{#LEAD_STORIES}}
+### [{{TITLE}}]({{URL}})
+
+**[{{SOURCE_NAME}}]({{SOURCE_URL}})** · {{DATE}}
+
+{{SUMMARY_P1}}
+
+{{SUMMARY_P2}}
+
+**Pulse insights:** {{INSIGHTS}}
+
+---
+{{/LEAD_STORIES}}
+
+## Official Updates
+
+{{#OFFICIAL}}
+### [{{TITLE}}]({{URL}})
+
+**[{{SOURCE_NAME}}]({{SOURCE_URL}})** · {{DATE}}
+
+{{SUMMARY}}
+
+**Pulse insights:** {{INSIGHTS}}
+
+---
+{{/OFFICIAL}}
+
+## GitHub Discoveries
+
+{{GITHUB_INTRO}}
+
+{{#GITHUB_REPOS}}
+### [{{REPO_NAME}}]({{URL}})
+
+**{{AUTHOR}}** · {{ONE_LINER}}
+
+{{DESCRIPTION}}
+
+**Pulse insights:** {{INSIGHTS}}
+
+---
+{{/GITHUB_REPOS}}
+
+## Community Radar
+
+{{#COMMUNITY}}
+### [{{ARTICLE_TITLE}}]({{ARTICLE_URL}})
+
+**[{{SOURCE_NAME}}]({{SOURCE_URL}})** · {{DATE}}
+
+{{SUMMARY}}
+
+**Pulse insights:** {{INSIGHTS}}
+
+---
+{{/COMMUNITY}}
+
+## Quick Hits
+
+{{#QUICK_HITS}}
+- **[{{TITLE}}]({{URL}})**: {{ONE_LINER}}
+{{/QUICK_HITS}}
+
+---
+
+## The Hit List
+
+{{#ACTION_ITEMS}}
+{{INDEX}}. **{{ACTION}}** — {{REASON}}
+{{/ACTION_ITEMS}}
+
+---
+
+*{{FOOTER}}*
+
+---
+
+## Template Variables Reference
+
+### Global
+- `{{DATE_WORDS}}` - e.g., "December 12, 2025"
+- `{{DATE_ISO}}` - e.g., "2025-12-12"
+- `{{SOURCE_COUNT}}` - Total sources fetched
+- `{{INTRO}}` - Opening 2-paragraph hook (see BRAND_VOICE.md)
+- `{{FOOTER}}` - Randomised sign-off (see BRAND_VOICE.md footer variations)
+
+### The Signal (1-3 items)
+- `{{TITLE}}` - Article/post title
+- `{{URL}}` - Direct link to content
+- `{{SOURCE_NAME}}` - e.g., "Anthropic Engineering"
+- `{{SOURCE_URL}}` - Parent site URL
+- `{{DATE}}` - Publication date
+- `{{SUMMARY_P1}}` - First paragraph (hook + context)
+- `{{SUMMARY_P2}}` - Second paragraph (substance + implications)
+- `{{INSIGHTS}}` - 2-3 sentence Pulse insights
+
+### Official Updates
+Same as Lead Stories but with a single `{{SUMMARY}}` paragraph.
+
+### GitHub Discoveries
+- `{{GITHUB_INTRO}}` - Brief intro to the section
+- `{{REPO_NAME}}` - Repository name
+- `{{AUTHOR}}` - GitHub username
+- `{{ONE_LINER}}` - Brief description
+- `{{DESCRIPTION}}` - 1 paragraph explanation
+- `{{INSIGHTS}}` - 1-2 sentence insights
+
+### Community Radar
+- `{{ARTICLE_TITLE}}` - Specific article title (not just blog name)
+- `{{ARTICLE_URL}}` - Direct article link
+- `{{SOURCE_NAME}}` - Blog/publication name
+- `{{SOURCE_URL}}` - Blog homepage
+- `{{DATE}}` - Article date
+- `{{SUMMARY}}` - 1 paragraph summary
+- `{{INSIGHTS}}` - 1-2 sentence insights
+
+### Quick Hits (4-6 items)
+- `{{TITLE}}` - Item title
+- `{{URL}}` - Link
+- `{{ONE_LINER}}` - Pithy description (max 15 words)
+
+### The Hit List (3-5 items)
+- `{{INDEX}}` - Number (1, 2, 3...)
+- `{{ACTION}}` - What to do
+- `{{REASON}}` - Why it matters (brief)
+
+---
+
+## Section Guidelines
+
+### Intro ({{INTRO}})
+Two paragraphs. First hooks with a question or surprising observation. Second expands with "here's what we found" energy. Should feel like the opening of a really good newsletter—makes you want to keep reading. Be cheeky, be specific, avoid clichés.
+
+### The Signal
+Reserve for genuinely important items:
+- Breaking news from Anthropic
+- Major ecosystem shifts
+- Tools/patterns that change how people work
+
+### Community Radar
+**Must include specific recent articles**, not just blog links. Each entry should be a piece of content published in the last 7 days, with its own summary and insights.
+
+### Quick Hits
+Rapid-fire items that don't need full treatment but are worth knowing. Good for:
+- Minor updates
+- Interesting repos without much to say
+- Things to bookmark for later
+
+### The Hit List
+Formerly "Actionable Items." Should feel like marching orders, not homework. Frame as opportunities, not obligations.
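+
+### Worked Example
+
+To make the substitution concrete, here is one plausible rendering of a Quick Hits item (the title and URL are illustrative placeholders; the exact layout is set by the template above):
+
+```markdown
+- [Claude Code adds background shells](https://example.com/changelog) - long-running commands no longer block the session.
+```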

File diff suppressed because it is too large
+ 255 - 0
pulse/articles_cache.json


+ 366 - 0
pulse/fetch.py

@@ -0,0 +1,366 @@
+#!/usr/bin/env python3
+"""
+Pulse Fetch - Parallel URL fetching for Claude Code news digest.
+
+Uses asyncio + ThreadPoolExecutor to fetch multiple URLs via Firecrawl simultaneously.
+Outputs JSON with fetched content for LLM summarization.
+
+Usage:
+    python fetch.py                          # Fetch all sources
+    python fetch.py --sources blogs          # Fetch only blogs
+    python fetch.py --max-workers 20         # Increase parallelism
+    python fetch.py --output pulse.json
+    python fetch.py --discover-articles      # Extract recent articles from blog homepages
+"""
+
+import os
+import sys
+import json
+import re
+from datetime import datetime, timezone
+from concurrent.futures import ThreadPoolExecutor, as_completed
+from pathlib import Path
+from urllib.parse import urlparse
+import argparse
+
+# Try to import firecrawl
+try:
+    from firecrawl import FirecrawlApp
+    FIRECRAWL_AVAILABLE = True
+except ImportError:
+    FIRECRAWL_AVAILABLE = False
+    print("Warning: firecrawl not installed. Install with: pip install firecrawl-py")
+
+# Sources configuration
+SOURCES = {
+    "official": [
+        {"name": "Anthropic Engineering", "url": "https://www.anthropic.com/engineering", "type": "blog"},
+        {"name": "Claude Blog", "url": "https://claude.ai/blog", "type": "blog"},
+        {"name": "Claude Code Docs", "url": "https://code.claude.com", "type": "docs"},
+    ],
+    "blogs": [
+        {"name": "Simon Willison", "url": "https://simonwillison.net", "type": "blog"},
+        {"name": "Every", "url": "https://every.to", "type": "blog"},
+        {"name": "SSHH Blog", "url": "https://blog.sshh.io", "type": "blog"},
+        {"name": "Lee Han Chung", "url": "https://leehanchung.github.io", "type": "blog"},
+        {"name": "Nick Nisi", "url": "https://nicknisi.com", "type": "blog"},
+        {"name": "HumanLayer", "url": "https://www.humanlayer.dev/blog", "type": "blog"},
+        {"name": "Chris Dzombak", "url": "https://www.dzombak.com/blog", "type": "blog"},
+        {"name": "GitButler", "url": "https://blog.gitbutler.com", "type": "blog"},
+        {"name": "Docker Blog", "url": "https://www.docker.com/blog", "type": "blog"},
+        {"name": "Nx Blog", "url": "https://nx.dev/blog", "type": "blog"},
+        {"name": "Yee Fei Ooi", "url": "https://medium.com/@ooi_yee_fei", "type": "blog"},
+    ],
+    "community": [
+        {"name": "SkillsMP", "url": "https://skillsmp.com", "type": "marketplace"},
+        {"name": "Awesome Claude AI", "url": "https://awesomeclaude.ai", "type": "directory"},
+    ],
+}
+
+# Relevance keywords for filtering
+RELEVANCE_KEYWORDS = [
+    "claude", "claude code", "anthropic", "mcp", "model context protocol",
+    "agent", "skill", "subagent", "cli", "terminal", "prompt engineering",
+    "cursor", "windsurf", "copilot", "aider", "coding assistant", "hooks"
+]
+
+# Patterns to identify article links in markdown content
+ARTICLE_LINK_PATTERNS = [
+    # Standard markdown links with date-like paths
+    r'\[([^\]]+)\]\((https?://[^\)]+/\d{4}/[^\)]+)\)',
+    # Links with /blog/, /posts/, /p/ paths
+    r'\[([^\]]+)\]\((https?://[^\)]+/(?:blog|posts?|p|articles?)/[^\)]+)\)',
+    # Links with slugified titles (word-word-word pattern)
+    r'\[([^\]]+)\]\((https?://[^\)]+/[\w]+-[\w]+-[\w]+[^\)]*)\)',
+]
+
+# Exclude patterns (navigation, categories, tags, etc.)
+EXCLUDE_PATTERNS = [
+    r'/tag/', r'/category/', r'/author/', r'/page/', r'/archive/',
+    r'/about', r'/contact', r'/subscribe', r'/newsletter', r'/feed',
+    r'/search', r'/login', r'/signup', r'/privacy', r'/terms',
+    r'\.xml$', r'\.rss$', r'\.atom$', r'#', r'\?',
+]
+
+
+def fetch_url_firecrawl(app: 'FirecrawlApp', source: dict) -> dict:
+    """Fetch a single URL using Firecrawl API."""
+    url = source["url"]
+    name = source["name"]
+
+    try:
+        result = app.scrape(url, formats=['markdown'])
+
+        # Handle both dict and object responses
+        if hasattr(result, 'markdown'):
+            markdown = result.markdown or ''
+            metadata = result.metadata if isinstance(result.metadata, dict) else getattr(result.metadata, '__dict__', {})
+        else:
+            markdown = result.get('markdown', '')
+            metadata = result.get('metadata', {})
+
+        return {
+            "name": name,
+            "url": url,
+            "type": source.get("type", "unknown"),
+            "status": "success",
+            "content": markdown[:50000],  # Limit content size
+            "title": metadata.get('title', name),
+            "description": metadata.get('description', ''),
+            "fetched_at": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
+        }
+    except Exception as e:
+        return {
+            "name": name,
+            "url": url,
+            "type": source.get("type", "unknown"),
+            "status": "error",
+            "error": str(e),
+            "fetched_at": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
+        }
+
+
+def fetch_all_parallel(sources: list, max_workers: int = 10) -> list:
+    """Fetch all URLs in parallel using ThreadPoolExecutor."""
+    if not FIRECRAWL_AVAILABLE:
+        print("Error: firecrawl not available")
+        return []
+
+    api_key = os.getenv('FIRECRAWL_API_KEY')
+    if not api_key:
+        print("Error: FIRECRAWL_API_KEY environment variable not set")
+        return []
+
+    app = FirecrawlApp(api_key=api_key)
+    results = []
+    total = len(sources)
+    completed = 0
+
+    print(f"Fetching {total} URLs with {max_workers} workers...")
+
+    with ThreadPoolExecutor(max_workers=max_workers) as executor:
+        # Submit all tasks
+        future_to_source = {
+            executor.submit(fetch_url_firecrawl, app, source): source
+            for source in sources
+        }
+
+        # Process results as they complete
+        for future in as_completed(future_to_source):
+            source = future_to_source[future]
+            completed += 1
+
+            try:
+                result = future.result()
+                results.append(result)
+                status = "OK" if result["status"] == "success" else "FAIL"
+                print(f"[{completed}/{total}] {status}: {source['name']}")
+            except Exception as e:
+                print(f"[{completed}/{total}] ERROR: {source['name']} - {e}")
+                results.append({
+                    "name": source["name"],
+                    "url": source["url"],
+                    "status": "error",
+                    "error": str(e),
+                })
+
+    return results
+
+
+def extract_article_links(content: str, base_url: str, max_articles: int = 5) -> list:
+    """Extract article links from markdown content."""
+    articles = []
+    seen_urls = set()
+    base_domain = urlparse(base_url).netloc
+
+    for pattern in ARTICLE_LINK_PATTERNS:
+        matches = re.findall(pattern, content)
+        for title, url in matches:
+            # Skip if already seen
+            if url in seen_urls:
+                continue
+
+            # Skip excluded patterns
+            if any(re.search(exc, url, re.IGNORECASE) for exc in EXCLUDE_PATTERNS):
+                continue
+
+            # Keep only same-domain links (the patterns above match absolute URLs only)
+            parsed = urlparse(url)
+            if parsed.netloc and parsed.netloc != base_domain:
+                continue
+
+            # Clean up title
+            title = title.strip()
+            if len(title) < 5 or len(title) > 200:
+                continue
+
+            # Skip generic link text
+            if title.lower() in ['read more', 'continue reading', 'link', 'here', 'click here']:
+                continue
+
+            seen_urls.add(url)
+            articles.append({
+                "title": title,
+                "url": url,
+            })
+
+    return articles[:max_articles]
+
+
+def discover_articles(sources: list, max_workers: int = 10, max_articles_per_source: int = 5) -> list:
+    """Fetch blog homepages and extract recent article links."""
+    if not FIRECRAWL_AVAILABLE:
+        print("Error: firecrawl not available")
+        return []
+
+    api_key = os.getenv('FIRECRAWL_API_KEY')
+    if not api_key:
+        print("Error: FIRECRAWL_API_KEY environment variable not set")
+        return []
+
+    # First, fetch all blog homepages
+    print(f"Phase 1: Fetching {len(sources)} blog homepages...")
+    homepage_results = fetch_all_parallel(sources, max_workers=max_workers)
+
+    # Extract article links from each
+    all_articles = []
+    print("\nPhase 2: Extracting article links...")
+
+    for result in homepage_results:
+        if result["status"] != "success":
+            continue
+
+        content = result.get("content", "")
+        base_url = result["url"]
+        source_name = result["name"]
+
+        articles = extract_article_links(content, base_url, max_articles=max_articles_per_source)
+        print(f"  {source_name}: found {len(articles)} articles")
+
+        for article in articles:
+            all_articles.append({
+                "name": article["title"],
+                "url": article["url"],
+                "type": "article",
+                "source_name": source_name,
+                "source_url": base_url,
+            })
+
+    if not all_articles:
+        print("No articles found to fetch")
+        return homepage_results
+
+    # Phase 3: Fetch individual articles
+    print(f"\nPhase 3: Fetching {len(all_articles)} individual articles...")
+    article_results = fetch_all_parallel(all_articles, max_workers=max_workers)
+
+    # Results arrive in completion order (as_completed), so match source info
+    # back to each article by URL rather than by list index
+    source_by_url = {a["url"]: a for a in all_articles}
+    for result in article_results:
+        src = source_by_url.get(result.get("url"), {})
+        result["source_name"] = src.get("source_name", "")
+        result["source_url"] = src.get("source_url", "")
+
+    return article_results
+
+
+def filter_relevant_content(results: list) -> list:
+    """Filter results to only those with Claude Code relevant content."""
+    relevant = []
+
+    for result in results:
+        if result["status"] != "success":
+            continue
+
+        content = ((result.get("content") or "") + " " +
+                   (result.get("title") or "") + " " +
+                   (result.get("description") or "")).lower()
+
+        # Check for relevance keywords
+        for keyword in RELEVANCE_KEYWORDS:
+            if keyword.lower() in content:
+                result["relevant_keyword"] = keyword
+                relevant.append(result)
+                break
+
+    return relevant
+
+
+def main():
+    parser = argparse.ArgumentParser(description="Pulse Fetch - Parallel URL fetching")
+    parser.add_argument("--sources", choices=["all", "official", "blogs", "community"],
+                        default="all", help="Source category to fetch")
+    parser.add_argument("--max-workers", type=int, default=10,
+                        help="Maximum parallel workers (default: 10)")
+    parser.add_argument("--output", "-o", type=str, default=None,
+                        help="Output JSON file (default: stdout)")
+    parser.add_argument("--filter-relevant", action="store_true",
+                        help="Only include results with relevant keywords")
+    parser.add_argument("--discover-articles", action="store_true",
+                        help="Extract and fetch individual articles from blog homepages")
+    parser.add_argument("--max-articles-per-source", type=int, default=5,
+                        help="Max articles to fetch per source (default: 5)")
+    args = parser.parse_args()
+
+    # Collect sources based on selection
+    if args.sources == "all":
+        sources = []
+        for category in SOURCES.values():
+            sources.extend(category)
+    else:
+        sources = SOURCES.get(args.sources, [])
+
+    if not sources:
+        print(f"No sources found for category: {args.sources}")
+        return 1
+
+    # Fetch URLs - either discover articles or just fetch homepages
+    if args.discover_articles:
+        # Filter to only blog-type sources for article discovery
+        blog_sources = [s for s in sources if s.get("type") == "blog"]
+        if not blog_sources:
+            print("No blog sources found for article discovery")
+            return 1
+        results = discover_articles(
+            blog_sources,
+            max_workers=args.max_workers,
+            max_articles_per_source=args.max_articles_per_source
+        )
+    else:
+        results = fetch_all_parallel(sources, max_workers=args.max_workers)
+
+    # Filter if requested
+    if args.filter_relevant:
+        results = filter_relevant_content(results)
+        print(f"\nFiltered to {len(results)} relevant results")
+
+    # Prepare output
+    output = {
+        "fetched_at": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
+        "total_sources": len(sources),
+        "successful": len([r for r in results if r.get("status") == "success"]),
+        "failed": len([r for r in results if r.get("status") != "success"]),
+        "results": results,
+    }
+
+    # Output
+    json_output = json.dumps(output, indent=2)
+
+    if args.output:
+        Path(args.output).write_text(json_output, encoding="utf-8")
+        print(f"\nResults saved to: {args.output}")
+    else:
+        print("\n" + "=" * 60)
+        print("RESULTS")
+        print("=" * 60)
+        print(json_output)
+
+    # Summary
+    print(f"\n{'=' * 60}")
+    print(f"SUMMARY: {output['successful']}/{output['total_sources']} successful")
+    print(f"{'=' * 60}")
+
+    return 0
+
+
+if __name__ == "__main__":
+    sys.exit(main())

File diff suppressed because it is too large
+ 168 - 0
pulse/fetch_cache.json


+ 31 - 0
pulse/state.json

@@ -0,0 +1,31 @@
+{
+  "version": "1.0",
+  "last_run": "2025-12-12T12:20:00Z",
+  "seen_urls": [
+    "https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents",
+    "https://www.anthropic.com/engineering/advanced-tool-use",
+    "https://www.anthropic.com/engineering/code-execution-with-mcp",
+    "https://www.anthropic.com/engineering/claude-code-sandboxing",
+    "https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills",
+    "https://github.com/marconae/spec-oxide",
+    "https://github.com/automazeio/vibeproxy",
+    "https://github.com/naxoc/riffrag",
+    "https://github.com/max-sixty/worktrunk",
+    "https://github.com/zubayer0077/Claude-Multi-Agent-Research-System-Skill",
+    "https://github.com/YonickAzuma/claude-code-mux",
+    "https://skillsmp.com",
+    "https://awesomeclaude.ai",
+    "https://simonwillison.net",
+    "https://blog.sshh.io"
+  ],
+  "seen_commits": {
+    "anthropics/claude-code": "2192c86"
+  },
+  "digest_history": [
+    {
+      "date": "2025-12-12",
+      "file": "2025-12-12_pulse.md",
+      "items_count": 24
+    }
+  ]
+}
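The `seen_urls` list is what lets repeat runs skip items that already appeared in a digest. A minimal sketch of how a consumer might filter fresh fetch results against this file (`filter_unseen` and `mark_seen` are hypothetical helpers, not part of this commit):

```python
import json
from pathlib import Path


def filter_unseen(results: list, state_path: str) -> list:
    """Drop results whose URL is already recorded in state.json."""
    state = json.loads(Path(state_path).read_text(encoding="utf-8"))
    seen = set(state.get("seen_urls", []))
    return [r for r in results if r.get("url") not in seen]


def mark_seen(results: list, state_path: str) -> None:
    """Record the URLs of processed results back into state.json."""
    path = Path(state_path)
    state = json.loads(path.read_text(encoding="utf-8"))
    seen = set(state.get("seen_urls", []))
    seen.update(r["url"] for r in results if r.get("url"))
    state["seen_urls"] = sorted(seen)
    path.write_text(json.dumps(state, indent=2), encoding="utf-8")
```

A real implementation would likely also cap or age out old entries so `seen_urls` does not grow without bound.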

+ 0 - 101
scripts/install.ps1

@@ -1,101 +0,0 @@
-# claude-mods installer for Windows
-# Creates symlinks to Claude Code directories (requires admin or developer mode)
-
-$ErrorActionPreference = "Stop"
-
-$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
-$ClaudeDir = Join-Path $env:USERPROFILE ".claude"
-
-Write-Host "Installing claude-mods..." -ForegroundColor Cyan
-Write-Host "Source: $ScriptDir"
-Write-Host "Target: $ClaudeDir"
-Write-Host ""
-
-# Create Claude directories if they don't exist
-$dirs = @("commands", "skills", "agents")
-foreach ($dir in $dirs) {
-    $path = Join-Path $ClaudeDir $dir
-    if (!(Test-Path $path)) {
-        New-Item -ItemType Directory -Path $path -Force | Out-Null
-    }
-}
-
-# Install commands
-Write-Host "Installing commands..." -ForegroundColor Yellow
-$commandsDir = Join-Path $ScriptDir "commands"
-if (Test-Path $commandsDir) {
-    Get-ChildItem -Path $commandsDir -Directory | ForEach-Object {
-        $cmdName = $_.Name
-        $cmdFile = Join-Path $_.FullName "$cmdName.md"
-        if (Test-Path $cmdFile) {
-            $target = Join-Path $ClaudeDir "commands\$cmdName.md"
-            if (Test-Path $target) {
-                Write-Host "  Updating: $cmdName.md"
-                Remove-Item $target -Force
-            } else {
-                Write-Host "  Installing: $cmdName.md"
-            }
-            # Try symlink first, fall back to copy
-            try {
-                New-Item -ItemType SymbolicLink -Path $target -Target $cmdFile -Force | Out-Null
-            } catch {
-                Copy-Item $cmdFile $target -Force
-                Write-Host "    (copied - enable Developer Mode for symlinks)" -ForegroundColor DarkGray
-            }
-        }
-    }
-}
-
-# Install skills
-Write-Host "Installing skills..." -ForegroundColor Yellow
-$skillsDir = Join-Path $ScriptDir "skills"
-if (Test-Path $skillsDir) {
-    Get-ChildItem -Path $skillsDir -Directory | ForEach-Object {
-        $skillName = $_.Name
-        $target = Join-Path $ClaudeDir "skills\$skillName"
-        if (Test-Path $target) {
-            Write-Host "  Updating: $skillName"
-            Remove-Item $target -Recurse -Force
-        } else {
-            Write-Host "  Installing: $skillName"
-        }
-        # Try symlink first, fall back to copy
-        try {
-            New-Item -ItemType SymbolicLink -Path $target -Target $_.FullName -Force | Out-Null
-        } catch {
-            Copy-Item $_.FullName $target -Recurse -Force
-            Write-Host "    (copied - enable Developer Mode for symlinks)" -ForegroundColor DarkGray
-        }
-    }
-}
-
-# Install agents
-Write-Host "Installing agents..." -ForegroundColor Yellow
-$agentsDir = Join-Path $ScriptDir "agents"
-if (Test-Path $agentsDir) {
-    Get-ChildItem -Path $agentsDir -Filter "*.md" | ForEach-Object {
-        $agentName = $_.Name
-        $target = Join-Path $ClaudeDir "agents\$agentName"
-        if (Test-Path $target) {
-            Write-Host "  Updating: $agentName"
-            Remove-Item $target -Force
-        } else {
-            Write-Host "  Installing: $agentName"
-        }
-        # Try symlink first, fall back to copy
-        try {
-            New-Item -ItemType SymbolicLink -Path $target -Target $_.FullName -Force | Out-Null
-        } catch {
-            Copy-Item $_.FullName $target -Force
-            Write-Host "    (copied - enable Developer Mode for symlinks)" -ForegroundColor DarkGray
-        }
-    }
-}
-
-Write-Host ""
-Write-Host "Installation complete!" -ForegroundColor Green
-Write-Host ""
-Write-Host "Installed to:"
-Write-Host "  Commands: $ClaudeDir\commands\"
-Write-Host "  Skills:   $ClaudeDir\skills\"
-Write-Host "  Agents:   $ClaudeDir\agents\"

+ 0 - 78
scripts/install.sh

@@ -1,78 +0,0 @@
-#!/bin/bash
-
-# claude-mods installer for Linux/macOS
-# Creates symlinks to Claude Code directories
-
-set -e
-
-SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-CLAUDE_DIR="$HOME/.claude"
-
-echo "Installing claude-mods..."
-echo "Source: $SCRIPT_DIR"
-echo "Target: $CLAUDE_DIR"
-echo ""
-
-# Create Claude directories if they don't exist
-mkdir -p "$CLAUDE_DIR/commands"
-mkdir -p "$CLAUDE_DIR/skills"
-mkdir -p "$CLAUDE_DIR/agents"
-
-# Install commands
-echo "Installing commands..."
-for cmd_dir in "$SCRIPT_DIR/commands"/*/; do
-    if [ -d "$cmd_dir" ]; then
-        cmd_name=$(basename "$cmd_dir")
-        # Look for the main .md file
-        if [ -f "$cmd_dir/$cmd_name.md" ]; then
-            target="$CLAUDE_DIR/commands/$cmd_name.md"
-            if [ -L "$target" ] || [ -f "$target" ]; then
-                echo "  Updating: $cmd_name.md"
-                rm -f "$target"
-            else
-                echo "  Installing: $cmd_name.md"
-            fi
-            ln -s "$cmd_dir/$cmd_name.md" "$target"
-        fi
-    fi
-done
-
-# Install skills
-echo "Installing skills..."
-for skill_dir in "$SCRIPT_DIR/skills"/*/; do
-    if [ -d "$skill_dir" ]; then
-        skill_name=$(basename "$skill_dir")
-        target="$CLAUDE_DIR/skills/$skill_name"
-        if [ -L "$target" ] || [ -d "$target" ]; then
-            echo "  Updating: $skill_name"
-            rm -rf "$target"
-        else
-            echo "  Installing: $skill_name"
-        fi
-        ln -s "$skill_dir" "$target"
-    fi
-done
-
-# Install agents
-echo "Installing agents..."
-for agent_file in "$SCRIPT_DIR/agents"/*.md; do
-    if [ -f "$agent_file" ]; then
-        agent_name=$(basename "$agent_file")
-        target="$CLAUDE_DIR/agents/$agent_name"
-        if [ -L "$target" ] || [ -f "$target" ]; then
-            echo "  Updating: $agent_name"
-            rm -f "$target"
-        else
-            echo "  Installing: $agent_name"
-        fi
-        ln -s "$agent_file" "$target"
-    fi
-done
-
-echo ""
-echo "Installation complete!"
-echo ""
-echo "Installed to:"
-echo "  Commands: $CLAUDE_DIR/commands/"
-echo "  Skills:   $CLAUDE_DIR/skills/"
-echo "  Agents:   $CLAUDE_DIR/agents/"

+ 0 - 77
skills/agent-discovery/SKILL.md

@@ -1,77 +0,0 @@
----
-name: agent-discovery
-description: "Analyze current task or codebase and recommend specialized agents. Triggers on: which agent, what tool should I use, help me choose, recommend agent, find the right agent, what agents are available."
----
-
-# Agent Discovery
-
-Analyze the current context and recommend the best specialized agents for the task.
-
-## How to Use
-
-1. **Analyze context** - Check file types, project structure, user's request
-2. **Match to agents** - Use patterns below to identify relevant agents
-3. **Run `/agents`** - Get the full current list of available agents
-4. **Recommend** - Suggest 1-2 primary agents with rationale
-
-## Quick Matching Guide
-
-### By File Extension
-
-| Files | Suggested Agent |
-|-------|-----------------|
-| `.py` | python-expert |
-| `.js`, `.ts`, `.jsx`, `.tsx` | javascript-expert |
-| `.php`, Laravel | laravel-expert |
-| `.sql` | sql-expert, postgres-expert |
-| `.sh`, `.bash` | bash-expert |
-| `.astro` | astro-expert |
-| Tailwind classes | tailwind-expert |
-
-### By Project Type
-
-| Indicators | Suggested Agent |
-|------------|-----------------|
-| `pyproject.toml`, `setup.py` | python-expert |
-| `package.json` | javascript-expert |
-| `composer.json` | laravel-expert |
-| `wrangler.toml` | wrangler-expert |
-| `payload.config.ts` | payloadcms-expert |
-| K8s/Docker/AWS | aws-fargate-ecs-expert |
-| REST API code | rest-expert |
-
-### By Task Type
-
-| Task | Suggested Agent |
-|------|-----------------|
-| Explore codebase | Explore |
-| Reorganize files | project-organizer |
-| Web scraping | firecrawl-expert |
-| Fetch multiple URLs | fetch-expert |
-| Database optimization | postgres-expert |
-| CI/CD scripts | bash-expert |
-
-## Workflow
-
-```
-1. User: "Which agent should I use for X?"
-
-2. Claude:
-   - Analyze current directory (glob for file types)
-   - Check for config files (package.json, pyproject.toml, etc.)
-   - Consider user's stated task
-   - Run /agents to see full list
-
-3. Output:
-   PRIMARY: [agent-name] - [why]
-   SECONDARY: [agent-name] - [optional, if relevant]
-
-   To launch: Use Task tool with subagent_type="[agent-name]"
-```
-
-## Tips
-
-- Prefer specialized experts over general-purpose for focused tasks
-- Suggest parallel execution when agents work on independent concerns
-- Maximum 2-3 agents per task - don't over-recommend
-- Always run `/agents` first to see what's currently available

+ 206 - 0
skills/file-search/SKILL.md

@@ -0,0 +1,206 @@
+---
+name: file-search
+description: "Modern file and content search using fd, ripgrep (rg), and fzf for interactive selection. Triggers on: fd, ripgrep, rg, find files, search code, fzf, fuzzy find, search codebase."
+---
+
+# File Search
+
+## fd - Find Files (Better than find)
+
+### Basic Usage
+```bash
+# Find by name (case-insensitive by default)
+fd config                    # Files containing "config"
+fd "\.ts$"                   # TypeScript files (regex)
+fd -e py                     # Python files by extension
+
+# Multiple extensions
+fd -e js -e ts               # JS and TS files
+fd -e md -e txt              # Markdown and text files
+```
+
+### Filtering
+```bash
+# By type
+fd -t f config               # Files only
+fd -t d src                  # Directories only
+fd -t l                      # Symlinks only
+
+# By depth
+fd -d 2 config               # Max 2 levels deep
+fd --min-depth 2 config      # At least 2 levels deep
+
+# Include hidden/ignored
+fd -H config                 # Include hidden files
+fd -I config                 # Include .gitignore'd files
+fd -HI config                # Include both
+```
+
+### Exclusion
+```bash
+# Exclude patterns
+fd -E "*.min.js" -E "dist/"  # Exclude minified and dist
+fd -E node_modules           # Exclude node_modules
+fd config -E "*.bak"         # Find config, exclude backups
+```
+
+### Execute Commands
+```bash
+# Run command on each result
+fd -e py -x wc -l            # Line count for each Python file
+fd -e ts -x bat {}           # View each TypeScript file with bat
+fd -e json -x jq . {}        # Pretty print each JSON file
+```
+
+## ripgrep (rg) - Search Content (Better than grep)
+
+### Basic Usage
+```bash
+# Simple search
+rg "TODO"                    # Find TODO in all files
+rg "function \w+"            # Regex pattern
+rg -i "error"                # Case-insensitive
+rg -w "log"                  # Word boundary (not "catalog")
+```
+
+### File Filtering
+```bash
+# By type
+rg -t py "import"            # Search Python files only
+rg -t js -t ts "async"       # JS and TS files
+rg --type-list               # Show all known types
+
+# By glob
+rg -g "*.tsx" "useState"     # Search .tsx files
+rg -g "!*.test.*" "fetch"    # Exclude test files
+rg -g "src/**" "config"      # Only in src directory
+```
+
+### Context and Format
+```bash
+# Show context lines
+rg -C 3 "function"           # 3 lines before and after
+rg -B 2 -A 5 "class"         # 2 before, 5 after
+
+# Output format
+rg -l "TODO"                 # File names only
+rg -c "TODO"                 # Count per file
+rg --json "TODO"             # JSON output
+rg -n "TODO"                 # With line numbers (default)
+```
+
+### Advanced Patterns
+```bash
+# Multiline
+rg -U "class.*\n.*constructor"   # Across lines
+
+# Fixed strings (no regex)
+rg -F "[]"                   # Literal brackets
+
+# Invert match
+rg -v "console.log"          # Lines NOT containing
+
+# Replace (preview)
+rg "oldFunc" -r "newFunc"    # Show replacements (use sd to apply)
+```
+
+## fzf - Interactive Selection
+
+### Basic Workflows
+```bash
+# Find and open file
+fd | fzf                             # Select file interactively
+fd -e py | fzf                       # Select from Python files
+
+# Find and edit
+nvim $(fd -e ts | fzf)               # Open selected in nvim
+code $(fd | fzf -m)                  # Open multiple in VS Code
+```
+
+### With Preview
+```bash
+# Preview with bat
+fd | fzf --preview 'bat --color=always {}'
+
+# Preview with rg context
+rg -l "TODO" | fzf --preview 'rg -C 3 "TODO" {}'
+```
+
+### Multi-Select
+```bash
+# Select multiple (Tab to mark, Enter to confirm)
+fd -e ts | fzf -m                    # Multi-select mode
+fd -e ts | fzf -m | xargs rm         # Delete selected
+```
+
+### Combined Workflows
+```bash
+# Fuzzy grep: search content, select file, open at line
+rg -n "pattern" | fzf --delimiter : --preview 'bat --color=always {1} --highlight-line {2}'
+
+# Kill process interactively
+procs | fzf | awk '{print $1}' | xargs kill
+
+# Git branch checkout
+git branch --format='%(refname:short)' | fzf | xargs git checkout
+
+# Git log with preview
+git log --oneline | fzf --preview 'git show --color=always {1}'
+```
+
+## Combined Patterns
+
+### Find and Search
+```bash
+# Find Python files, search for pattern
+fd -e py -x rg -H "async def" {}
+
+# Search inside directories named src or lib
+rg "import" $(fd -t d '^(src|lib)$')
+```
+
+### Find, Select, Act
+```bash
+# Interactive file deletion
+fd -t f "\.bak$" | fzf -m | xargs -p rm
+
+# Interactive config editing
+nvim "$(fd -g '*.config.*' | fzf --preview 'bat {}')"
+```
+
+### Codebase Exploration
+```bash
+# Find all entry points
+rg -l "^(export )?function main|^if __name__"
+
+# Find all TODO/FIXME with context
+rg -C 2 "TODO|FIXME|HACK|XXX"
+
+# List exported names (a starting point for finding unused exports)
+rg "export (const|function|class) (\w+)" -o -r '$2' | sort -u
+```
+
+## Performance Tips
+
+| Tip | Why |
+|-----|-----|
+| Both respect `.gitignore` | Automatically skip node_modules, dist, etc. |
+| Use `-t` over `-g` when possible | Type flags are faster than globs |
+| Narrow the path | `rg pattern src/` faster than `rg pattern` |
+| Use `-F` for literal strings | Avoids regex engine overhead |
+| Use `-u` (search ignored files) only when needed | Scanning ignored and hidden files slows searches down |
+
+## Quick Reference
+
+| Task | Command |
+|------|---------|
+| Find TS files | `fd -e ts` |
+| Find in src only | `fd -e ts . src/` |
+| Search for pattern | `rg "pattern"` |
+| Search in type | `rg -t py "import"` |
+| Files containing | `rg -l "pattern"` |
+| Count matches | `rg -c "pattern"` |
+| Interactive select | `fd \| fzf` |
+| Multi-select | `fd \| fzf -m` |
+| Preview files | `fd \| fzf --preview 'bat {}'` |

+ 221 - 0
skills/find-replace/SKILL.md

@@ -0,0 +1,221 @@
+---
+name: find-replace
+description: "Modern find-and-replace using sd (simpler than sed) and batch replacement patterns. Triggers on: sd, find replace, batch replace, sed replacement, string replacement, rename."
+---
+
+# Find Replace
+
+## sd Basics
+
+### Simple Replacement
+```bash
+# Replace in file (in-place)
+sd 'oldText' 'newText' file.txt
+
+# Replace in multiple files
+sd 'oldText' 'newText' *.js
+
+# Preview without changing (pipe instead of file)
+cat file.txt | sd 'old' 'new'
+```
+
+### sd vs sed Comparison
+
+| sed | sd | Notes |
+|-----|-----|-------|
+| `sed 's/old/new/g'` | `sd 'old' 'new'` | Global by default |
+| `sed -i 's/old/new/g'` | `sd 'old' 'new' file` | In-place by default |
+| `sed 's/\./dot/g'` | `sd '\.' 'dot'` | Same escaping |
+| `sed 's#path/to#new/path#g'` | `sd 'path/to' 'new/path'` | No delimiter issues |
+
+## Common Patterns
+
+### Variable Rename
+```bash
+# Rename variable across files
+sd 'oldVarName' 'newVarName' src/**/*.ts
+
+# Preview first with rg
+rg 'oldVarName' src/
+# Then apply
+sd 'oldVarName' 'newVarName' $(rg -l 'oldVarName' src/)
+```
+
+### Function Rename
+```bash
+# Rename function (all usages)
+sd 'getUserData' 'fetchUserProfile' src/**/*.ts
+
+# More precise with word boundaries
+sd '\bgetUserData\b' 'fetchUserProfile' src/**/*.ts
+```
+
+### Import Path Update
+```bash
+# Update import paths
+sd "from '../utils'" "from '@/utils'" src/**/*.ts
+sd "require\('./config'\)" "require('@/config')" src/**/*.js
+```
+
+### String Quotes
+```bash
+# Single to double quotes
+sd "'" '"' file.json
+
+# Template literals
+sd '"\$\{(\w+)\}"' '`$${$1}`' src/**/*.ts
+```
+
+## Regex Patterns
+
+### Capture Groups
+```bash
+# Reorder parts
+sd '(\w+)@(\w+)\.com' '$2/$1' emails.txt
+# john@example.com → example/john
+
+# Wrap in function
+sd 'console\.log\((.*)\)' 'logger.info($1)' src/**/*.js
+```
+
+### Optional Matching
+```bash
+# Handle optional whitespace
+sd 'function\s*\(' 'const fn = (' src/**/*.js
+```
+
+### Multiline
+```bash
+# Replace across lines ((?s) lets . match newlines; sd reads the whole file)
+sd '(?s)start\n.*\nend' 'replacement' file.txt
+```
+
+## Batch Workflows
+
+### Find Then Replace
+```bash
+# 1. Find files with pattern
+rg -l 'oldPattern' src/
+
+# 2. Preview replacements
+rg 'oldPattern' -r 'newPattern' src/
+
+# 3. Apply to found files
+sd 'oldPattern' 'newPattern' $(rg -l 'oldPattern' src/)
+```
+
+### With fd
+```bash
+# Replace in specific file types
+fd -e ts -x sd 'old' 'new' {}
+
+# Replace in files matching name pattern
+fd 'config' -e json -x sd '"dev"' '"prod"' {}
+```
+
+### Dry Run Pattern
+```bash
+# Safe workflow: preview → verify → apply
+
+# Step 1: List affected files
+rg -l 'oldText' src/
+
+# Step 2: Show what will change
+rg 'oldText' -r 'newText' src/
+
+# Step 3: Apply (only after verification)
+sd 'oldText' 'newText' $(rg -l 'oldText' src/)
+
+# Step 4: Verify
+rg 'oldText' src/  # Should return nothing
+git diff           # Review changes
+```
+
+## Special Characters
+
+### Escaping
+```bash
+# Literal dot
+sd '\.' ',' file.txt
+
+# Literal brackets
+sd '\[' '(' file.txt
+
+# Literal dollar sign
+sd '\$' '€' file.txt
+
+# Literal backslash
+sd '\\' '/' paths.txt
+```
+
+### Common Escapes
+| Character | Escape |
+|-----------|--------|
+| `.` | `\.` |
+| `*` | `\*` |
+| `?` | `\?` |
+| `[` `]` | `\[` `\]` |
+| `(` `)` | `\(` `\)` |
+| `{` `}` | `\{` `\}` |
+| `$` | `\$` |
+| `^` | `\^` |
+| `\` | `\\` |
+
+## Real-World Examples
+
+### Update Package Version
+```bash
+sd '"version": "\d+\.\d+\.\d+"' '"version": "2.0.0"' package.json
+```
+
+### Fix File Extensions in Imports
+```bash
+sd "from '(\./[^']+)'" "from '\${1}.js'" src/**/*.ts
+```
+
+### Convert CSS Class Names
+```bash
+# Merge kebab-case class parts (sd cannot change case: "my-class" becomes "myclass")
+sd 'class="(\w+)-(\w+)"' 'className="$1$2"' src/**/*.jsx
+```
+
+### Update API Endpoints
+```bash
+sd '/api/v1/' '/api/v2/' src/**/*.ts
+sd 'api\.example\.com' 'api.newdomain.com' src/**/*.ts
+```
+
+### Remove Console Logs
+```bash
+# Remove simple console.log statements (fails on nested parentheses)
+sd 'console\.log\([^)]*\);?\n?' '' src/**/*.ts
+```
+
+### Add Prefix to IDs
+```bash
+sd 'id="(\w+)"' 'id="prefix-$1"' src/**/*.html
+```
+
+## Tips
+
+| Tip | Why |
+|-----|-----|
+| Always preview with `rg -r` first | Avoid accidental mass changes |
+| Use `git diff` after | Verify changes before commit |
+| Prefer specific patterns | `\bword\b` over `word` to avoid partial matches |
+| Quote patterns | Avoid shell interpretation |
+| Use fd to target files | More precise than `**/*.ext` |
+
+## Installation
+
+```bash
+# Cargo (Rust)
+cargo install sd
+
+# Homebrew (macOS)
+brew install sd
+
+# Windows (scoop)
+scoop install sd
+```

+ 369 - 0
skills/mcp-patterns/SKILL.md

@@ -0,0 +1,369 @@
+# MCP Patterns Skill
+
+Model Context Protocol (MCP) server patterns for building integrations with Claude Code.
+
+## Triggers
+
+mcp server, model context protocol, tool handler, mcp resource, mcp tool
+
+## Server Structure
+
+### Basic MCP Server (Python)
+```python
+from mcp.server import Server
+from mcp.server.stdio import stdio_server
+
+app = Server("my-server")
+
+@app.list_tools()
+async def list_tools():
+    return [
+        {
+            "name": "my_tool",
+            "description": "Does something useful",
+            "inputSchema": {
+                "type": "object",
+                "properties": {
+                    "query": {"type": "string", "description": "Search query"}
+                },
+                "required": ["query"]
+            }
+        }
+    ]
+
+@app.call_tool()
+async def call_tool(name: str, arguments: dict):
+    if name == "my_tool":
+        result = await do_something(arguments["query"])
+        return {"content": [{"type": "text", "text": result}]}
+    raise ValueError(f"Unknown tool: {name}")
+
+async def main():
+    async with stdio_server() as (read_stream, write_stream):
+        await app.run(read_stream, write_stream, app.create_initialization_options())
+
+if __name__ == "__main__":
+    import asyncio
+    asyncio.run(main())
+```
+
+### Project Layout
+```
+my-mcp-server/
+├── src/
+│   └── my_server/
+│       ├── __init__.py
+│       ├── server.py       # Main server logic
+│       ├── tools.py        # Tool handlers
+│       └── resources.py    # Resource handlers
+├── pyproject.toml
+└── README.md
+```
+
+## Tool Patterns
+
+### Tool with Validation
+```python
+import json
+
+from pydantic import BaseModel, Field
+
+class SearchInput(BaseModel):
+    query: str = Field(..., min_length=1, max_length=500)
+    limit: int = Field(default=10, ge=1, le=100)
+
+@app.call_tool()
+async def call_tool(name: str, arguments: dict):
+    if name == "search":
+        # Pydantic validates and parses
+        params = SearchInput(**arguments)
+        results = await search(params.query, params.limit)
+        return {"content": [{"type": "text", "text": json.dumps(results)}]}
+```
+
+### Tool with Error Handling
+```python
+import httpx
+
+@app.call_tool()
+async def call_tool(name: str, arguments: dict):
+    try:
+        if name == "fetch_data":
+            data = await fetch_data(arguments["url"])
+            return {"content": [{"type": "text", "text": data}]}
+    except httpx.HTTPStatusError as e:
+        return {
+            "content": [{"type": "text", "text": f"HTTP error: {e.response.status_code}"}],
+            "isError": True
+        }
+    except Exception as e:
+        return {
+            "content": [{"type": "text", "text": f"Error: {str(e)}"}],
+            "isError": True
+        }
+```
+
+### Multiple Tool Registration
+```python
+TOOLS = {
+    "list_items": {
+        "description": "List all items",
+        "schema": {"type": "object", "properties": {}},
+        "handler": handle_list_items
+    },
+    "get_item": {
+        "description": "Get specific item",
+        "schema": {
+            "type": "object",
+            "properties": {"id": {"type": "string"}},
+            "required": ["id"]
+        },
+        "handler": handle_get_item
+    },
+    "create_item": {
+        "description": "Create new item",
+        "schema": {
+            "type": "object",
+            "properties": {
+                "name": {"type": "string"},
+                "data": {"type": "object"}
+            },
+            "required": ["name"]
+        },
+        "handler": handle_create_item
+    }
+}
+
+@app.list_tools()
+async def list_tools():
+    return [
+        {"name": name, "description": t["description"], "inputSchema": t["schema"]}
+        for name, t in TOOLS.items()
+    ]
+
+@app.call_tool()
+async def call_tool(name: str, arguments: dict):
+    if name not in TOOLS:
+        raise ValueError(f"Unknown tool: {name}")
+    return await TOOLS[name]["handler"](arguments)
+```
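The registry dispatch above can be exercised without the MCP SDK at all; the handlers below are illustrative stand-ins, not real tool implementations:

```python
import asyncio

# Hypothetical handlers standing in for real tool implementations
async def handle_list_items(arguments: dict) -> dict:
    return {"content": [{"type": "text", "text": "item-1, item-2"}]}

async def handle_get_item(arguments: dict) -> dict:
    return {"content": [{"type": "text", "text": f"item {arguments['id']}"}]}

TOOLS = {
    "list_items": {"description": "List all items", "handler": handle_list_items},
    "get_item": {"description": "Get specific item", "handler": handle_get_item},
}

async def call_tool(name: str, arguments: dict) -> dict:
    # Same dispatch shape as the registry pattern above
    if name not in TOOLS:
        raise ValueError(f"Unknown tool: {name}")
    return await TOOLS[name]["handler"](arguments)

result = asyncio.run(call_tool("get_item", {"id": "42"}))
print(result["content"][0]["text"])  # → item 42
```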
+
+## Resource Patterns
+
+### Static Resource
+```python
+@app.list_resources()
+async def list_resources():
+    return [
+        {
+            "uri": "config://settings",
+            "name": "Application Settings",
+            "mimeType": "application/json"
+        }
+    ]
+
+@app.read_resource()
+async def read_resource(uri: str):
+    if uri == "config://settings":
+        return json.dumps({"theme": "dark", "lang": "en"})
+    raise ValueError(f"Unknown resource: {uri}")
+```
+
+### Dynamic Resources
+```python
+@app.list_resources()
+async def list_resources():
+    # List available resources dynamically
+    items = await get_all_items()
+    return [
+        {
+            "uri": f"item://{item.id}",
+            "name": item.name,
+            "mimeType": "application/json"
+        }
+        for item in items
+    ]
+
+@app.read_resource()
+async def read_resource(uri: str):
+    if uri.startswith("item://"):
+        item_id = uri.replace("item://", "")
+        item = await get_item(item_id)
+        return json.dumps(item.to_dict())
+    raise ValueError(f"Unknown resource: {uri}")
+```
+
+## Authentication Patterns
+
+### Environment Variables
+```python
+import os
+
+API_KEY = os.environ.get("MY_API_KEY")
+if not API_KEY:
+    raise ValueError("MY_API_KEY environment variable required")
+
+async def make_api_call(endpoint: str):
+    async with httpx.AsyncClient() as client:
+        response = await client.get(
+            f"https://api.example.com/{endpoint}",
+            headers={"Authorization": f"Bearer {API_KEY}"}
+        )
+        response.raise_for_status()
+        return response.json()
+```
+
+### OAuth Token Refresh
+```python
+from datetime import datetime, timedelta
+
+class TokenManager:
+    def __init__(self):
+        self.token = None
+        self.expires_at = None
+
+    async def get_token(self) -> str:
+        if self.token and self.expires_at > datetime.now():
+            return self.token
+
+        # Refresh token
+        async with httpx.AsyncClient() as client:
+            response = await client.post(
+                "https://auth.example.com/token",
+                data={"grant_type": "client_credentials", ...}
+            )
+            data = response.json()
+            self.token = data["access_token"]
+            self.expires_at = datetime.now() + timedelta(seconds=data["expires_in"] - 60)
+            return self.token
+
+token_manager = TokenManager()
+```
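The refresh logic can be verified without a network by injecting a fake fetcher; this is a test-only sketch, and the `fetch` parameter is an assumption added for testability:

```python
import asyncio
from datetime import datetime, timedelta

class TokenManager:
    def __init__(self, fetch):
        self.fetch = fetch  # async callable returning {"access_token", "expires_in"}
        self.token = None
        self.expires_at = None

    async def get_token(self) -> str:
        if self.token and self.expires_at > datetime.now():
            return self.token  # still valid: no refresh
        data = await self.fetch()
        self.token = data["access_token"]
        self.expires_at = datetime.now() + timedelta(seconds=data["expires_in"] - 60)
        return self.token

calls = 0

async def fake_fetch():
    global calls
    calls += 1
    return {"access_token": f"tok-{calls}", "expires_in": 3600}

async def demo():
    tm = TokenManager(fake_fetch)
    first = await tm.get_token()
    second = await tm.get_token()  # cached: fetcher not called again
    return first, second

first, second = asyncio.run(demo())
print(first, second, calls)  # tok-1 tok-1 1
```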
+
+## State Management
+
+### SQLite for Persistence
+```python
+import aiosqlite
+from pathlib import Path
+
+DB_PATH = Path.home() / ".my-mcp-server" / "state.db"
+
+async def init_db():
+    DB_PATH.parent.mkdir(parents=True, exist_ok=True)
+    async with aiosqlite.connect(DB_PATH) as db:
+        await db.execute("""
+            CREATE TABLE IF NOT EXISTS cache (
+                key TEXT PRIMARY KEY,
+                value TEXT,
+                expires_at TEXT
+            )
+        """)
+        await db.commit()
+
+async def get_cached(key: str) -> str | None:
+    async with aiosqlite.connect(DB_PATH) as db:
+        cursor = await db.execute(
+            "SELECT value FROM cache WHERE key = ? AND expires_at > datetime('now')",
+            (key,)
+        )
+        row = await cursor.fetchone()
+        return row[0] if row else None
+
+async def set_cached(key: str, value: str, ttl_seconds: int = 3600):
+    async with aiosqlite.connect(DB_PATH) as db:
+        await db.execute(
+            "INSERT OR REPLACE INTO cache (key, value, expires_at) VALUES (?, ?, datetime('now', '+' || ? || ' seconds'))",
+            (key, value, ttl_seconds)
+        )
+        await db.commit()
+```
+
+### In-Memory Cache
+```python
+from cachetools import TTLCache  # third-party: uv pip install cachetools
+
+# Simple TTL cache
+cache = TTLCache(maxsize=100, ttl=300)  # 5 minute TTL
+
+async def get_data(key: str):
+    if key in cache:
+        return cache[key]
+    data = await fetch_from_api(key)
+    cache[key] = data
+    return data
+```
+
+## Claude Desktop Configuration
+
+### claude_desktop_config.json
+```json
+{
+  "mcpServers": {
+    "my-server": {
+      "command": "python",
+      "args": ["-m", "my_server"],
+      "env": {
+        "MY_API_KEY": "your-key-here"
+      }
+    }
+  }
+}
+```
+
+### With uv (Recommended)
+```json
+{
+  "mcpServers": {
+    "my-server": {
+      "command": "uv",
+      "args": ["run", "--directory", "/path/to/my-server", "python", "-m", "my_server"],
+      "env": {
+        "MY_API_KEY": "your-key-here"
+      }
+    }
+  }
+}
+```
+
+## Testing Patterns
+
+### Manual Test Script
+```python
+# test_server.py: exercises the handler coroutines directly, without the stdio transport
+import asyncio
+from my_server.server import list_tools, call_tool
+
+async def test_tools():
+    tools = await list_tools()
+    print(f"Available tools: {[t['name'] for t in tools]}")
+
+    result = await call_tool("my_tool", {"query": "test"})
+    print(f"Result: {result}")
+
+if __name__ == "__main__":
+    asyncio.run(test_tools())
+```
+
+### pytest with Async
+```python
+import pytest
+from my_server.tools import handle_search
+
+@pytest.mark.asyncio
+async def test_search_returns_results():
+    result = await handle_search({"query": "test", "limit": 5})
+    assert "content" in result
+    assert len(result["content"]) > 0
+
+@pytest.mark.asyncio
+async def test_search_handles_empty():
+    result = await handle_search({"query": "xyznonexistent123"})
+    assert result["content"][0]["text"] == "No results found"
+```
+
+## Common Issues
+
+| Issue | Solution |
+|-------|----------|
+| Server not starting | Check `command` path, ensure dependencies installed |
+| Tool not appearing | Verify `list_tools()` returns valid schema |
+| Auth failures | Check env vars are set in config, not shell |
+| Timeout errors | Set explicit timeouts on httpx calls; avoid blocking work inside async handlers |
+| JSON parse errors | Ensure `call_tool` returns proper content structure |

+ 187 - 44
skills/python-env/SKILL.md

@@ -1,86 +1,229 @@
----
-name: python-env
-description: "Fast Python environment management with uv. 10-100x faster than pip for installs, venv creation, and dependency resolution. Triggers on: install Python package, create venv, pip install, setup Python project, manage dependencies, Python environment."
----
+# Python Environment Skill
 
-# Python Environment
+Fast Python environment management with uv (10-100x faster than pip).
 
-## Purpose
-Manage Python packages and virtual environments with extreme speed using uv (Rust-based, 10-100x faster than pip).
+## Triggers
 
-## Tools
+uv, venv, pip, pyproject, python environment, install package, dependencies
 
-| Tool | Command | Use For |
-|------|---------|---------|
-| uv | `uv pip install pkg` | Fast package installation |
-| uv | `uv venv` | Virtual environment creation |
-| uv | `uv pip compile` | Lock file generation |
+## Quick Commands
 
-## Usage Examples
+| Task | Command |
+|------|---------|
+| Create venv | `uv venv` |
+| Install package | `uv pip install requests` |
+| Install from requirements | `uv pip install -r requirements.txt` |
+| Run script | `uv run python script.py` |
+| Show installed | `uv pip list` |
 
-### Package Installation
+## Virtual Environment
 
 ```bash
-# Install package (10-100x faster than pip)
+# Create venv (instant)
+uv venv
+
+# Create with specific Python
+uv venv --python 3.11
+
+# Activate
+# Windows: .venv\Scripts\activate
+# Unix: source .venv/bin/activate
+
+# Or skip activation and use uv run
+uv run python script.py
+```
+
+## Package Installation
+
+```bash
+# Single package
 uv pip install requests
 
-# Install multiple packages
+# Multiple packages
 uv pip install flask sqlalchemy pytest
 
-# Install from requirements.txt
-uv pip install -r requirements.txt
-
-# Install with extras
+# With extras
 uv pip install "fastapi[all]"
 
-# Install specific version
+# Version constraints
 uv pip install "django>=4.0,<5.0"
+
+# From requirements
+uv pip install -r requirements.txt
+
+# Uninstall
+uv pip uninstall requests
 ```
 
-### Virtual Environments
+## pyproject.toml Configuration
+
+### Minimal Project
+```toml
+[project]
+name = "my-project"
+version = "0.1.0"
+requires-python = ">=3.10"
+dependencies = [
+    "httpx>=0.25",
+    "pydantic>=2.0",
+]
+
+[project.optional-dependencies]
+dev = [
+    "pytest>=7.0",
+    "ruff>=0.1",
+]
+```
 
-```bash
-# Create venv (fastest venv creation)
-uv venv
+### With Build System
+```toml
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "my-package"
+version = "0.1.0"
+requires-python = ">=3.10"
+dependencies = [
+    "httpx>=0.25",
+]
+
+[project.optional-dependencies]
+dev = ["pytest", "ruff", "mypy"]
+docs = ["mkdocs", "mkdocs-material"]
+
+[project.scripts]
+my-cli = "my_package.cli:main"
+```
 
-# Create with specific Python version
-uv venv --python 3.11
+### With Tool Configuration
+```toml
+[tool.ruff]
+line-length = 100
+target-version = "py310"
 
-# Activate (still uses standard activation)
-# Windows: .venv\Scripts\activate
-# Unix: source .venv/bin/activate
+[tool.ruff.lint]
+select = ["E", "F", "I", "UP"]
+
+[tool.pytest.ini_options]
+testpaths = ["tests"]
+asyncio_mode = "auto"
+
+[tool.mypy]
+python_version = "3.10"
+strict = true
 ```
 
-### Dependency Management
+## Dependency Management
 
+### Lock File Workflow
 ```bash
-# Generate lockfile from requirements.in
+# Create requirements.in with loose constraints
+echo "flask>=2.0" > requirements.in
+echo "sqlalchemy>=2.0" >> requirements.in
+
+# Generate locked requirements.txt
 uv pip compile requirements.in -o requirements.txt
 
-# Sync environment to lockfile
+# Install exact versions
 uv pip sync requirements.txt
 
-# Show installed packages
-uv pip list
+# Update locks
+uv pip compile requirements.in -o requirements.txt --upgrade
+```
 
-# Uninstall package
-uv pip uninstall requests
+### Dev Dependencies Pattern
+```bash
+# requirements.in (production)
+flask>=2.0
+sqlalchemy>=2.0
+
+# requirements-dev.in
+-r requirements.in
+pytest>=7.0
+ruff>=0.1
+
+# Compile both
+uv pip compile requirements.in -o requirements.txt
+uv pip compile requirements-dev.in -o requirements-dev.txt
+```
+
+## Workspace/Monorepo
+
+```toml
+# pyproject.toml (root)
+[tool.uv.workspace]
+members = ["packages/*"]
+
+# packages/core/pyproject.toml
+[project]
+name = "my-core"
+version = "0.1.0"
+
+# packages/api/pyproject.toml
+[project]
+name = "my-api"
+version = "0.1.0"
+dependencies = ["my-core"]
+```
+
+```bash
+# Install all workspace packages
+uv pip install -e packages/core -e packages/api
 ```
 
-### Run Commands
+## Running Scripts
 
 ```bash
-# Run script in project environment
+# Run with project's Python
 uv run python script.py
 
-# Run with specific Python
+# Run with specific Python version
 uv run --python 3.11 python script.py
+
+# Run module
+uv run python -m pytest
+
+# Run installed CLI
+uv run ruff check .
+```
+
+## Troubleshooting
+
+| Issue | Solution |
+|-------|----------|
+| "No Python found" | `uv python install 3.11` or install from python.org |
+| Wrong Python version | `uv venv --python 3.11` to force version |
+| Conflicting deps | Relax pins in requirements.in; uv's error output names the conflicting constraints |
+| Cache issues | `uv cache clean` |
+| SSL errors | `uv pip install --cert /path/to/cert pkg` |
+
+## Project Setup Checklist
+
+```bash
+# 1. Create project structure
+mkdir my-project && cd my-project
+mkdir src tests
+
+# 2. Create venv
+uv venv
+
+# 3. Create pyproject.toml (see templates above)
+
+# 4. Install dependencies
+uv pip install -e ".[dev]"
+
+# 5. Verify
+uv pip list
+uv run python -c "import my_package"
 ```
 
 ## When to Use
 
-- Installing Python packages (always prefer over pip)
+- **Always** use uv over pip for speed
 - Creating virtual environments
-- Setting up new Python projects
+- Installing packages
 - Managing dependencies
-- Syncing development environments
+- Running scripts in project context
+- Compiling lockfiles

+ 117 - 0
skills/rest-patterns/SKILL.md

@@ -0,0 +1,117 @@
+# REST Patterns Skill
+
+Quick reference for RESTful API design patterns, HTTP semantics, and status codes.
+
+## Triggers
+
+rest api, http methods, status codes, api design, endpoint design, rest patterns, api versioning
+
+## HTTP Methods
+
+| Method | Purpose | Idempotent | Cacheable |
+|--------|---------|------------|-----------|
+| **GET** | Retrieve resource(s) | Yes | Yes |
+| **POST** | Create new resource | No | No |
+| **PUT** | Replace entire resource | Yes | No |
+| **PATCH** | Partial update | Not guaranteed | No |
+| **DELETE** | Remove resource | Yes | No |
+| **HEAD** | GET headers only | Yes | Yes |
+| **OPTIONS** | Describe allowed methods (CORS preflight) | Yes | No |
+
+## Status Codes Quick Reference
+
+### Success (2xx)
+
+| Code | When to Use |
+|------|-------------|
+| **200 OK** | GET, PUT, PATCH, DELETE success |
+| **201 Created** | POST success (include `Location` header) |
+| **204 No Content** | Success with no response body |
+
+### Client Errors (4xx)
+
+| Code | When to Use |
+|------|-------------|
+| **400 Bad Request** | Invalid syntax, malformed JSON |
+| **401 Unauthorized** | Missing or invalid auth |
+| **403 Forbidden** | Authenticated but not authorized |
+| **404 Not Found** | Resource doesn't exist |
+| **405 Method Not Allowed** | HTTP method not supported |
+| **409 Conflict** | State conflict (duplicate, version mismatch) |
+| **422 Unprocessable Entity** | Validation errors (valid syntax, bad semantics) |
+| **429 Too Many Requests** | Rate limit exceeded |
+
+### Server Errors (5xx)
+
+| Code | When to Use |
+|------|-------------|
+| **500 Internal Server Error** | Generic server failure |
+| **502 Bad Gateway** | Upstream returned invalid response |
+| **503 Service Unavailable** | Temporarily unavailable |
+| **504 Gateway Timeout** | Upstream timeout |
+
+## Resource Design Patterns
+
+```
+# Collections (plural nouns)
+GET    /users              # List all
+POST   /users              # Create one
+GET    /users/{id}         # Get one
+PUT    /users/{id}         # Replace one
+PATCH  /users/{id}         # Update one
+DELETE /users/{id}         # Delete one
+
+# Nested resources (max 2-3 levels)
+GET    /users/{id}/orders           # User's orders
+GET    /users/{id}/orders/{orderId} # Specific order
+
+# Query parameters
+GET /users?role=admin&status=active     # Filtering
+GET /users?page=2&limit=20              # Pagination
+GET /users?sort=created_at&order=desc   # Sorting
+```
+
+## Error Response Format
+
+```json
+{
+  "error": {
+    "code": "VALIDATION_ERROR",
+    "message": "Invalid input data",
+    "details": [
+      {"field": "email", "message": "Invalid email format"}
+    ],
+    "request_id": "abc-123"
+  }
+}
+```
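A tiny helper that builds the envelope above keeps error responses consistent across endpoints; the `make_error` name is illustrative, not a standard:

```python
import json

def make_error(code: str, message: str, details=None, request_id=None) -> dict:
    # Build the standard error envelope shown above
    return {
        "error": {
            "code": code,
            "message": message,
            "details": details or [],
            "request_id": request_id,
        }
    }

body = make_error(
    "VALIDATION_ERROR",
    "Invalid input data",
    details=[{"field": "email", "message": "Invalid email format"}],
    request_id="abc-123",
)
print(json.dumps(body, indent=2))
```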
+
+## Versioning Strategies
+
+| Strategy | Example | Pros/Cons |
+|----------|---------|-----------|
+| **URI** | `/v1/users` | Clear, easy to implement, URL pollution |
+| **Header** | `Accept: application/vnd.api.v1+json` | Clean URLs, harder to test |
+| **Query** | `/users?version=1` | Easy to implement, less RESTful |
+
+## Security Checklist
+
+- Always HTTPS/TLS
+- OAuth 2.0 or JWT for auth
+- API keys for service-to-service
+- Validate all inputs
+- Rate limit per client
+- CORS headers configured
+- No sensitive data in URLs
+- Security headers (HSTS, CSP)
+
+## Common Mistakes
+
+| Mistake | Fix |
+|---------|-----|
+| Using verbs in URLs | `/getUsers` → `/users` |
+| Deep nesting | `/a/1/b/2/c/3/d` → flatten or use query params |
+| 200 for errors | Return appropriate 4xx/5xx |
+| POST for everything | Use proper HTTP methods |
+| Returning 500 for client errors | 4xx for client, 5xx for server |
+| No pagination on lists | Always paginate collections |

+ 222 - 0
skills/sql-patterns/SKILL.md

@@ -0,0 +1,222 @@
+# SQL Patterns Skill
+
+Quick reference for common SQL patterns, CTEs, window functions, and indexing strategies.
+
+## Triggers
+
+sql patterns, cte example, window functions, sql join, index strategy, pagination sql
+
+## CTE (Common Table Expressions)
+
+### Basic CTE
+```sql
+WITH active_users AS (
+    SELECT id, name, email
+    FROM users
+    WHERE status = 'active'
+)
+SELECT * FROM active_users WHERE created_at > '2024-01-01';
+```
+
+### Chained CTEs
+```sql
+WITH
+    active_users AS (
+        SELECT id, name FROM users WHERE status = 'active'
+    ),
+    user_orders AS (
+        SELECT user_id, COUNT(*) as order_count
+        FROM orders
+        GROUP BY user_id
+    )
+SELECT u.name, COALESCE(o.order_count, 0) as orders
+FROM active_users u
+LEFT JOIN user_orders o ON u.id = o.user_id;
+```
+
+### Recursive CTE (Hierarchies)
+```sql
+WITH RECURSIVE org_tree AS (
+    -- Base case: top-level managers
+    SELECT id, name, manager_id, 1 as level
+    FROM employees
+    WHERE manager_id IS NULL
+
+    UNION ALL
+
+    -- Recursive case: employees under managers
+    SELECT e.id, e.name, e.manager_id, t.level + 1
+    FROM employees e
+    JOIN org_tree t ON e.manager_id = t.id
+)
+SELECT * FROM org_tree ORDER BY level, name;
+```
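SQLite supports `WITH RECURSIVE`, so the hierarchy query can be tried end to end; the schema and rows below are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [(1, "Ada", None), (2, "Bob", 1), (3, "Cyd", 2)],  # Ada -> Bob -> Cyd
)

rows = conn.execute("""
    WITH RECURSIVE org_tree AS (
        SELECT id, name, manager_id, 1 AS level
        FROM employees WHERE manager_id IS NULL
        UNION ALL
        SELECT e.id, e.name, e.manager_id, t.level + 1
        FROM employees e JOIN org_tree t ON e.manager_id = t.id
    )
    SELECT name, level FROM org_tree ORDER BY level, name
""").fetchall()
print(rows)  # [('Ada', 1), ('Bob', 2), ('Cyd', 3)]
```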
+
+## Window Functions
+
+### ROW_NUMBER (Unique sequential)
+```sql
+SELECT
+    name,
+    department,
+    salary,
+    ROW_NUMBER() OVER (PARTITION BY department ORDER BY salary DESC) as rank
+FROM employees;
+```
+
+### RANK / DENSE_RANK (Ties allowed)
+```sql
+-- RANK: 1, 2, 2, 4 (skips after ties)
+-- DENSE_RANK: 1, 2, 2, 3 (no skip)
+SELECT
+    name,
+    score,
+    RANK() OVER (ORDER BY score DESC) as rank,
+    DENSE_RANK() OVER (ORDER BY score DESC) as dense_rank
+FROM contestants;
+```
+
+### LAG / LEAD (Previous/Next row)
+```sql
+SELECT
+    date,
+    revenue,
+    LAG(revenue, 1) OVER (ORDER BY date) as prev_day,
+    revenue - LAG(revenue, 1) OVER (ORDER BY date) as change
+FROM daily_sales;
+```
+
+### Running Total
+```sql
+SELECT
+    date,
+    amount,
+    SUM(amount) OVER (ORDER BY date) as running_total
+FROM transactions;
+```
+
+### Moving Average
+```sql
+SELECT
+    date,
+    value,
+    AVG(value) OVER (ORDER BY date ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) as moving_avg_7day
+FROM metrics;
+```
+
+## JOIN Reference
+
+| Type | Returns |
+|------|---------|
+| `INNER JOIN` | Only matching rows from both |
+| `LEFT JOIN` | All from left + matching from right |
+| `RIGHT JOIN` | All from right + matching from left |
+| `FULL JOIN` | All from both, NULL where no match |
+| `CROSS JOIN` | Cartesian product (all combinations) |
+
+### Self Join (Same table)
+```sql
+SELECT e.name as employee, m.name as manager
+FROM employees e
+LEFT JOIN employees m ON e.manager_id = m.id;
+```
+
+## Pagination Patterns
+
+### OFFSET/LIMIT (Simple, slow for large offsets)
+```sql
+SELECT * FROM products
+ORDER BY id
+LIMIT 20 OFFSET 40;  -- Page 3, 20 per page
+```
+
+### Keyset Pagination (Fast, scalable)
+```sql
+-- First page
+SELECT * FROM products ORDER BY id LIMIT 20;
+
+-- Next page (where last id was 42)
+SELECT * FROM products WHERE id > 42 ORDER BY id LIMIT 20;
+```
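The keyset approach is easy to demonstrate: each page seeks past the last seen id instead of skipping rows with OFFSET (table contents here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?)", [(i, f"p{i}") for i in range(1, 51)])

def page(after_id: int = 0, limit: int = 20):
    # Keyset pagination: seek past the last id instead of using OFFSET
    return conn.execute(
        "SELECT id FROM products WHERE id > ? ORDER BY id LIMIT ?", (after_id, limit)
    ).fetchall()

first = page()
second = page(after_id=first[-1][0])  # resume from the last id of the previous page
print(first[0][0], first[-1][0], second[0][0])  # 1 20 21
```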
+
+## Index Strategies
+
+| Index Type | Best For |
+|------------|----------|
+| **B-tree** | Default, range queries, ORDER BY |
+| **Hash** | Exact equality only |
+| **GIN** | Arrays, JSONB, full-text |
+| **GiST** | Geometric, full-text |
+| **Covering** | Include columns to avoid table lookup |
+
+### Covering Index
+```sql
+-- Query needs name but filters on email (INCLUDE syntax: PostgreSQL 11+ / SQL Server)
+CREATE INDEX idx_users_email_name ON users(email) INCLUDE (name);
+
+-- Now this is index-only:
+SELECT name FROM users WHERE email = 'x@y.com';
+```
+
+### Composite Index Order
+```sql
+-- Leftmost prefix rule: (a, b, c) supports:
+-- WHERE a = ?
+-- WHERE a = ? AND b = ?
+-- WHERE a = ? AND b = ? AND c = ?
+-- NOT: WHERE b = ? (a must be present)
+CREATE INDEX idx_orders ON orders(user_id, status, created_at);
+```
+
+## EXISTS vs IN
+
+```sql
+-- EXISTS: Often faster for large outer, small inner
+SELECT * FROM orders o
+WHERE EXISTS (SELECT 1 FROM users u WHERE u.id = o.user_id AND u.status = 'active');
+
+-- IN: Often faster for small list, can be optimized
+SELECT * FROM orders
+WHERE user_id IN (SELECT id FROM users WHERE status = 'active');
+```
+
+## Anti-Patterns to Avoid
+
+| Anti-Pattern | Problem | Fix |
+|--------------|---------|-----|
+| `SELECT *` | Over-fetches, breaks on schema change | List columns explicitly |
+| Function on indexed column | `WHERE YEAR(date) = 2024` prevents index | `WHERE date >= '2024-01-01'` |
+| `OR` in WHERE | May prevent index usage | Use `UNION` or rewrite |
+| N+1 queries | Loop with query per item | Single JOIN or batch |
+| `DISTINCT` to fix duplicates | Masks JOIN issues | Fix the JOIN logic |
+| `NOT IN` with NULLs | Returns wrong results | Use `NOT EXISTS` instead |
+
+## NULL Handling
+
+```sql
+-- NULL comparisons
+WHERE column IS NULL        -- Correct
+WHERE column IS NOT NULL    -- Correct
+WHERE column = NULL         -- WRONG (always false)
+
+-- COALESCE for defaults
+SELECT COALESCE(nickname, name, 'Anonymous') as display_name FROM users;
+
+-- NULLIF to create NULLs
+SELECT amount / NULLIF(count, 0) as average FROM stats;  -- Avoids divide by zero
+```
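These semantics can be checked directly: `= NULL` yields NULL (which behaves as false in a WHERE clause), while `IS NULL` yields true:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
eq, is_null, coalesced, ratio = conn.execute(
    "SELECT NULL = NULL, NULL IS NULL, COALESCE(NULL, 'Anonymous'), 10 / NULLIF(0, 0)"
).fetchone()
# NULL = NULL is NULL (None), NULL IS NULL is 1,
# COALESCE falls through to the default, NULLIF turns 0 into NULL so the division yields NULL
print(eq, is_null, coalesced, ratio)  # None 1 Anonymous None
```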
+
+## Batch Operations
+
+```sql
+-- Insert multiple rows
+INSERT INTO users (name, email) VALUES
+    ('Alice', 'a@x.com'),
+    ('Bob', 'b@x.com'),
+    ('Carol', 'c@x.com');
+
+-- Update in batches (UPDATE ... LIMIT is MySQL syntax)
+UPDATE orders SET status = 'archived'
+WHERE status = 'completed' AND updated_at < '2023-01-01'
+LIMIT 1000;
+
+-- PostgreSQL/SQLite equivalent: constrain via subquery
+UPDATE orders SET status = 'archived'
+WHERE id IN (
+    SELECT id FROM orders
+    WHERE status = 'completed' AND updated_at < '2023-01-01'
+    LIMIT 1000
+);
+```

+ 266 - 0
skills/sqlite-ops/SKILL.md

@@ -0,0 +1,266 @@
+# SQLite Operations Skill
+
+Patterns for SQLite databases in Python projects - state management, caching, and async operations.
+
+## Triggers
+
+sqlite, sqlite3, aiosqlite, local database, database schema, migration, wal mode
+
+## Schema Design Patterns
+
+### State/Config Storage
+```sql
+CREATE TABLE IF NOT EXISTS app_state (
+    key TEXT PRIMARY KEY,
+    value TEXT NOT NULL,
+    updated_at TEXT DEFAULT (datetime('now'))
+);
+
+-- Upsert pattern
+INSERT INTO app_state (key, value) VALUES ('last_sync', '2024-01-15')
+ON CONFLICT(key) DO UPDATE SET value = excluded.value, updated_at = datetime('now');
+```
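The upsert above (SQLite 3.24+) can be exercised from Python with the stdlib `sqlite3` module, using the same table and key names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE app_state (
        key TEXT PRIMARY KEY,
        value TEXT NOT NULL,
        updated_at TEXT DEFAULT (datetime('now'))
    )
""")

def set_state(key: str, value: str) -> None:
    conn.execute(
        "INSERT INTO app_state (key, value) VALUES (?, ?) "
        "ON CONFLICT(key) DO UPDATE SET value = excluded.value, updated_at = datetime('now')",
        (key, value),
    )

set_state("last_sync", "2024-01-15")
set_state("last_sync", "2024-02-01")  # second call updates in place, no duplicate row
rows = conn.execute("SELECT value FROM app_state WHERE key = 'last_sync'").fetchall()
print(rows)  # [('2024-02-01',)]
```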
+
+### Cache Table
+```sql
+CREATE TABLE IF NOT EXISTS cache (
+    key TEXT PRIMARY KEY,
+    value TEXT NOT NULL,
+    expires_at TEXT NOT NULL,
+    created_at TEXT DEFAULT (datetime('now'))
+);
+
+-- Create index for expiry cleanup
+CREATE INDEX IF NOT EXISTS idx_cache_expires ON cache(expires_at);
+
+-- Cleanup expired entries
+DELETE FROM cache WHERE expires_at < datetime('now');
+```
+
+### Event/Log Table
+```sql
+CREATE TABLE IF NOT EXISTS events (
+    id INTEGER PRIMARY KEY AUTOINCREMENT,
+    event_type TEXT NOT NULL,
+    payload TEXT,  -- JSON
+    created_at TEXT DEFAULT (datetime('now'))
+);
+
+CREATE INDEX IF NOT EXISTS idx_events_type_date ON events(event_type, created_at);
+```
+
+### Deduplication Table
+```sql
+CREATE TABLE IF NOT EXISTS seen_items (
+    hash TEXT PRIMARY KEY,
+    source TEXT NOT NULL,
+    first_seen TEXT DEFAULT (datetime('now'))
+);
+
+-- Check if seen
+SELECT 1 FROM seen_items WHERE hash = ? LIMIT 1;
+```
+
+## Python sqlite3 Patterns
+
+### Connection with Best Practices
+```python
+import sqlite3
+from pathlib import Path
+
+def get_connection(db_path: str | Path) -> sqlite3.Connection:
+    conn = sqlite3.connect(db_path, check_same_thread=False)
+    conn.row_factory = sqlite3.Row  # Dict-like access
+    conn.execute("PRAGMA journal_mode=WAL")  # Better concurrency
+    conn.execute("PRAGMA foreign_keys=ON")   # Enforce FK constraints
+    return conn
+```
+
+### Context Manager Pattern
+```python
+from contextlib import contextmanager
+
+@contextmanager
+def db_transaction(conn: sqlite3.Connection):
+    """Auto-commit or rollback on error."""
+    try:
+        yield conn
+        conn.commit()
+    except Exception:
+        conn.rollback()
+        raise
+```
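Usage sketch: the rollback path is easy to verify by raising inside the block; the insert never lands:

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def db_transaction(conn: sqlite3.Connection):
    # Commit on success, roll back on any exception (same pattern as above)
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.commit()

try:
    with db_transaction(conn) as db:
        db.execute("INSERT INTO items (name) VALUES ('will be rolled back')")
        raise RuntimeError("boom")
except RuntimeError:
    pass

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 0: the insert was rolled back
```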
+
+### Batch Insert
+```python
+import json
+
+def batch_insert(conn, items: list[dict]):
+    """Efficient bulk insert."""
+    conn.executemany(
+        "INSERT OR IGNORE INTO items (id, name, data) VALUES (?, ?, ?)",
+        [(i["id"], i["name"], json.dumps(i["data"])) for i in items]
+    )
+    conn.commit()
+```
+
+## Python aiosqlite Patterns
+
+### Async Connection
+```python
+import aiosqlite
+
+async def get_async_connection(db_path: str) -> aiosqlite.Connection:
+    conn = await aiosqlite.connect(db_path)
+    conn.row_factory = aiosqlite.Row
+    await conn.execute("PRAGMA journal_mode=WAL")
+    await conn.execute("PRAGMA foreign_keys=ON")
+    return conn
+```
+
+### Async Context Manager
+```python
+async def query_items(db_path: str, status: str) -> list[dict]:
+    async with aiosqlite.connect(db_path) as db:
+        db.row_factory = aiosqlite.Row
+        async with db.execute(
+            "SELECT * FROM items WHERE status = ?", (status,)
+        ) as cursor:
+            rows = await cursor.fetchall()
+            return [dict(row) for row in rows]
+```
+
+### Async Batch Operations
+```python
+async def batch_update_status(db_path: str, ids: list[int], status: str):
+    async with aiosqlite.connect(db_path) as db:
+        await db.executemany(
+            "UPDATE items SET status = ? WHERE id = ?",
+            [(status, id) for id in ids]
+        )
+        await db.commit()
+```
+
+## WAL Mode (Write-Ahead Logging)
+
+**Enable for concurrent read/write:**
+```python
+conn.execute("PRAGMA journal_mode=WAL")
+```
+
+| Mode | Reads | Writes | Best For |
+|------|-------|--------|----------|
+| DELETE (default) | Blocked during write | Single | Simple scripts |
+| WAL | Concurrent | Single | Web apps, MCP servers |
+
+**Checkpoint WAL periodically:**
+```python
+conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")
+```
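A quick sanity check that WAL actually took effect — the `PRAGMA` returns the journal mode now in effect, and in-memory databases always report `memory` regardless:

```python
import os
import sqlite3
import tempfile

# WAL requires a file-backed database.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# The PRAGMA returns the active journal mode as a string.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal

# :memory: databases ignore the request and stay in "memory" mode.
mem = sqlite3.connect(":memory:")
mem_mode = mem.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mem_mode)  # memory
```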
+
+## Migration Pattern
+
+```python
+MIGRATIONS = [
+    # Version 1
+    """
+    CREATE TABLE IF NOT EXISTS items (
+        id INTEGER PRIMARY KEY,
+        name TEXT NOT NULL,
+        created_at TEXT DEFAULT (datetime('now'))
+    );
+    """,
+    # Version 2 - add status column
+    """
+    ALTER TABLE items ADD COLUMN status TEXT DEFAULT 'active';
+    CREATE INDEX IF NOT EXISTS idx_items_status ON items(status);
+    """,
+]
+
+def migrate(conn: sqlite3.Connection):
+    """Apply pending migrations."""
+    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
+
+    result = conn.execute("SELECT version FROM schema_version").fetchone()
+    current = result[0] if result else 0
+
+    for i, migration in enumerate(MIGRATIONS[current:], start=current):
+        conn.executescript(migration)
+        conn.execute("DELETE FROM schema_version")
+        conn.execute("INSERT INTO schema_version VALUES (?)", (i + 1,))
+        conn.commit()
+```
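A minimal end-to-end run of this pattern against an in-memory database (`MIGRATIONS` and `migrate` repeated here so the snippet is self-contained). A second call is a no-op, since `MIGRATIONS[current:]` is empty once everything is applied:

```python
import sqlite3

MIGRATIONS = [
    # Version 1
    """
    CREATE TABLE IF NOT EXISTS items (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        created_at TEXT DEFAULT (datetime('now'))
    );
    """,
    # Version 2 - add status column
    """
    ALTER TABLE items ADD COLUMN status TEXT DEFAULT 'active';
    CREATE INDEX IF NOT EXISTS idx_items_status ON items(status);
    """,
]

def migrate(conn: sqlite3.Connection):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    result = conn.execute("SELECT version FROM schema_version").fetchone()
    current = result[0] if result else 0
    for i, migration in enumerate(MIGRATIONS[current:], start=current):
        conn.executescript(migration)
        conn.execute("DELETE FROM schema_version")  # single-row version table
        conn.execute("INSERT INTO schema_version VALUES (?)", (i + 1,))
        conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # idempotent: nothing left to apply

version = conn.execute("SELECT version FROM schema_version").fetchone()[0]
conn.execute("INSERT INTO items (name) VALUES ('widget')")
status = conn.execute("SELECT status FROM items").fetchone()[0]
print(version, status)  # 2 active
```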
+
+## Query Optimization
+
+### Use EXPLAIN QUERY PLAN
+```python
+plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM items WHERE status = ?", ("active",)).fetchall()
+for row in plan:
+    print(row)
+# Look for "SCAN" (bad) vs "SEARCH" or "USING INDEX" (good)
+```
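To see the difference concretely, the same query flips from a full scan to an indexed search once the index exists (the last column of each plan row is the human-readable detail; exact wording varies slightly between SQLite versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, status TEXT)")

query = "EXPLAIN QUERY PLAN SELECT * FROM items WHERE status = ?"

# Without an index: full table scan.
before = conn.execute(query, ("active",)).fetchall()[0][-1]
print(before)  # contains "SCAN" -> full table scan

conn.execute("CREATE INDEX idx_items_status ON items(status)")

# With the index: indexed search.
after = conn.execute(query, ("active",)).fetchall()[0][-1]
print(after)  # contains "INDEX" -> index is used
```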
+
+### Common Index Patterns
+```sql
+-- Single column (equality + range)
+CREATE INDEX idx_items_status ON items(status);
+
+-- Composite (filter + sort)
+CREATE INDEX idx_items_status_date ON items(status, created_at);
+
+-- Covering index (SQLite has no INCLUDE clause; put the extra
+-- columns in the index key so the query needs no table lookup)
+CREATE INDEX idx_items_status_covering ON items(status, name, created_at);
+```
+
+## JSON in SQLite
+
+```sql
+-- Store JSON
+INSERT INTO events (payload) VALUES ('{"type": "click", "x": 100}');
+
+-- Query JSON (SQLite 3.38+)
+SELECT json_extract(payload, '$.type') as event_type FROM events;
+
+-- Filter by JSON value
+SELECT * FROM events WHERE json_extract(payload, '$.type') = 'click';
+```
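The same round-trip from Python pairs `json.dumps` on write with `json_extract` on read (requires SQLite's JSON functions, which are built into the SQLite bundled with recent CPython):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

# Serialize on the way in...
conn.execute(
    "INSERT INTO events (payload) VALUES (?)",
    (json.dumps({"type": "click", "x": 100}),),
)

# ...and let SQLite pick fields out on the way back.
event_type = conn.execute(
    "SELECT json_extract(payload, '$.type') FROM events"
).fetchone()[0]
print(event_type)  # click

# json_extract also works in WHERE clauses.
matches = conn.execute(
    "SELECT * FROM events WHERE json_extract(payload, '$.x') >= ?", (50,)
).fetchall()
print(len(matches))  # 1
```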
+
+## CLI Quick Reference
+
+```bash
+# Open database
+sqlite3 mydb.sqlite
+
+# Show tables
+.tables
+
+# Show schema
+.schema items
+
+# Export to CSV
+.headers on
+.mode csv
+.output items.csv
+SELECT * FROM items;
+.output stdout
+
+# Import CSV
+.mode csv
+.import items.csv items
+
+# Run SQL file
+.read schema.sql
+
+# Vacuum (reclaim space)
+VACUUM;
+```
+
+## Common Gotchas
+
+| Issue | Solution |
+|-------|----------|
+| "database is locked" | Use WAL mode, or ensure single writer |
+| Slow queries | Add indexes, check EXPLAIN QUERY PLAN |
+| Memory issues with large results | Use `fetchmany(1000)` in batches |
+| Thread safety | One connection per thread (default), or share one with `check_same_thread=False` plus your own locking |
+| Foreign key not enforced | Run `PRAGMA foreign_keys=ON` after connect |
+| datetime storage | Store as TEXT in ISO format, use `datetime()` function |
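For the large-result-set gotcha above, a generator over `fetchmany` keeps memory flat regardless of row count — a sketch, with the batch size as a tunable parameter:

```python
import sqlite3
from typing import Iterator

def iter_rows(
    conn: sqlite3.Connection, query: str, params=(), batch_size: int = 1000
) -> Iterator:
    """Stream rows in fixed-size batches instead of fetchall()."""
    cursor = conn.execute(query, params)
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            break
        yield from rows

# Demo against a small throwaway table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO items VALUES (?)", [(i,) for i in range(2500)])

count = sum(1 for _ in iter_rows(conn, "SELECT * FROM items", batch_size=1000))
print(count)  # 2500
```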

+ 217 - 0
skills/tailwind-patterns/SKILL.md

@@ -0,0 +1,217 @@
+# Tailwind Patterns Skill
+
+Quick reference for Tailwind CSS utility patterns, responsive design, and configuration.
+
+## Triggers
+
+tailwind, utility classes, responsive design, tailwind config, dark mode css, tw classes
+
+## Responsive Breakpoints
+
+| Prefix | Min Width | CSS |
+|--------|-----------|-----|
+| `sm:` | 640px | `@media (min-width: 640px)` |
+| `md:` | 768px | `@media (min-width: 768px)` |
+| `lg:` | 1024px | `@media (min-width: 1024px)` |
+| `xl:` | 1280px | `@media (min-width: 1280px)` |
+| `2xl:` | 1536px | `@media (min-width: 1536px)` |
+
+**Mobile-first:** No prefix = mobile, add prefix for larger screens.
+
+```html
+<div class="w-full md:w-1/2 lg:w-1/3">
+  <!-- Full width on mobile, half on tablet, third on desktop -->
+</div>
+```
+
+## Common Layout Patterns
+
+### Centered Container
+```html
+<div class="container mx-auto px-4">
+  <!-- Centered with padding -->
+</div>
+```
+
+### Flexbox Row
+```html
+<div class="flex items-center justify-between gap-4">
+  <div>Left</div>
+  <div>Right</div>
+</div>
+```
+
+### Grid Layout
+```html
+<div class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-6">
+  <div>Card 1</div>
+  <div>Card 2</div>
+  <div>Card 3</div>
+</div>
+```
+
+### Stack (Vertical)
+```html
+<div class="flex flex-col gap-4">
+  <div>Item 1</div>
+  <div>Item 2</div>
+</div>
+```
+
+## Common Component Patterns
+
+### Card
+```html
+<div class="bg-white rounded-lg shadow-md p-6">
+  <h3 class="text-lg font-semibold mb-2">Title</h3>
+  <p class="text-gray-600">Content</p>
+</div>
+```
+
+### Button Variants
+```html
+<!-- Primary -->
+<button class="bg-blue-600 text-white px-4 py-2 rounded-lg hover:bg-blue-700 transition-colors">
+  Primary
+</button>
+
+<!-- Secondary -->
+<button class="bg-gray-200 text-gray-800 px-4 py-2 rounded-lg hover:bg-gray-300 transition-colors">
+  Secondary
+</button>
+
+<!-- Outline -->
+<button class="border border-blue-600 text-blue-600 px-4 py-2 rounded-lg hover:bg-blue-50 transition-colors">
+  Outline
+</button>
+```
+
+### Form Input
+```html
+<input
+  type="text"
+  class="w-full px-3 py-2 border border-gray-300 rounded-lg focus:outline-none focus:ring-2 focus:ring-blue-500 focus:border-transparent"
+  placeholder="Enter text"
+/>
+```
+
+## Dark Mode
+
+### Class Strategy (Recommended)
+```js
+// tailwind.config.js
+module.exports = {
+  darkMode: 'class',
+  // ...
+}
+```
+
+```html
+<!-- Add 'dark' class to html or parent -->
+<div class="bg-white dark:bg-gray-900 text-gray-900 dark:text-white">
+  Content adapts to dark mode
+</div>
+```
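With the class strategy you own the toggle. The usual bootstrap (adapted from the Tailwind docs; the `theme` localStorage key is a convention, not required by Tailwind) reads the saved choice and falls back to the OS preference:

```html
<script>
  // Run early (e.g. inline in <head>) to avoid a flash of the wrong theme.
  if (
    localStorage.theme === 'dark' ||
    (!('theme' in localStorage) &&
      window.matchMedia('(prefers-color-scheme: dark)').matches)
  ) {
    document.documentElement.classList.add('dark');
  } else {
    document.documentElement.classList.remove('dark');
  }
</script>
```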
+
+### Media Strategy
+```js
+// tailwind.config.js
+module.exports = {
+  darkMode: 'media', // Uses prefers-color-scheme
+  // ...
+}
+```
+
+## Minimal Config Template
+
+```js
+// tailwind.config.js
+module.exports = {
+  content: [
+    './src/**/*.{js,ts,jsx,tsx,html}',
+  ],
+  darkMode: 'class',
+  theme: {
+    extend: {
+      colors: {
+        brand: {
+          50: '#f0f9ff',
+          500: '#3b82f6',
+          900: '#1e3a8a',
+        },
+      },
+      fontFamily: {
+        sans: ['Inter', 'sans-serif'],
+      },
+    },
+  },
+  plugins: [],
+}
+```
+
+## Spacing Scale Reference
+
+| Class | Size |
+|-------|------|
+| `p-0` | 0px |
+| `p-1` | 4px (0.25rem) |
+| `p-2` | 8px (0.5rem) |
+| `p-4` | 16px (1rem) |
+| `p-6` | 24px (1.5rem) |
+| `p-8` | 32px (2rem) |
+| `p-12` | 48px (3rem) |
+| `p-16` | 64px (4rem) |
+
+Same scale applies to: `m-`, `gap-`, `w-`, `h-`, `space-x-`, `space-y-`
+
+## Arbitrary Values
+
+When the scale doesn't have what you need:
+
+```html
+<div class="w-[137px] h-[calc(100vh-64px)] top-[17px]">
+  <!-- Exact values when needed -->
+</div>
+```
+
+## State Modifiers
+
+| Modifier | Triggers On |
+|----------|-------------|
+| `hover:` | Mouse hover |
+| `focus:` | Element focused |
+| `active:` | Being clicked |
+| `disabled:` | Disabled state |
+| `group-hover:` | Parent hovered |
+| `first:` | First child |
+| `last:` | Last child |
+| `odd:` | Odd children |
+| `even:` | Even children |
+
+```html
+<button class="bg-blue-500 hover:bg-blue-600 active:bg-blue-700 disabled:opacity-50">
+  Button
+</button>
+```
+
+## Performance Tips
+
+1. **Content configuration** - Ensure all template paths are in `content` array
+2. **Avoid @apply overuse** - Prefer utility classes directly
+3. **Use CSS variables** for dynamic values that change at runtime
+4. **Purge in production** - Tailwind does this automatically via `content`
+
+## Class Organization
+
+Recommended order for readability:
+1. Layout (flex, grid, position)
+2. Box model (w, h, p, m)
+3. Typography (text, font)
+4. Visual (bg, border, shadow)
+5. Interactive (hover, focus)
+
+```html
+<div class="flex items-center | w-full p-4 | text-lg font-medium | bg-white border rounded-lg | hover:shadow-md">
+  <!-- Pipes are visual separators for this example only; browsers treat them as unknown classes, so omit them in real markup -->
+</div>
+```

+ 181 - 54
skills/tool-discovery/SKILL.md

@@ -1,78 +1,205 @@
+# Tool Discovery Skill
+
+Recommend the right agents and skills for any task. Covers both heavyweight agents (Task tool) and lightweight skills (Skill tool).
+
+## Triggers
+
+which agent, which skill, what tool should I use, help me choose, recommend agent, recommend skill, find the right tool, what's available
+
+## Decision Flowchart
+
+```
+Is this a reference/lookup task?
+├── YES → Use a SKILL (lightweight, auto-injects)
+│         Examples: patterns, syntax, CLI commands
+│
+└── NO → Does it require reasoning/decisions?
+         ├── YES → Use an AGENT (heavyweight, spawns subagent)
+         │         Examples: architecture, optimization, debugging
+         │
+         └── MAYBE → Check both lists below
+```
+
+**Rule of thumb:**
+- **Skills** = Quick reference, patterns, commands (50-200 lines)
+- **Agents** = Deep expertise, autonomous decisions (200-1600 lines)
+
 ---
-name: tool-discovery
-description: Discover the right library or tool for any task. Maps common needs to battle-tested solutions.
-triggers:
-  - library for
-  - tool for
-  - how to parse
-  - how to process
-  - PDF, Excel, images, audio, video
-  - OCR, crypto, chess, ML
-  - parsing, scraping, database
-  - scientific computing, bioinformatics
-  - network analysis, data extraction
+
+## Skills Reference
+
+### Pattern Skills (Reference Tables)
+
+| Skill | Triggers | Use When |
+|-------|----------|----------|
+| **rest-patterns** | rest api, http methods, status codes | HTTP method semantics, status code lookup |
+| **tailwind-patterns** | tailwind, utility classes, tw | Tailwind classes, responsive breakpoints |
+| **sql-patterns** | sql patterns, cte, window functions | CTE examples, JOIN reference, window functions |
+| **sqlite-ops** | sqlite, aiosqlite, local database | SQLite schema patterns, Python sqlite3/aiosqlite |
+| **mcp-patterns** | mcp server, model context protocol | MCP server structure, tool handlers |
+
+### CLI Tool Skills
+
+| Skill | Triggers | Use When |
+|-------|----------|----------|
+| **file-search** | fd, ripgrep, rg, fzf | Finding files, searching code, interactive selection |
+| **find-replace** | sd, batch replace | Modern find-and-replace (sd over sed) |
+| **code-stats** | tokei, difft, line counts | Codebase statistics, semantic diffs |
+| **data-processing** | jq, yq, json, yaml | JSON/YAML processing and transformation |
+| **structural-search** | ast-grep, sg, ast pattern | Search by AST structure, not text |
+
+### Workflow Skills
+
+| Skill | Triggers | Use When |
+|-------|----------|----------|
+| **git-workflow** | lazygit, gh, delta, pr | Git operations, GitHub PRs, staging |
+| **python-env** | uv, venv, pip, pyproject | Python environment setup with uv |
+| **task-runner** | just, justfile, run tests | Running project tasks via justfile |
+| **safe-file-reader** | bat, eza, view file | Viewing files without permission prompts |
+| **project-docs** | AGENTS.md, conventions | Finding and reading project documentation |
+| **project-planner** | plan, sync plan, track | Project planning with /plan command |
+
 ---
 
-# Tool Discovery
+## Agents Reference
 
-Find the right library or tool for your task instead of implementing from scratch.
+### Language Experts
 
-## When to Use
+| Agent | Use When |
+|-------|----------|
+| **python-expert** | Advanced Python, async, testing, optimization |
+| **javascript-expert** | Modern JS, async patterns, V8 optimization |
+| **typescript-expert** | Type system, generics, complex types |
+| **bash-expert** | Shell scripting, defensive programming |
 
-This skill activates when you need to:
-- Parse file formats (PDF, Excel, images, audio, video)
-- Process data (JSON, YAML, CSV, XML)
-- Implement domain algorithms (crypto, chess, ML)
-- Extract information (OCR, web scraping, data mining)
-- Work with specialized domains (scientific, bioinformatics, network)
+### Framework Experts
 
-## How to Use
+| Agent | Use When |
+|-------|----------|
+| **react-expert** | React hooks, Server Components, state management |
+| **vue-expert** | Vue 3, Composition API, Pinia |
+| **laravel-expert** | Laravel, Eloquent, PHP testing |
+| **astro-expert** | Astro SSR/SSG, Cloudflare deployment |
 
-1. **Check reference.md** for authoritative recommendations
-2. **Verify availability**: `which <tool>` or `pip list | grep <keyword>`
-3. **Use the recommended tool** instead of manual implementation
+### Infrastructure Experts
 
-## Why This Matters
+| Agent | Use When |
+|-------|----------|
+| **postgres-expert** | PostgreSQL optimization, execution plans |
+| **sql-expert** | Complex queries, query optimization |
+| **wrangler-expert** | Cloudflare Workers deployment |
+| **aws-fargate-ecs-expert** | ECS/Fargate container orchestration |
+| **cloudflare-expert** | Workers, Pages, DNS configuration |
 
-- Your manual implementation WILL have bugs
-- Battle-tested libraries are proven by thousands of users
-- If you can describe what you need in 2-3 words, a tool almost certainly exists
+### Specialized
 
-## Example Queries
+| Agent | Use When |
+|-------|----------|
+| **firecrawl-expert** | Web scraping, crawling, anti-bot bypass |
+| **payloadcms-expert** | Payload CMS architecture, configuration |
+| **craftcms-expert** | Craft CMS, Twig templates |
+| **cypress-expert** | E2E testing, component tests |
+| **project-organizer** | Restructuring project directories |
 
-| Need | Say | Get |
-|------|-----|-----|
-| Read PDF | "PDF parser" | PyMuPDF, pdfplumber |
-| Extract text from images | "image OCR" | tesseract, paddleocr |
-| Parse Excel | "Excel reader" | openpyxl, pandas |
-| Process audio | "audio processing" | librosa, pydub |
-| Chess engine | "chess library" | python-chess |
-| Scientific computing | "matrix operations" | numpy, scipy |
+### Built-in Agents (Task tool)
 
-## Protocol
+| Agent | Use When |
+|-------|----------|
+| **Explore** | Quick codebase exploration, "where is X" |
+| **Plan** | Design implementation strategy |
+| **general-purpose** | Multi-step tasks when unsure |
+| **claude-code-guide** | Questions about Claude Code features |
 
+---
+
+## Matching By Context
+
+### By File Extension
+
+| Files | Skill | Agent |
+|-------|-------|-------|
+| `.py` | python-env | python-expert |
+| `.ts`, `.js` | — | typescript-expert, javascript-expert |
+| `.sql` | sql-patterns | postgres-expert, sql-expert |
+| `.sh` | — | bash-expert |
+| `.astro` | tailwind-patterns | astro-expert |
+| `.json` | data-processing | — |
+| `.yaml` | data-processing | — |
+
+### By Task Type
+
+| Task | Try Skill First | Then Agent |
+|------|-----------------|------------|
+| "How do I write a CTE?" | sql-patterns | sql-expert |
+| "Optimize this query" | — | postgres-expert |
+| "Find files named X" | file-search | Explore |
+| "Restructure this project" | — | project-organizer |
+| "Scrape this website" | — | firecrawl-expert |
+| "What HTTP status for X?" | rest-patterns | — |
+| "Set up Python project" | python-env | python-expert |
+| "Build MCP server" | mcp-patterns | — |
+
+### By Keywords
+
+| Keywords | Likely Skill | Likely Agent |
+|----------|--------------|--------------|
+| "pattern", "example", "syntax" | Check skills first | — |
+| "optimize", "debug", "fix" | — | Check agents |
+| "reference", "lookup", "how to" | Check skills first | — |
+| "architecture", "design", "plan" | — | Check agents or Plan |
+
+---
+
+## How to Launch
+
+### Skills (via Skill tool)
+```
+Skill tool → skill: "file-search"
+```
+Skills auto-inject into current context. No subagent spawned.
+
+### Agents (via Task tool)
 ```
-1. User asks about library/tool for X
-2. INVOKE this skill
-3. READ reference.md for recommendations
-4. VERIFY tool is available (`which`, `pip list`)
-5. USE recommended tool
-6. NEVER implement from scratch what exists
+Task tool → subagent_type: "python-expert"
+         → prompt: "Your task description"
 ```
+Agents spawn a subagent session with their full context.
+
+---
 
-## Anti-Patterns
+## Recommendations Workflow
 
 ```
-BAD:  "I'll parse this PDF by reading the binary"
-GOOD: "Let me check reference.md for PDF tools"
+User: "Which tool should I use for X?"
+
+1. Parse the request:
+   - Is it reference/lookup? → Skill
+   - Does it need reasoning? → Agent
+   - Unclear? → Check both lists
 
-BAD:  "I'll analyze this image pixel by pixel"
-GOOD: "Let me use PIL/OpenCV for image processing"
+2. Match to available tools:
+   - Check file types in project
+   - Check config files (package.json, pyproject.toml, etc.)
+   - Consider task complexity
 
-BAD:  "I'll write a JSON parser"
-GOOD: "I'll use the built-in json module"
+3. Output format:
+   RECOMMENDED: [skill/agent-name]
+   TYPE: Skill | Agent
+   WHY: [1 sentence rationale]
+
+   LAUNCH: Skill tool with "name" | Task tool with subagent_type="name"
+
+4. If multiple apply:
+   PRIMARY: [name] - [reason]
+   SECONDARY: [name] - [reason]
 ```
 
-## See Also
+---
+
+## Tips
 
-- `reference.md` - Full library reference by category
+- **Skills are cheaper** - Use for reference lookups, patterns, CLI commands
+- **Agents are powerful** - Use for decisions, optimization, debugging
+- **Don't over-recommend** - Maximum 2-3 tools per task
+- **Parallel execution** - Launch independent agents in parallel via Task tool
+- **Check availability** - Run `/agents` or check this skill for current list