
refactor: Extract pulse to standalone repo

Move pulse command to github.com/0xDarkMatter/pulse

Pulse generates persistent state files and news digests that don't
belong in a shared plugin repo. As a standalone repo it can track
its own state and evolve independently.

Removed: pulse/, news/ directories
Updated: commands/pulse.md → deprecation notice

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
0xDarkMatter · 3 months ago · commit 469e3dd68d

+ 1 - 1
README.md

@@ -102,7 +102,7 @@ Then symlink or copy to your Claude directories:
 | [spawn](commands/spawn.md) | Generate expert agents with PhD-level patterns and code examples. |
 | [conclave](commands/conclave.md) | **[DEPRECATED]** Use [Conclave CLI](https://github.com/0xDarkMatter/conclave) instead. |
 | [atomise](commands/atomise.md) | Atom of Thoughts reasoning - decompose problems into atomic units with confidence tracking and backtracking. |
-| [pulse](commands/pulse.md) | Generate Claude Code ecosystem news digest from blogs, repos, and community sources. |
+| [pulse](commands/pulse.md) | **[MOVED]** See [0xDarkMatter/pulse](https://github.com/0xDarkMatter/pulse). |
 | [setperms](commands/setperms.md) | Set tool permissions and CLI preferences. |
 | [archive](commands/archive.md) | Archive completed plans and session state. |
 

+ 16 - 381
commands/pulse.md

@@ -1,398 +1,33 @@
 ---
-description: "Generate Claude Code ecosystem news digest. Fetches blogs, repos, and community sources via Firecrawl. Output to news/{date}_pulse.md."
+description: "**[MOVED]** Claude Code ecosystem news digest. Now at github.com/0xDarkMatter/pulse"
 ---
 
-# Pulse - Claude Code News Feed
+# Pulse
 
-Fetch and summarize the latest developments in the Claude Code ecosystem.
+> **This command has been moved to a standalone repository.**
 
-## What This Command Does
+## New Location
 
-1. **Fetches** content from blogs, official repos, and community sources
-2. **Deduplicates** against previously seen URLs (stored in `news/state.json`)
-3. **Summarizes** each item with an engaging precis + relevance assessment
-4. **Writes** digest to `news/{YYYY-MM-DD}_pulse.md`
-5. **Updates** state to prevent duplicate entries in future runs
+**Repository:** [github.com/0xDarkMatter/pulse](https://github.com/0xDarkMatter/pulse)
 
-## Arguments
-
-- `--force` - Regenerate digest even if today's already exists
-- `--days N` - Look back N days instead of 1 (default: 1)
-- `--dry-run` - Show sources that would be fetched without actually fetching
-
-## Sources
-
-### Official (Priority: Critical)
-
-```json
-[
-  {"name": "Anthropic Engineering", "url": "https://www.anthropic.com/engineering", "type": "blog"},
-  {"name": "Claude Blog", "url": "https://claude.com/blog", "type": "blog"},
-  {"name": "Claude Code Docs", "url": "https://code.claude.com", "type": "docs"},
-  {"name": "anthropics/claude-code", "url": "https://github.com/anthropics/claude-code", "type": "repo"},
-  {"name": "anthropics/skills", "url": "https://github.com/anthropics/skills", "type": "repo"},
-  {"name": "anthropics/claude-code-action", "url": "https://github.com/anthropics/claude-code-action", "type": "repo"},
-  {"name": "anthropics/claude-agent-sdk-demos", "url": "https://github.com/anthropics/claude-agent-sdk-demos", "type": "repo"}
-]
-```
-
-### Community Blogs (Priority: High)
-
-```json
-[
-  {"name": "Simon Willison", "url": "https://simonwillison.net", "type": "blog"},
-  {"name": "Every", "url": "https://every.to", "type": "blog"},
-  {"name": "SSHH Blog", "url": "https://blog.sshh.io", "type": "blog"},
-  {"name": "Lee Han Chung", "url": "https://leehanchung.github.io", "type": "blog"},
-  {"name": "Nick Nisi", "url": "https://nicknisi.com", "type": "blog"},
-  {"name": "HumanLayer", "url": "https://www.humanlayer.dev/blog", "type": "blog"},
-  {"name": "Chris Dzombak", "url": "https://www.dzombak.com/blog", "type": "blog"},
-  {"name": "GitButler", "url": "https://blog.gitbutler.com", "type": "blog"},
-  {"name": "Docker Blog", "url": "https://www.docker.com/blog", "type": "blog"},
-  {"name": "Nx Blog", "url": "https://nx.dev/blog", "type": "blog"},
-  {"name": "Yee Fei Ooi", "url": "https://medium.com/@ooi_yee_fei", "type": "blog"}
-]
-```
-
-### Community Indexes (Priority: Medium)
-
-```json
-[
-  {"name": "Awesome Claude Skills", "url": "https://github.com/travisvn/awesome-claude-skills", "type": "repo"},
-  {"name": "Awesome Claude Code", "url": "https://github.com/hesreallyhim/awesome-claude-code", "type": "repo"},
-  {"name": "Awesome Claude", "url": "https://github.com/alvinunreal/awesome-claude", "type": "repo"},
-  {"name": "SkillsMP", "url": "https://skillsmp.com", "type": "marketplace"},
-  {"name": "Awesome Claude AI", "url": "https://awesomeclaude.ai", "type": "directory"}
-]
-```
-
-### Tools (Priority: Medium)
-
-```json
-[
-  {"name": "Worktree", "url": "https://github.com/agenttools/worktree", "type": "repo"}
-]
-```
-
-### GitHub Search Queries (Priority: High)
-
-Use `gh search repos` and `gh search code` for discovery:
-
-```bash
-# Repos with recent Claude Code activity (last 7 days)
-# (GNU date syntax; on macOS use: date -v-7d +%Y-%m-%d)
-gh search repos "claude code" --pushed=">$(date -d '7 days ago' +%Y-%m-%d)" --sort=updated --limit=10
-
-# Hooks and skills hotspots
-gh search repos "claude code hooks" --pushed=">$(date -d '7 days ago' +%Y-%m-%d)" --sort=updated
-gh search repos "claude code skills" --pushed=">$(date -d '7 days ago' +%Y-%m-%d)" --sort=updated
-gh search repos "CLAUDE.md agent" --pushed=">$(date -d '7 days ago' +%Y-%m-%d)" --sort=updated
-
-# Topic-based discovery (often better signal)
-gh search repos --topic=claude-code --sort=updated --limit=10
-gh search repos --topic=model-context-protocol --sort=updated --limit=10
-
-# Code search for specific patterns
-gh search code "PreToolUse" --language=json --limit=5
-gh search code "PostToolUse" --language=json --limit=5
-```
-
-### Reddit Search (Priority: Medium)
-
-Use Firecrawl or web search for Reddit threads:
-
-```
-site:reddit.com/r/ClaudeAI "Claude Code" (hooks OR skills OR worktree OR tmux)
-```
-
-### Official Docs Search (Priority: High)
-
-Check for documentation updates:
-
-```
-site:code.claude.com hooks
-site:code.claude.com skills
-site:code.claude.com github-actions
-site:code.claude.com mcp
-```
-
-## Execution Steps
-
-### Step 1: Check State
-
-Read `news/state.json` to get:
-- `last_run` timestamp
-- `seen_urls` array for deduplication
-- `seen_commits` object for repo tracking
-
-If today's digest exists AND `--force` not specified:
-```
-Pulse digest already exists for today: news/{date}_pulse.md
-Use --force to regenerate.
-```
-
-### Optional: Parallel Pre-Fetch
-
-Run the parallel fetcher to cache sources before Step 2:
-```bash
-python pulse/fetch.py --sources all --max-workers 15 --output pulse/fetch_cache.json
-```
-
-### Step 2: Fetch Sources
-
-**For Blogs** - Use Firecrawl to fetch and extract recent articles:
-
-```bash
-# Fetch blog content via firecrawl
-firecrawl https://simonwillison.net --format markdown
-```
-
-Look for articles with dates in the last N days (default 1).
-
-**For GitHub Repos** - Use `gh` CLI:
-
-```bash
-# Get latest release
-gh api repos/anthropics/claude-code/releases/latest --jq '.tag_name, .published_at, .body'
-
-# Get recent commits
-gh api repos/anthropics/claude-code/commits --jq '.[:5] | .[] | {sha: .sha[:7], message: .commit.message, date: .commit.author.date}'
-
-# Get recent discussions (if enabled)
-gh api repos/anthropics/claude-code/discussions --jq '.[:3]'
-```
-
-**For Marketplaces/Directories** - Use Firecrawl:
+## Installation
 
 ```bash
-firecrawl https://skillsmp.com --format markdown
-firecrawl https://awesomeclaude.ai --format markdown
-```
-
-### Step 3: Filter & Deduplicate
-
-For each item found:
-1. Check if URL is in `seen_urls` - skip if yes
-2. Check if date is within lookback window - skip if older
-3. For repos, check if commit SHA matches `seen_commits[repo]` - skip if same
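
For reference, the three dedup checks removed above can be sketched in Python. This is an illustrative reconstruction, not the removed implementation; the item fields (`url`, `date`, `repo`, `sha`) are assumed names mirroring the state schema documented below.

```python
from datetime import datetime, timedelta, timezone

def filter_new_items(items, state, lookback_days=1):
    """Apply the three dedup checks: seen URL, stale date, unchanged commit SHA.

    `items` are dicts with 'url', an ISO-8601 'date', and optionally
    'repo'/'sha' for repository sources. `state` mirrors news/state.json.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=lookback_days)
    seen_urls = set(state.get("seen_urls", []))
    seen_commits = state.get("seen_commits", {})
    fresh = []
    for item in items:
        if item["url"] in seen_urls:
            continue  # check 1: already reported in a previous digest
        if datetime.fromisoformat(item["date"]) < cutoff:
            continue  # check 2: outside the lookback window
        repo = item.get("repo")
        if repo and seen_commits.get(repo) == item.get("sha"):
            continue  # check 3: repo unchanged since last run
        fresh.append(item)
    return fresh
```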
-
-### Step 4: Generate Summaries
-
-For each new item, generate:
-
-**Precis** (2-3 sentences):
-> Engaging summary that captures the key points and why someone would want to read this.
-
-**Relevance** (1 sentence):
-> How this specifically relates to or could improve our claude-mods project.
-
-Use this prompt pattern for each item:
-```
-Article: [title]
-URL: [url]
-Content: [extracted content]
-
-Generate:
-1. A 2-3 sentence engaging summary (precis)
-2. A 1-sentence assessment of relevance to "claude-mods" (a collection of Claude Code extensions including agents, skills, and commands)
+# Clone the repo
+git clone https://github.com/0xDarkMatter/pulse.git
 
-Format:
-PRECIS: [summary]
-RELEVANCE: [assessment]
+# Copy the command to your Claude Code commands
+cp pulse/COMMAND.md ~/.claude/commands/pulse.md
 ```
 
-### Step 5: Write Digest
-
-**IMPORTANT**: Read `pulse/BRAND_VOICE.md` before writing. Follow the voice guidelines.
+## Why It Moved
 
-Create `news/{YYYY-MM-DD}_pulse.md` with format:
+Pulse generates persistent state files and news digests that don't belong in a shared plugin repo. As a standalone tool, it can:
 
-```markdown
-# Pulse · {date in words}
-
-{Opening paragraph: Set the scene. What's the throughline this week? What should readers care about? Write conversationally, as if explaining to a smart friend.}
+- Track its own state without cluttering claude-mods
+- Have dedicated issue tracking for source curation
+- Evolve independently from the plugin release cycle
 
 ---
 
-## The Signal
-
-{1-3 most important/newsworthy items. These get:}
-- 2-paragraph summaries (150-200 words)
-- Extended "Pulse insights:" (2-3 sentences)
-- Source name linked to parent site
-
-### [{title}]({url})
-
-**[{source_name}]({source_parent_url})** · {date}
-
-{Paragraph 1: Hook + context. Start with something interesting—a question, surprising fact, or reframing.}
-
-{Paragraph 2: Substance + implications. What's actually in it and why it matters beyond the obvious.}
-
-**Pulse insights:** {Opinionated take on relevance to Claude Code practitioners. Be direct, take a stance.}
-
----
-
-## Official Updates
-
-{Other items from Anthropic sources}
-
-### [{title}]({url})
-
-**[{source_name}]({source_parent_url})** · {date}
-
-{1 paragraph summary (60-100 words). Hook + substance + implication in flowing narrative.}
-
-**Pulse insights:** {1-2 sentences. Practical, specific.}
-
----
-
-## GitHub Discoveries
-
-{New repos from topic/keyword searches}
-
-### [{repo_name}]({url})
-
-**{author}** · {one-line description}
-
-{1 paragraph on what it does and why it's interesting.}
-
-**Pulse insights:** {1-2 sentences.}
-
----
-
-## Community Radar
-
-{Notable community sources, blogs, discussions}
-
-### [{source_name}]({url}) — {pithy tagline}
-
-{2-3 sentences on what makes this source valuable.}
-
----
-
-## Quick Hits
-
-- **[{title}]({url})**: {one-line description}
-- ...
-
----
-
-## The Hit List
-
-1. **{Action}** — {Why}
-2. ...
-
----
-
-*{Randomised footer from BRAND_VOICE.md} · {date in words} · {suffix}*
-```
-
-### Step 6: Update State
-
-Update `news/state.json`:
-
-```json
-{
-  "version": "1.0",
-  "last_run": "{ISO timestamp}",
-  "seen_urls": [
-    "...existing...",
-    "...new urls from this run..."
-  ],
-  "seen_commits": {
-    "anthropics/claude-code": "{latest_sha}",
-    "anthropics/skills": "{latest_sha}",
-    ...
-  }
-}
-```
-
-Keep only last 30 days of URLs to prevent unbounded growth.
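
The Step 6 update can be sketched as below. Note the flat `seen_urls` list above carries no per-URL timestamps, so this sketch (an assumption, not the removed implementation) keeps a side map of first-seen dates to make the 30-day pruning possible, then derives `seen_urls` from it.

```python
from datetime import datetime, timedelta, timezone

def update_state(state, new_urls, latest_shas, now=None, keep_days=30):
    """Merge this run's results into the state dict and prune old URLs.

    `state["url_dates"]` is an addition over the flat-list schema: it maps
    each URL to the ISO date it was first seen, enabling date-based pruning.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=keep_days)
    url_dates = state.setdefault("url_dates", {})
    for url in new_urls:
        url_dates.setdefault(url, now.isoformat())
    # Drop URLs first seen more than `keep_days` ago
    state["url_dates"] = {
        u: d for u, d in url_dates.items()
        if datetime.fromisoformat(d) >= cutoff
    }
    state["seen_urls"] = sorted(state["url_dates"])
    state["seen_commits"] = {**state.get("seen_commits", {}), **latest_shas}
    state["last_run"] = now.isoformat()
    return state
```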
-
-### Step 7: Display Summary
-
-```
-Pulse: 2025-12-12
-
-Fetched 23 sources
-Found 8 new items (15 deduplicated)
-
-Critical:
-  - anthropics/claude-code v1.2.0 released
-
-Digest written to: news/2025-12-12_pulse.md
-```
-
-## Fetching Strategy
-
-**Priority Order**:
-1. Try `WebFetch` first (fastest, built-in)
-2. If 403/blocked/JS-heavy, use Firecrawl
-3. For GitHub repos, always use `gh` CLI
-
-**Parallel Fetching**:
-- Fetch multiple sources simultaneously
-- Use retry with exponential backoff (2s, 4s, 8s, 16s)
-- Report progress: `[====------] 12/23 sources`
-
-**Error Handling**:
-- If source fails after 4 retries, log and continue
-- Include failed sources in digest footer
-- Don't fail entire run for single source failure
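
The retry-with-backoff and continue-on-failure behaviour described above can be sketched as a small wrapper. The function and parameter names are illustrative; `sleep` is injectable so the delays can be tested without waiting.

```python
import time

def fetch_with_retry(fetch_fn, url, retries=4, base_delay=2.0, sleep=time.sleep):
    """Retry `fetch_fn(url)` with exponential backoff (2s, 4s, 8s, 16s by default).

    Returns (content, None) on success, or (None, last_error) after the final
    attempt, so one failing source never aborts the whole run.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return fetch_fn(url), None
        except Exception as exc:  # any fetch failure triggers a retry
            last_error = exc
            if attempt < retries - 1:
                sleep(base_delay * (2 ** attempt))
    return None, last_error
```

The caller collects `(url, error)` pairs from failed sources and lists them in the digest footer instead of raising.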
-
-## Output Example
-
-See `news/2025-12-12_pulse.md` for a complete example in the current format.
-
-Key elements:
-- Opening 2 paragraphs set the scene conversationally (see BRAND_VOICE.md)
-- "The Signal" section gets 2 paragraphs + extended insights
-- All source names link to parent sites
-- "Pulse insights:" replaces "Why it matters"
-- "The Hit List" for actionable items (not homework—marching orders)
-- Randomised footer from BRAND_VOICE.md variations
-
-## Edge Cases
-
-### No New Items
-```
-Pulse: 2025-12-12
-
-Fetched 23 sources
-No new items found (all 12 items already seen)
-
-Last digest: news/2025-12-11_pulse.md
-```
-
-### Source Failures
-```
-Pulse: 2025-12-12
-
-Fetched 23 sources (2 failed)
-Found 6 new items
-
-Failed sources:
-  - skillsmp.com (timeout after 4 retries)
-  - every.to (403 Forbidden)
-
-Digest written to: news/2025-12-12_pulse.md
-```
-
-### First Run (No State)
-Initialize state.json with empty arrays/objects before proceeding.
-
-## Integration
-
-The `/pulse` command is standalone but integrates with:
-- **claude-architect agent** - Reviews digests for actionable insights (configured in agent's startup)
-- **news/state.json** - Persistent deduplication state
-- **Firecrawl** - Primary fetching mechanism for blocked/JS sites
-- **gh CLI** - GitHub API access for repo updates
-
-## Notes
-
-- Run manually when you want ecosystem updates: `/pulse`
-- Use `--force` to regenerate today's digest with fresh data
-- Use `--days 7` for weekly catchup after vacation
-- Digests are git-trackable for historical reference
-- **Always read `pulse/BRAND_VOICE.md`** before writing summaries
+*See the [Pulse repository](https://github.com/0xDarkMatter/pulse) for full documentation.*

+ 0 - 0
news/.gitkeep


File diff suppressed because it is too large
+ 0 - 152
news/2025-12-12_pulse.md


+ 0 - 154
news/2025-12-13_pulse.md

@@ -1,154 +0,0 @@
-# Pulse · December 13, 2025
-
-The plugin ecosystem just exploded. Overnight, GitHub lit up with a dozen new Claude Code plugin repos—marketplaces, skill collections, workflow automations. Some are personal toolkits made public, others are production-ready frameworks with Linear integration and ADR generation. It's like watching a Cambrian explosion of developer tooling, except everyone's building for the same organism.
-
-What's driving this? Two things: Anthropic's plugin spec finally clicked, and people are realizing that the best way to make Claude Code work for you is to teach it your workflow. Not abstract "best practices," but your actual process—your commit style, your testing philosophy, your code review checklist. The repos popping up this week aren't just code; they're crystallized developer opinions. And that's exactly what Claude needs to be useful.
-
----
-
-## The Signal
-
-### [Very Important Agents](https://nicknisi.com/posts/very-important-agents)
-
-**[Nick Nisi](https://nicknisi.com)** · December 2025
-
-Nick's been on the Changelog & Friends podcast talking about Claude Code, and this post is the companion piece—a tour of the plugins that actually make it into his daily workflow. It's not a tutorial; it's a field report from someone who's been living in Claude Code for months and has strong opinions about what works.
-
-The real value here isn't the specific plugins (though those are useful), it's the meta-lesson: the best Claude Code setups are highly personal. Nick's workflow revolves around TypeScript, Vim keybindings, and a particular way of thinking about commits. His plugins encode those preferences. If you're still using vanilla Claude Code, this is a nudge to start building your own.
-
-**Pulse insights:** Nick's approach validates what we've been doing with claude-mods. The plugin-per-workflow pattern scales. Worth checking if any of his specific plugins solve problems we've been reinventing.
-
----
-
-### [How I Use Every Claude Code Feature](https://blog.sshh.io/p/how-i-use-every-claude-code-feature)
-
-**[SSHH Blog](https://blog.sshh.io)** · December 2025
-
-Shrivu Shankar wrote the brain dump we all needed—a systematic tour of Claude Code's feature surface, with concrete examples of when each thing actually matters. This isn't documentation; it's documentation filtered through heavy usage. The sections on hooks and skills are particularly good because they show the non-obvious interactions.
-
-What stands out: Shrivu doesn't just list features, he explains when *not* to use them. The bit about avoiding over-engineering skills is gold. Sometimes a simple CLAUDE.md instruction beats a formal skill definition. That kind of judgment only comes from shipping real work with these tools.
-
-**Pulse insights:** Required reading for anyone building plugins. The "when not to use it" framing should inform our own skill documentation. We should audit our skills for over-engineering.
-
----
-
-### [AI Can't Read Your Docs](https://blog.sshh.io/p/ai-cant-read-your-docs)
-
-**[SSHH Blog](https://blog.sshh.io)** · December 2025
-
-A counterintuitive claim: AI coding agents often struggle with well-documented codebases because the docs are optimized for humans, not LLMs. Shrivu argues for "LLM-native documentation"—structured metadata, explicit capability descriptions, examples that double as test cases. It's a provocation, but it's backed by real friction he's encountered.
-
-The deeper point: as AI agents become primary consumers of our code, we might need to write differently. Not dumbed-down, but differently structured. The implications for AGENTS.md files and skill descriptions are significant.
-
-**Pulse insights:** This should change how we write skill descriptions. Less prose, more structured capability declarations. Worth experimenting with the LLM-native doc format he proposes.
-
----
-
-## GitHub Discoveries
-
-The `claude-code` topic exploded this week. Here's what stood out:
-
-### [lazyclaude](https://github.com/NikiforovAll/lazyclaude)
-
-**NikiforovAll** · A lazygit-style TUI for visualizing Claude Code customizations
-
-Finally, someone built the obvious thing: a terminal UI for browsing your Claude Code setup. See your skills, agents, hooks, and commands in one navigable interface. It's early, but the concept is solid—configuration visibility matters when you're managing dozens of customizations.
-
-**Pulse insights:** This solves a real problem. Our `.claude/` directory is getting unwieldy. Worth watching—or contributing to.
-
----
-
-### [claude-deep-research](https://github.com/karanIPS/claude-deep-research)
-
-**karanIPS** · Deep Research workflow for Claude Code · ⭐ 20
-
-A pre-built configuration for running Claude Code in research mode—extended context, web search integration, structured output. It's essentially a "preset" for a specific use case, which is a pattern we haven't explored much.
-
-**Pulse insights:** The "preset for use case" pattern is interesting. We could package claude-mods configurations as installable presets rather than just loose files.
-
----
-
-### [claude-code-skills](https://github.com/levnikolaevich/claude-code-skills)
-
-**levnikolaevich** · 29 production-ready skills for Agile workflows · ⭐ 11
-
-The most ambitious skill collection we've seen—Epic/Story/Task management, Risk-Based Testing, ADR generation, all with Linear integration. It's opinionated (Agile-heavy), but the implementation quality is high. Real documentation, real tests.
-
-**Pulse insights:** This is what mature Claude Code extensions look like. The Linear integration is particularly clever—skills that talk to external systems, not just Claude. Steal liberally.
-
----
-
-### [gh-aw (GitHub Agentic Workflows)](https://github.com/githubnext/gh-aw)
-
-**githubnext** · GitHub's official agentic workflows experiment · ⭐ 265
-
-GitHub Next dropped this quietly: a framework for building AI-powered workflows that run on GitHub infrastructure. It's not Claude Code specific, but the patterns overlap heavily. Think of it as GitHub's answer to "what happens when agents need CI/CD."
-
-**Pulse insights:** This is GitHub signaling where they're headed. The workflow-as-code patterns here might become standard. Worth understanding even if we don't adopt directly.
-
----
-
-### [claude-code-openai](https://github.com/sar4daniela/claude-code-openai)
-
-**sar4daniela** · Run Claude Code on OpenAI models · ⭐ 10
-
-Exactly what it sounds like: a shim that lets you use Claude Code's interface with OpenAI's models. Useful for comparison testing or when you hit Claude rate limits. The existence of this suggests Claude Code's UX is good enough to port.
-
-**Pulse insights:** The "Claude Code as interface, any model as backend" pattern is growing. Good for ecosystem diversity. We should ensure our plugins are model-agnostic where possible.
-
----
-
-## Community Radar
-
-### [Claude Agent Skills: A First Principles Deep Dive](https://leehanchung.github.io/blogs/2025/10/26/claude-skills-deep-dive/)
-
-**[Lee Han Chung](https://leehanchung.github.io)** · Technical breakdown of how skills actually work
-
-Lee reverse-engineers the skill system from first principles—context injection, two-message patterns, LLM-based routing. If you've wondered why skills behave the way they do, this explains the mechanics. Dense but rewarding.
-
-**Pulse insights:** The "two-message pattern" explanation should be required reading before writing complex skills. We've been cargo-culting patterns without understanding why.
-
----
-
-### [GitButler 0.16 - "Sweet Sixteen"](https://blog.gitbutler.com/gitbutler-0-16)
-
-**[GitButler](https://blog.gitbutler.com)** · New release with Agents Tab and AI tool integrations
-
-GitButler now has a dedicated Agents Tab for AI tool configuration. The integration story is getting smoother—less context-switching between Git client and terminal. The "rules" feature for commit policies is clever.
-
-**Pulse insights:** GitButler is becoming the Git UI for the AI era. If you're not using it, you're making life harder than necessary.
-
----
-
-### [Think First, AI Second](https://every.to/p/think-first-ai-second)
-
-**[Every](https://every.to)** · Three principles for maintaining cognitive edge
-
-Every's take on not becoming dependent on AI for thinking. The principles are simple but important: formulate your own hypothesis before asking, review AI output critically, maintain skills you could do manually. Good hygiene for power users.
-
-**Pulse insights:** Worth internalizing. The "formulate before asking" principle applies directly to how we write prompts and skill descriptions.
-
----
-
-## Quick Hits
-
-- **[lazygit-style TUI](https://github.com/NikiforovAll/lazyclaude)**: Browse your Claude Code config visually—finally
-- **[GPT-5.2 coverage](https://simonwillison.net/2025/Dec/11/gpt-52/)**: Simon's breakdown of OpenAI's latest—useful for comparison context
-- **[Useful patterns for HTML tools](https://simonwillison.net/2025/Dec/10/html-tools/)**: Simon on building browser-based AI tools
-- **[claude-code-otel](https://github.com/thinktecture-labs/claude-code-otel)**: OpenTelemetry integration for Claude Code metrics—observability matters
-- **[wasp-lang/claude-plugins](https://github.com/wasp-lang/claude-plugins)**: Wasp framework's official Claude Code integration
-- **[spec-oxide](https://github.com/marconae/spec-oxide)**: Spec-driven development with MCP—Rust implementation
-
----
-
-## The Hit List
-
-1. **Install lazyclaude** — You need visibility into your `.claude/` directory chaos
-2. **Read Shrivu's feature guide** — Stop using Claude Code at 30% capacity
-3. **Audit your skills for over-engineering** — Sometimes CLAUDE.md instructions beat formal skills
-4. **Check claude-code-skills for Linear patterns** — If you use Linear, steal this integration
-5. **Consider presets** — Package your config as an installable setup, not loose files
-
----
-
-*Brewed by Pulse · 13th December 2025 · 19 articles, 15 repos, zero hallucinations*

File diff suppressed because it is too large
+ 0 - 142
news/2025-12-20_pulse.md


File diff suppressed because it is too large
+ 0 - 139
news/2025-12-23_pulse.md


+ 0 - 28
news/state.json

@@ -1,28 +0,0 @@
-{
-  "version": "1.0",
-  "last_run": "2025-12-20T12:00:00Z",
-  "seen_urls": [
-    "https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation",
-    "https://www.anthropic.com/news/compliance-framework-SB53",
-    "https://www.anthropic.com/news/genesis-mission-partnership",
-    "https://www.anthropic.com/news/protecting-well-being-of-users",
-    "https://www.anthropic.com/news/anthropic-accenture-partnership",
-    "https://simonwillison.net/2025/Dec/19/agent-skills/",
-    "https://simonwillison.net/2025/Dec/19/andrej-karpathy/",
-    "https://simonwillison.net/2025/Dec/19/introducing-gpt-52-codex/",
-    "https://simonwillison.net/2025/Dec/19/sam-rose-llms/",
-    "https://simonwillison.net/2025/Dec/18/code-proven-to-work/",
-    "https://simonwillison.net/2025/Dec/18/swift-justhtml/",
-    "https://www.docker.com/blog/develop-deploy-voice-ai-apps/",
-    "https://www.docker.com/blog/add-mcp-server-to-chatgpt/",
-    "https://www.docker.com/blog/docker-model-runner-universal-blue/",
-    "https://www.docker.com/blog/docker-hardened-images-security-independently-validated-by-srlabs/",
-    "https://www.docker.com/blog/from-the-captains-chair-igor-aleksandrov/",
-    "https://every.to/source-code/openai-gave-us-a-glimpse-into-their-ai-coding-playbook",
-    "https://every.to/p/how-ai-can-cut-your-planning-cycle-from-two-weeks-to-two-days",
-    "https://blog.gitbutler.com/gitbutler-with-multiple-accounts/",
-    "https://github.com/anthropics/skills",
-    "https://github.com/obra/superpowers"
-  ],
-  "seen_commits": {}
-}

+ 0 - 0
pulse/.gitkeep


File diff suppressed because it is too large
+ 0 - 173
pulse/BRAND_VOICE.md


+ 0 - 159
pulse/NEWS_TEMPLATE.md

@@ -1,159 +0,0 @@
-# Pulse · {{DATE_WORDS}}
-
-{{INTRO}}
-
----
-
-## The Signal
-
-{{#LEAD_STORIES}}
-### [{{TITLE}}]({{URL}})
-
-**[{{SOURCE_NAME}}]({{SOURCE_URL}})** · {{DATE}}
-
-{{SUMMARY_P1}}
-
-{{SUMMARY_P2}}
-
-**Pulse insights:** {{INSIGHTS}}
-
----
-{{/LEAD_STORIES}}
-
-## Official Updates
-
-{{#OFFICIAL}}
-### [{{TITLE}}]({{URL}})
-
-**[{{SOURCE_NAME}}]({{SOURCE_URL}})** · {{DATE}}
-
-{{SUMMARY}}
-
-**Pulse insights:** {{INSIGHTS}}
-
----
-{{/OFFICIAL}}
-
-## GitHub Discoveries
-
-{{GITHUB_INTRO}}
-
-{{#GITHUB_REPOS}}
-### [{{REPO_NAME}}]({{URL}})
-
-**{{AUTHOR}}** · {{ONE_LINER}}
-
-{{DESCRIPTION}}
-
-**Pulse insights:** {{INSIGHTS}}
-
----
-{{/GITHUB_REPOS}}
-
-## Community Radar
-
-{{#COMMUNITY}}
-### [{{ARTICLE_TITLE}}]({{ARTICLE_URL}})
-
-**[{{SOURCE_NAME}}]({{SOURCE_URL}})** · {{DATE}}
-
-{{SUMMARY}}
-
-**Pulse insights:** {{INSIGHTS}}
-
----
-{{/COMMUNITY}}
-
-## Quick Hits
-
-{{#QUICK_HITS}}
-- **[{{TITLE}}]({{URL}})**: {{ONE_LINER}}
-{{/QUICK_HITS}}
-
----
-
-## The Hit List
-
-{{#ACTION_ITEMS}}
-{{INDEX}}. **{{ACTION}}** — {{REASON}}
-{{/ACTION_ITEMS}}
-
----
-
-*{{FOOTER}}*
-
----
-
-## Template Variables Reference
-
-### Global
-- `{{DATE_WORDS}}` - e.g., "December 12, 2025"
-- `{{DATE_ISO}}` - e.g., "2025-12-12"
-- `{{SOURCE_COUNT}}` - Total sources fetched
-- `{{INTRO}}` - Opening 2-paragraph hook (see BRAND_VOICE.md)
-- `{{FOOTER}}` - Randomised sign-off (see BRAND_VOICE.md footer variations)
-
-### The Signal (1-3 items)
-- `{{TITLE}}` - Article/post title
-- `{{URL}}` - Direct link to content
-- `{{SOURCE_NAME}}` - e.g., "Anthropic Engineering"
-- `{{SOURCE_URL}}` - Parent site URL
-- `{{DATE}}` - Publication date
-- `{{SUMMARY_P1}}` - First paragraph (hook + context)
-- `{{SUMMARY_P2}}` - Second paragraph (substance + implications)
-- `{{INSIGHTS}}` - 2-3 sentence Pulse insights
-
-### Official Updates
-Same as Lead Stories but with single `{{SUMMARY}}` paragraph.
-
-### GitHub Discoveries
-- `{{GITHUB_INTRO}}` - Brief intro to the section
-- `{{REPO_NAME}}` - Repository name
-- `{{AUTHOR}}` - GitHub username
-- `{{ONE_LINER}}` - Brief description
-- `{{DESCRIPTION}}` - 1 paragraph explanation
-- `{{INSIGHTS}}` - 1-2 sentence insights
-
-### Community Radar
-- `{{ARTICLE_TITLE}}` - Specific article title (not just blog name)
-- `{{ARTICLE_URL}}` - Direct article link
-- `{{SOURCE_NAME}}` - Blog/publication name
-- `{{SOURCE_URL}}` - Blog homepage
-- `{{DATE}}` - Article date
-- `{{SUMMARY}}` - 1 paragraph summary
-- `{{INSIGHTS}}` - 1-2 sentence insights
-
-### Quick Hits (4-6 items)
-- `{{TITLE}}` - Item title
-- `{{URL}}` - Link
-- `{{ONE_LINER}}` - Pithy description (max 15 words)
-
-### The Hit List (3-5 items)
-- `{{INDEX}}` - Number (1, 2, 3...)
-- `{{ACTION}}` - What to do
-- `{{REASON}}` - Why it matters (brief)
-
----
-
-## Section Guidelines
-
-### Intro ({{INTRO}})
-Two paragraphs. First hooks with a question or surprising observation. Second expands with "here's what we found" energy. Should feel like the opening of a really good newsletter—makes you want to keep reading. Be cheeky, be specific, avoid clichés.
-
-### The Signal
-Reserve for genuinely important items:
-- Breaking news from Anthropic
-- Major ecosystem shifts
-- Tools/patterns that change how people work
-
-### Community Radar
-**Must include specific recent articles**, not just blog links. Each entry should be a piece of content published in the last 7 days, with its own summary and insights.
-
-### Quick Hits
-Rapid-fire items that don't need full treatment but are worth knowing. Good for:
-- Minor updates
-- Interesting repos without much to say
-- Things to bookmark for later
-
-### The Hit List
-Formerly "Actionable Items." Should feel like marching orders, not homework. Frame as opportunities, not obligations.

File diff suppressed because it is too large
+ 0 - 255
pulse/articles_cache.json


+ 0 - 401
pulse/fetch.py

@@ -1,401 +0,0 @@
-#!/usr/bin/env python3
-"""
-Pulse Fetch - Parallel URL fetching for Claude Code news digest.
-
-Uses asyncio + ThreadPoolExecutor to fetch multiple URLs via Firecrawl simultaneously.
-Outputs JSON with fetched content for LLM summarization.
-
-Usage:
-    python fetch.py                          # Fetch all sources
-    python fetch.py --sources blogs          # Fetch only blogs
-    python fetch.py --max-workers 20         # Increase parallelism
-    python fetch.py --output pulse.json
-    python fetch.py --discover-articles      # Extract recent articles from blog homepages
-"""
-
-import os
-import sys
-import json
-import re
-from datetime import datetime, timezone
-from concurrent.futures import ThreadPoolExecutor, as_completed
-from pathlib import Path
-from urllib.parse import urlparse, urljoin
-import argparse
-
-# Try to import firecrawl
-try:
-    from firecrawl import FirecrawlApp
-    FIRECRAWL_AVAILABLE = True
-except ImportError:
-    FIRECRAWL_AVAILABLE = False
-    print("Warning: firecrawl not installed. Install with: pip install firecrawl-py")
-
-# Sources configuration
-SOURCES = {
-    "official": [
-        {"name": "Anthropic Engineering", "url": "https://www.anthropic.com/engineering", "type": "blog"},
-        {"name": "Claude Blog", "url": "https://claude.com/blog", "type": "blog"},
-        {"name": "Claude Code Docs", "url": "https://code.claude.com", "type": "docs"},
-    ],
-    "blogs": [
-        {"name": "Simon Willison", "url": "https://simonwillison.net", "type": "blog"},
-        {"name": "Every", "url": "https://every.to", "type": "blog"},
-        {"name": "SSHH Blog", "url": "https://blog.sshh.io", "type": "blog"},
-        {"name": "Lee Han Chung", "url": "https://leehanchung.github.io", "type": "blog"},
-        {"name": "Nick Nisi", "url": "https://nicknisi.com", "type": "blog"},
-        {"name": "HumanLayer", "url": "https://www.humanlayer.dev/blog", "type": "blog"},
-        {"name": "Chris Dzombak", "url": "https://www.dzombak.com/blog", "type": "blog"},
-        {"name": "GitButler", "url": "https://blog.gitbutler.com", "type": "blog"},
-        {"name": "Docker Blog", "url": "https://www.docker.com/blog", "type": "blog"},
-        {"name": "Nx Blog", "url": "https://nx.dev/blog", "type": "blog"},
-        {"name": "Yee Fei Ooi", "url": "https://medium.com/@ooi_yee_fei", "type": "blog"},
-    ],
-    "community": [
-        {"name": "SkillsMP", "url": "https://skillsmp.com", "type": "marketplace"},
-        {"name": "Awesome Claude AI", "url": "https://awesomeclaude.ai", "type": "directory"},
-    ],
-}
-
-# Relevance keywords for filtering
-RELEVANCE_KEYWORDS = [
-    "claude", "claude code", "anthropic", "mcp", "model context protocol",
-    "agent", "skill", "subagent", "cli", "terminal", "prompt engineering",
-    "cursor", "windsurf", "copilot", "aider", "coding assistant", "hooks"
-]
-
-# Patterns to identify article links in markdown content
-ARTICLE_LINK_PATTERNS = [
-    # Standard markdown links with date-like paths
-    r'\[([^\]]+)\]\((https?://[^\)]+/\d{4}/[^\)]+)\)',
-    # Links with /blog/, /posts/, /p/ paths
-    r'\[([^\]]+)\]\((https?://[^\)]+/(?:blog|posts?|p|articles?)/[^\)]+)\)',
-    # Links with slugified titles (word-word-word pattern)
-    r'\[([^\]]+)\]\((https?://[^\)]+/[\w]+-[\w]+-[\w]+[^\)]*)\)',
-]
-
-# Exclude patterns (navigation, categories, tags, etc.)
-EXCLUDE_PATTERNS = [
-    r'/tag/', r'/category/', r'/author/', r'/page/', r'/archive/',
-    r'/about', r'/contact', r'/subscribe', r'/newsletter', r'/feed',
-    r'/search', r'/login', r'/signup', r'/privacy', r'/terms',
-    r'\.xml$', r'\.rss$', r'\.atom$', r'#', r'\?',
-]
-
-
-def fetch_url_firecrawl(app: 'FirecrawlApp', source: dict) -> dict:
-    """Fetch a single URL using Firecrawl API."""
-    url = source["url"]
-    name = source["name"]
-
-    try:
-        result = app.scrape(url, formats=['markdown'])
-
-        # Handle both dict and object responses
-        if hasattr(result, 'markdown'):
-            markdown = result.markdown or ''
-            metadata = result.metadata.__dict__ if hasattr(result.metadata, '__dict__') else {}
-        else:
-            markdown = result.get('markdown', '')
-            metadata = result.get('metadata', {})
-
-        return {
-            "name": name,
-            "url": url,
-            "type": source.get("type", "unknown"),
-            "status": "success",
-            "content": markdown[:50000],  # Limit content size
-            "title": metadata.get('title', name),
-            "description": metadata.get('description', ''),
-            "fetched_at": datetime.utcnow().isoformat() + "Z",
-        }
-    except Exception as e:
-        return {
-            "name": name,
-            "url": url,
-            "type": source.get("type", "unknown"),
-            "status": "error",
-            "error": str(e),
-            "fetched_at": datetime.utcnow().isoformat() + "Z",
-        }
-
-
-def get_firecrawl_api_key():
-    """Get Firecrawl API key from env or config file."""
-    # Try environment variable first
-    key = os.getenv('FIRECRAWL_API_KEY')
-    if key:
-        return key
-
-    # Try ~/.claude/delegate.yaml
-    config_path = os.path.expanduser("~/.claude/delegate.yaml")
-    if os.path.exists(config_path):
-        try:
-            with open(config_path, encoding="utf-8") as f:
-                content = f.read()
-            # Parse the api_keys block and find firecrawl
-            in_api_keys = False
-            for line in content.split('\n'):
-                stripped = line.strip()
-                if stripped.startswith('api_keys:'):
-                    in_api_keys = True
-                    continue
-                if in_api_keys and stripped and not line.startswith(' ') and not line.startswith('\t'):
-                    if not stripped.startswith('#'):
-                        in_api_keys = False
-                if in_api_keys and 'firecrawl:' in stripped.lower():
-                    match = re.search(r'firecrawl:\s*["\']?([^"\'\n#]+)', stripped, re.IGNORECASE)
-                    if match:
-                        return match.group(1).strip()
-        except Exception:
-            pass
-
-    return None
-
-
-def fetch_all_parallel(sources: list, max_workers: int = 10) -> list:
-    """Fetch all URLs in parallel using ThreadPoolExecutor."""
-    if not FIRECRAWL_AVAILABLE:
-        print("Error: firecrawl not available")
-        return []
-
-    api_key = get_firecrawl_api_key()
-    if not api_key:
-        print("Error: FIRECRAWL_API_KEY not set. Set env var or add to ~/.claude/delegate.yaml")
-        return []
-
-    app = FirecrawlApp(api_key=api_key)
-    results = []
-    total = len(sources)
-    completed = 0
-
-    print(f"Fetching {total} URLs with {max_workers} workers...")
-
-    with ThreadPoolExecutor(max_workers=max_workers) as executor:
-        # Submit all tasks
-        future_to_source = {
-            executor.submit(fetch_url_firecrawl, app, source): source
-            for source in sources
-        }
-
-        # Process results as they complete
-        for future in as_completed(future_to_source):
-            source = future_to_source[future]
-            completed += 1
-
-            try:
-                result = future.result()
-                results.append(result)
-                status = "OK" if result["status"] == "success" else "FAIL"
-                print(f"[{completed}/{total}] {status}: {source['name']}")
-            except Exception as e:
-                print(f"[{completed}/{total}] ERROR: {source['name']} - {e}")
-                results.append({
-                    "name": source["name"],
-                    "url": source["url"],
-                    "status": "error",
-                    "error": str(e),
-                })
-
-    return results
-
-
-def extract_article_links(content: str, base_url: str, max_articles: int = 5) -> list:
-    """Extract article links from markdown content."""
-    articles = []
-    seen_urls = set()
-    base_domain = urlparse(base_url).netloc
-
-    for pattern in ARTICLE_LINK_PATTERNS:
-        matches = re.findall(pattern, content)
-        for title, url in matches:
-            # Skip if already seen
-            if url in seen_urls:
-                continue
-
-            # Skip excluded patterns
-            if any(re.search(exc, url, re.IGNORECASE) for exc in EXCLUDE_PATTERNS):
-                continue
-
-            # Ensure same domain or relative URL
-            parsed = urlparse(url)
-            if parsed.netloc and parsed.netloc != base_domain:
-                continue
-
-            # Clean up title
-            title = title.strip()
-            if len(title) < 5 or len(title) > 200:
-                continue
-
-            # Skip generic link text
-            if title.lower() in ['read more', 'continue reading', 'link', 'here', 'click here']:
-                continue
-
-            seen_urls.add(url)
-            articles.append({
-                "title": title,
-                "url": url,
-            })
-
-    return articles[:max_articles]
-
-
-def discover_articles(sources: list, max_workers: int = 10, max_articles_per_source: int = 5) -> list:
-    """Fetch blog homepages and extract recent article links."""
-    if not FIRECRAWL_AVAILABLE:
-        print("Error: firecrawl not available")
-        return []
-
-    api_key = get_firecrawl_api_key()
-    if not api_key:
-        print("Error: FIRECRAWL_API_KEY not set. Set env var or add to ~/.claude/delegate.yaml")
-        return []
-
-    # First, fetch all blog homepages
-    print(f"Phase 1: Fetching {len(sources)} blog homepages...")
-    homepage_results = fetch_all_parallel(sources, max_workers=max_workers)
-
-    # Extract article links from each
-    all_articles = []
-    print(f"\nPhase 2: Extracting article links...")
-
-    for result in homepage_results:
-        if result["status"] != "success":
-            continue
-
-        content = result.get("content", "")
-        base_url = result["url"]
-        source_name = result["name"]
-
-        articles = extract_article_links(content, base_url, max_articles=max_articles_per_source)
-        print(f"  {source_name}: found {len(articles)} articles")
-
-        for article in articles:
-            all_articles.append({
-                "name": article["title"],
-                "url": article["url"],
-                "type": "article",
-                "source_name": source_name,
-                "source_url": base_url,
-            })
-
-    if not all_articles:
-        print("No articles found to fetch")
-        return homepage_results
-
-    # Phase 3: Fetch individual articles
-    print(f"\nPhase 3: Fetching {len(all_articles)} individual articles...")
-    article_results = fetch_all_parallel(all_articles, max_workers=max_workers)
-
-    # Attach source info by URL: as_completed yields results in completion
-    # order, so positional matching against all_articles would mis-pair them
-    source_by_url = {a["url"]: a for a in all_articles}
-    for result in article_results:
-        src = source_by_url.get(result.get("url"), {})
-        result["source_name"] = src.get("source_name", "")
-        result["source_url"] = src.get("source_url", "")
-
-    return article_results
-
-
-def filter_relevant_content(results: list) -> list:
-    """Filter results to only those with Claude Code relevant content."""
-    relevant = []
-
-    for result in results:
-        if result["status"] != "success":
-            continue
-
-        content = ((result.get("content") or "") + " " +
-                   (result.get("title") or "") + " " +
-                   (result.get("description") or "")).lower()
-
-        # Check for relevance keywords
-        for keyword in RELEVANCE_KEYWORDS:
-            if keyword.lower() in content:
-                result["relevant_keyword"] = keyword
-                relevant.append(result)
-                break
-
-    return relevant
-
-
-def main():
-    parser = argparse.ArgumentParser(description="Pulse Fetch - Parallel URL fetching")
-    parser.add_argument("--sources", choices=["all", "official", "blogs", "community"],
-                        default="all", help="Source category to fetch")
-    parser.add_argument("--max-workers", type=int, default=10,
-                        help="Maximum parallel workers (default: 10)")
-    parser.add_argument("--output", "-o", type=str, default=None,
-                        help="Output JSON file (default: stdout)")
-    parser.add_argument("--filter-relevant", action="store_true",
-                        help="Only include results with relevant keywords")
-    parser.add_argument("--discover-articles", action="store_true",
-                        help="Extract and fetch individual articles from blog homepages")
-    parser.add_argument("--max-articles-per-source", type=int, default=5,
-                        help="Max articles to fetch per source (default: 5)")
-    args = parser.parse_args()
-
-    # Collect sources based on selection
-    if args.sources == "all":
-        sources = []
-        for category in SOURCES.values():
-            sources.extend(category)
-    else:
-        sources = SOURCES.get(args.sources, [])
-
-    if not sources:
-        print(f"No sources found for category: {args.sources}")
-        return 1
-
-    # Fetch URLs - either discover articles or just fetch homepages
-    if args.discover_articles:
-        # Filter to only blog-type sources for article discovery
-        blog_sources = [s for s in sources if s.get("type") == "blog"]
-        if not blog_sources:
-            print("No blog sources found for article discovery")
-            return 1
-        results = discover_articles(
-            blog_sources,
-            max_workers=args.max_workers,
-            max_articles_per_source=args.max_articles_per_source
-        )
-    else:
-        results = fetch_all_parallel(sources, max_workers=args.max_workers)
-
-    # Filter if requested
-    if args.filter_relevant:
-        results = filter_relevant_content(results)
-        print(f"\nFiltered to {len(results)} relevant results")
-
-    # Prepare output
-    output = {
-        "fetched_at": datetime.utcnow().isoformat() + "Z",
-        "total_sources": len(sources),
-        "successful": len([r for r in results if r.get("status") == "success"]),
-        "failed": len([r for r in results if r.get("status") != "success"]),
-        "results": results,
-    }
-
-    # Output
-    json_output = json.dumps(output, indent=2)
-
-    if args.output:
-        Path(args.output).write_text(json_output, encoding="utf-8")
-        print(f"\nResults saved to: {args.output}")
-    else:
-        print("\n" + "=" * 60)
-        print("RESULTS")
-        print("=" * 60)
-        print(json_output)
-
-    # Summary
-    print(f"\n{'=' * 60}")
-    print(f"SUMMARY: {output['successful']}/{output['total_sources']} successful")
-    print(f"{'=' * 60}")
-
-    return 0
-
-
-if __name__ == "__main__":
-    sys.exit(main())

+ 0 - 168
pulse/fetch_cache.json

File diff suppressed because it is too large


+ 0 - 45
pulse/state.json

@@ -1,45 +0,0 @@
-{
-  "version": "1.0",
-  "last_run": "2025-12-23T12:00:00Z",
-  "seen_urls": [
-    "https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents",
-    "https://www.anthropic.com/engineering/advanced-tool-use",
-    "https://www.anthropic.com/engineering/code-execution-with-mcp",
-    "https://www.anthropic.com/engineering/claude-code-sandboxing",
-    "https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills",
-    "https://github.com/anthropics/claude-code/releases/tag/v2.0.74",
-    "https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation",
-    "https://www.anthropic.com/news/compliance-framework-SB53",
-    "https://www.anthropic.com/news/genesis-mission-partnership",
-    "https://www.anthropic.com/news/protecting-well-being-of-users",
-    "https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools",
-    "https://github.com/musistudio/claude-code-router",
-    "https://github.com/wshobson/agents",
-    "https://github.com/thedotmack/claude-mem",
-    "https://github.com/ryoppippi/ccusage",
-    "https://github.com/czlonkowski/n8n-mcp",
-    "https://github.com/obra/superpowers",
-    "https://simonwillison.net/2025/Dec/22/claude-chrome-cloudflare/",
-    "https://simonwillison.net/2025/Dec/18/code-proven-to-work/",
-    "https://simonwillison.net/2025/Dec/17/gemini-3-flash/",
-    "https://skillsmp.com",
-    "https://awesomeclaude.ai",
-    "https://simonwillison.net"
-  ],
-  "seen_commits": {
-    "anthropics/claude-code": "d213a74",
-    "anthropics/skills": "69c0b1a"
-  },
-  "digest_history": [
-    {
-      "date": "2025-12-12",
-      "file": "2025-12-12_pulse.md",
-      "items_count": 24
-    },
-    {
-      "date": "2025-12-23",
-      "file": "2025-12-23_pulse.md",
-      "items_count": 18
-    }
-  ]
-}