@@ -1,6 +1,6 @@
---
name: introspect
-description: "Analyze Claude Code session logs - extract thinking blocks, tool usage stats, error patterns, debug trajectories. Triggers on: introspect, session logs, trajectory, analyze sessions, what went wrong, tool usage, thinking blocks, session history, my reasoning, past sessions, what did I do."
+description: "Analyze Claude Code session logs and surface productivity improvements. Extracts thinking blocks, tool usage stats, error patterns, debug trajectories - then generates friendly, actionable recommendations. Triggers on: introspect, session logs, trajectory, analyze sessions, what went wrong, tool usage, thinking blocks, session history, my reasoning, past sessions, what did I do, how can I improve."
allowed-tools: "Bash Read Grep Glob"
related-skills: [log-ops, data-processing]
---
@@ -166,6 +166,91 @@ cat session.jsonl | jq -r 'select(.type == "assistant") | .message.content[]? |
rg '"tool_use"' session.jsonl | jq -r '.message.content[]? | select(.type == "tool_use") | .name'
```

+## Session Insights
+
+After running any analysis, **always** generate a "Session Insights" section with actionable recommendations. The tone should be friendly and mentoring - a helpful colleague who noticed some patterns, not a critic.
+
+### What to Look For
+
+Scan the session data for these patterns and generate recommendations where relevant:
+
+**Tool usage improvements:**
+- Using `Bash(grep ...)` or `Bash(cat ...)` instead of the Grep/Read tools - suggest the dedicated tools
+- Repeated failed tool calls with the same input - suggest pivoting earlier
+- Heavy Bash usage for tasks a skill handles - recommend the skill by name
+- No use of parallel tool calls when independent reads/searches could overlap
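Several of these checks can be mechanized. A minimal sketch of the tool-frequency check, using inline sample records in place of a real log (the records mirror the `session.jsonl` schema the extraction commands above assume):

```shell
# Illustrative sample records standing in for a real session.jsonl
cat > /tmp/session.jsonl <<'EOF'
{"type":"assistant","message":{"content":[{"type":"tool_use","name":"Bash","input":{"command":"grep -r foo ."}}]}}
{"type":"assistant","message":{"content":[{"type":"tool_use","name":"Bash","input":{"command":"cat src/app.py"}}]}}
{"type":"assistant","message":{"content":[{"type":"tool_use","name":"Read","input":{"file_path":"src/app.py"}}]}}
EOF

# Rank tools by call count - a Bash total dwarfing Grep/Read/Glob suggests
# the dedicated tools are being bypassed
jq -r 'select(.type == "assistant") | .message.content[]? | select(.type == "tool_use") | .name' /tmp/session.jsonl |
  sort | uniq -c | sort -rn
```

In this sample, `Bash(grep ...)` and `Bash(cat ...)` outrank the Read tool, which is exactly the first pattern above.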
+
+**Workflow patterns:**
+- Long sessions with no commits - suggest incremental commits
+- No `/save` at session end - remind about session continuity
+- Repeated context re-reading (the same files read 3+ times) - suggest keeping notes or using `/save`
+- Large blocks of manual work that `/iterate` could automate - mention the loop pattern
+- Debugging spirals (5+ attempts at the same approach) - suggest the 3-attempt pivot rule
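The re-reading pattern is straightforward to detect. A sketch with inline sample records - note that `input.file_path` as the Read tool's logged field is an assumption to verify against your own session files:

```shell
# Illustrative records; input.file_path for the Read tool is an assumed field
cat > /tmp/session.jsonl <<'EOF'
{"type":"assistant","message":{"content":[{"type":"tool_use","name":"Read","input":{"file_path":"src/app.py"}}]}}
{"type":"assistant","message":{"content":[{"type":"tool_use","name":"Read","input":{"file_path":"src/app.py"}}]}}
{"type":"assistant","message":{"content":[{"type":"tool_use","name":"Read","input":{"file_path":"src/app.py"}}]}}
{"type":"assistant","message":{"content":[{"type":"tool_use","name":"Read","input":{"file_path":"README.md"}}]}}
EOF

# Files read 3+ times - candidates for the keep-notes-or-/save suggestion
jq -r '.message.content[]? | select(.type == "tool_use" and .name == "Read") | .input.file_path' /tmp/session.jsonl |
  sort | uniq -c | awk '$1 >= 3 {print $2}'
```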
+
+**Skill and command awareness:**
+- Manual code review without `/review` - mention the skill
+- Test writing without `/testgen` - mention the skill
+- Complex reasoning without `/atomise` - mention it for hard problems
+- Agent spawning for tasks a skill already handles - suggest the lighter-weight option
+
+**Session efficiency:**
+- Very long sessions (100+ tool calls) - suggest breaking the work into focused sessions
+- High error rate (>30% of tool calls return errors) - note the pattern and suggest investigation
+- High ratio of file reads to edits (>10:1) - might indicate uncertainty; suggest planning first
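The error-rate threshold can be computed directly. A sketch with inline samples - the assumption here is that failures surface as `tool_result` blocks carrying `is_error: true`, which may differ in your logs:

```shell
# Illustrative records; is_error on tool_result blocks is an assumed convention
cat > /tmp/session.jsonl <<'EOF'
{"type":"user","message":{"content":[{"type":"tool_result","is_error":true}]}}
{"type":"user","message":{"content":[{"type":"tool_result"}]}}
{"type":"user","message":{"content":[{"type":"tool_result"}]}}
EOF

# Percentage of tool results that errored, rounded - flag anything over 30
jq -s '[.[] | .message.content[]? | select(.type == "tool_result")] | (map(select(.is_error == true)) | length) * 100 / length | round' /tmp/session.jsonl
```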
+
+**Permission recommendations:**
+This is high-value - permission prompts break flow and cost context. Scan the session for:
+- Bash commands that were used successfully and repeatedly - these are candidates for `Bash(<command>:*)` allow rules
+- Tool patterns that appeared frequently (WebFetch, WebSearch, specific MCP tools) - suggest adding them to the allow list
+- Commands that failed with permission errors or were retried - likely blocked by missing permissions
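The candidate scan itself can be sketched the same way - `input.command` as the Bash tool's payload field is an assumption to verify:

```shell
# Illustrative records; input.command for the Bash tool is an assumed field
cat > /tmp/session.jsonl <<'EOF'
{"type":"assistant","message":{"content":[{"type":"tool_use","name":"Bash","input":{"command":"git status"}}]}}
{"type":"assistant","message":{"content":[{"type":"tool_use","name":"Bash","input":{"command":"git diff"}}]}}
{"type":"assistant","message":{"content":[{"type":"tool_use","name":"Bash","input":{"command":"pytest -q"}}]}}
EOF

# First word of each Bash command, ranked - repeat offenders become
# Bash(<command>:*) allow-rule candidates
jq -r '.message.content[]? | select(.type == "tool_use" and .name == "Bash") | .input.command | split(" ")[0]' /tmp/session.jsonl |
  sort | uniq -c | sort -rn
```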
+
+Generate a ready-to-paste `permissions.allow` snippet for `.claude/settings.local.json`:
+
+```
+**Permissions**
+
+Based on this session, these tools were used frequently and could be
+pre-approved to reduce interruptions:
+
+Add to `.claude/settings.local.json` under `permissions.allow`:
+  "Bash(uv:*)",
+  "Bash(pytest:*)",
+  "Bash(docker:*)",
+  "WebSearch",
+  "mcp__my-server__*"
+
+This would have saved roughly N permission prompts in this session.
+```
+
+Only recommend commands that were actually used successfully - never suggest a blanket `Bash(*)`. Do encourage wildcard patterns where sensible, though - `Bash(git:*)` is better than listing `Bash(git status)`, `Bash(git add)`, and `Bash(git commit)` separately. Common wildcard groups:
+
+- `Bash(git:*)` - all git operations
+- `Bash(npm:*)`, `Bash(pnpm:*)`, `Bash(uv:*)` - package managers
+- `Bash(pytest:*)`, `Bash(jest:*)` - test runners
+- `Bash(docker:*)`, `Bash(cargo:*)`, `Bash(go:*)` - build/runtime tools
+- `mcp__server-name__*` - all tools from an MCP server
+
+Group recommendations by category for clarity. Reference `/setperms` for a full permissions setup if the list is long.
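For orientation, a sketch of the surrounding file shape the snippet slots into, assuming the standard `permissions.allow` layout of `.claude/settings.local.json` (entries here are examples, not recommendations):

```json
{
  "permissions": {
    "allow": [
      "Bash(git:*)",
      "Bash(pytest:*)",
      "WebSearch"
    ]
  }
}
```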
+
+### Output Format
+
+```
+## Session Insights
+
+Here are a few things I noticed that might help next time:
+
+**[Category]**
+- [Observation]: [What was seen in the data]
+- [Suggestion]: [Friendly, specific recommendation]
+
+**[Category]**
+- ...
+```
+
+Scale the number of recommendations to the session size. A quick 10-minute session might warrant 1-2 observations; a sprawling multi-hour session with 100+ tool calls could easily yield 10-15 actionable insights. Match the depth of analysis to the depth of the session.
+
+Only mention patterns that are clearly present in the data - don't guess or stretch. If the session was clean and efficient, say so: "This was a well-structured session - nothing jumps out as needing improvement."
+
## Reference Files

| File | Contents |