OpenCode maintains a comprehensive local logging system that stores all session data, messages, tool calls, and agent interactions. This document provides a complete reference for understanding and working with OpenCode's native logging infrastructure.
Key Benefits:
OpenCode Session Storage
├── Session Info (metadata)
├── Messages (user/assistant exchanges)
└── Parts (content, tools, reasoning, patches)
Storage Location:
~/.local/share/opencode/
├── log/ # System logs (markdown format)
└── project/
└── {project-name}/
└── storage/
└── session/
├── info/ # Session metadata
├── message/ # Message metadata
└── part/ # Message parts (content, tools, etc.)
~/.local/share/opencode/project/{project-name}/storage/
├── migration # Migration version file
└── session/
├── info/ # Session-level metadata
│ └── {session-id}.json # One file per session
├── message/ # Message-level metadata
│ └── {session-id}/ # Directory per session
│ └── {message-id}.json # One file per message
└── part/ # Message parts (actual content)
└── {session-id}/ # Directory per session
└── {message-id}/ # Directory per message
└── {part-id}.json # One file per part
ID Formats:
- Session IDs: `ses_` + 22 characters (e.g., ses_75545f167ffeWfPEtVLVSaFfjA)
- Message IDs: `msg_` + 22 characters (e.g., msg_8aaba0e9b0017D9LcZ9dMTd6xA)
- Part IDs: `prt_` + 22 characters (e.g., prt_8aaba0e9b0027HZzFFZtM4MJnH)
- Tool call IDs: `toolu_` + 24 characters (e.g., toolu_01MMtz9DzLzQMow41iLpThHB)

Location: session/info/{session-id}.json
Purpose: High-level session metadata
Schema:
interface SessionInfo {
id: string // ses_xxxxx
version: string // OpenCode version (e.g., "0.5.1")
title: string // Session title/description
time: {
created: number // Unix timestamp (ms)
updated: number // Unix timestamp (ms)
}
}
Example:
{
"id": "ses_75545f167ffeWfPEtVLVSaFfjA",
"version": "0.5.1",
"title": "Creating RAG agent tasks and workflow",
"time": {
"created": 1755210976920,
"updated": 1755210977929
}
}
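The `time` values are Unix epoch milliseconds, so they convert directly with JavaScript's `Date`:

```typescript
// "created"/"updated" are Unix epoch milliseconds; convert with Date.
const createdMs = 1755210976920; // "time.created" from the example above
const createdIso = new Date(createdMs).toISOString();
console.log(createdIso); // → "2025-08-14T22:36:16.920Z"
```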
Location: session/message/{session-id}/{message-id}.json
Purpose: Message-level metadata including agent info, tokens, cost, timing
Schema:
interface Message {
id: string // msg_xxxxx
role: "user" | "assistant"
sessionID: string // ses_xxxxx
// Agent Information (assistant messages only)
mode?: string // 🎯 Agent name/mode
system?: string[] // System prompt(s)
// Model Information
modelID?: string // e.g., "claude-sonnet-4-20250514"
providerID?: string // e.g., "anthropic"
// Context
path?: {
cwd: string // Working directory
root: string // Project root
}
// Metrics
cost?: number // API cost
tokens?: {
input: number
output: number
reasoning: number
cache: {
write: number // Cache write tokens
read: number // Cache read tokens
}
}
// Timing
time: {
created: number // Unix timestamp (ms)
completed?: number // Unix timestamp (ms) - assistant only
}
// Error Handling
error?: {
name: string
data: any
}
}
Example (User Message):
{
"id": "msg_8aaba0e9b0017D9LcZ9dMTd6xA",
"role": "user",
"sessionID": "ses_75545f167ffeWfPEtVLVSaFfjA",
"time": {
"created": 1755210976926
}
}
Example (Assistant Message):
{
"id": "msg_8aaba0ee3001YGUGe4U2GF5oAZ",
"role": "assistant",
"sessionID": "ses_75545f167ffeWfPEtVLVSaFfjA",
"mode": "code-base-agent",
"system": ["You are Claude Code...", "# Agent Instructions..."],
"modelID": "claude-sonnet-4-20250514",
"providerID": "anthropic",
"path": {
"cwd": "/Users/user/project",
"root": "/Users/user/project"
},
"cost": 0,
"tokens": {
"input": 8,
"output": 81,
"reasoning": 0,
"cache": {
"write": 376,
"read": 9348
}
},
"time": {
"created": 1755210976995,
"completed": 1755210987509
}
}
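Per-message latency and total token usage fall out of this metadata directly. A minimal sketch using the field names from the schema above (not an official OpenCode API):

```typescript
// Token usage shape from the Message schema above.
interface Tokens {
  input: number;
  output: number;
  reasoning: number;
  cache: { write: number; read: number };
}

interface MessageMeta {
  time: { created: number; completed?: number };
  tokens?: Tokens;
}

// Derive wall-clock duration (ms) and total tokens consumed by one message.
function summarizeMessage(msg: MessageMeta) {
  const durationMs =
    msg.time.completed !== undefined ? msg.time.completed - msg.time.created : null;
  const totalTokens = msg.tokens
    ? msg.tokens.input + msg.tokens.output + msg.tokens.reasoning +
      msg.tokens.cache.read + msg.tokens.cache.write
    : 0;
  return { durationMs, totalTokens };
}
```

Applied to the assistant example above, this yields a 10,514 ms duration and 9,813 total tokens (most of them cache reads).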
Location: session/part/{session-id}/{message-id}/{part-id}.json
Purpose: Actual message content, tool calls, reasoning, file changes, etc.
Part Types:
- text - Text content (user input, assistant responses)
- tool - Tool calls (read, write, edit, bash, task, etc.)
- patch - File changes/commits
- reasoning - Scratchpad/thinking content
- step-start - Step boundary markers
- step-finish - Step boundary markers
- file - File references

Purpose: Text content from user or assistant
Schema:
interface TextPart {
id: string // prt_xxxxx
messageID: string // msg_xxxxx
sessionID: string // ses_xxxxx
type: "text"
text: string // The actual text content
synthetic?: boolean // Whether text is synthetic
time: {
start: number // Unix timestamp (ms)
end: number // Unix timestamp (ms)
}
}
Example:
{
"id": "prt_8aaba0e9b0027HZzFFZtM4MJnH",
"type": "text",
"text": "I want you to make a tasks for making a simple rag agent",
"synthetic": false,
"time": {
"start": 0,
"end": 0
},
"messageID": "msg_8aaba0e9b0017D9LcZ9dMTd6xA",
"sessionID": "ses_75545f167ffeWfPEtVLVSaFfjA"
}
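A message's visible content may be split across several text parts, so reassembling it means sorting on `time.start` and concatenating. A small sketch assuming the TextPart shape above:

```typescript
// Minimal part shape needed for text reassembly (fields from the TextPart schema).
interface TextLike {
  type: string;
  text?: string;
  time?: { start: number };
}

// Rebuild a message's visible text from its parts, in chronological order.
function messageText(parts: TextLike[]): string {
  return parts
    .filter(p => p.type === 'text')
    .sort((a, b) => (a.time?.start ?? 0) - (b.time?.start ?? 0))
    .map(p => p.text ?? '')
    .join('\n');
}
```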
Purpose: Tool execution tracking with input/output/status
Schema:
interface ToolPart {
id: string // prt_xxxxx
messageID: string // msg_xxxxx
sessionID: string // ses_xxxxx
type: "tool"
tool: string // Tool name: read, write, edit, bash, task, list, glob, grep, etc.
callID: string // toolu_xxxxx (Anthropic tool call ID)
state: {
status: "completed" | "error" | "pending" | "running"
input: Record<string, any> // Tool input arguments
output?: string // Tool output (if completed)
error?: string // Error message (if error)
metadata?: Record<string, any> // Additional metadata
title?: string // Tool result title
time: {
start: number // Unix timestamp (ms)
end: number // Unix timestamp (ms)
}
}
}
Example (Completed Tool Call):
{
"id": "prt_8aaba2f13001n61qUzBQLt9alA",
"messageID": "msg_8aaba0ee3001YGUGe4U2GF5oAZ",
"sessionID": "ses_75545f167ffeWfPEtVLVSaFfjA",
"type": "tool",
"tool": "list",
"callID": "toolu_01RGumBKHDY59QqVFf1sf16W",
"state": {
"status": "completed",
"input": {
"path": "/Users/user/project"
},
"output": "/Users/user/project/\n .opencode/\n agent/\n...",
"metadata": {
"count": 9,
"truncated": false
},
"title": "",
"time": {
"start": 1755210985762,
"end": 1755210985770
}
}
}
Example (Error Tool Call):
{
"id": "prt_8aaba25b5001qNq116TyFhdZiB",
"messageID": "msg_8aaba0ee3001YGUGe4U2GF5oAZ",
"sessionID": "ses_75545f167ffeWfPEtVLVSaFfjA",
"type": "tool",
"tool": "read",
"callID": "toolu_01MMtz9DzLzQMow41iLpThHB",
"state": {
"status": "error",
"input": {
"filePath": "/path/to/missing/file.md"
},
"error": "Error: ENOENT: no such file or directory",
"time": {
"start": 1755210983452,
"end": 1755210983456
}
}
}
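From the `state` object, each call's outcome and wall-clock duration can be derived. A sketch following the ToolPart schema above (field names from the examples, not an official API):

```typescript
// Minimal tool state shape (from the ToolPart schema above).
interface ToolState {
  status: string;
  error?: string;
  time?: { start: number; end: number };
}

// Classify a tool call and compute its duration in milliseconds.
function toolOutcome(state: ToolState): { failed: boolean; durationMs: number | null } {
  const failed = state.status === 'error';
  const durationMs = state.time ? state.time.end - state.time.start : null;
  return { failed, durationMs };
}
```

For the error example above this reports a failure that took 4 ms.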
Tool Call Statuses:
- completed - Tool executed successfully (95.3% of calls)
- error - Tool execution failed (4.1% of calls)
- running - Tool currently executing (0.5% of calls)
- pending - Tool queued for execution (0.2% of calls)

Purpose: Track file changes and git commits
Schema:
interface PatchPart {
id: string // prt_xxxxx
messageID: string // msg_xxxxx
sessionID: string // ses_xxxxx
type: "patch"
hash: string // Git commit hash
files: string[] // Array of file paths changed
}
Example:
{
"id": "prt_8e33bae36001OeMs7f18gvEDJN",
"messageID": "msg_8e337af2b001raZR3TdjPshyb0",
"sessionID": "ses_71cc8d5d1ffeBTr9L0ESbEXfhv",
"type": "patch",
"hash": "de359d2a9d9edcd5504ad6f25e567e60275b6377",
"files": [
"/Users/user/project/tasks/subtasks/feature/01-setup.md",
"/Users/user/project/tasks/subtasks/feature/02-config.md"
]
}
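Patch parts make it straightforward to build a deduplicated list of every file a session touched. A sketch assuming the PatchPart shape above:

```typescript
// Minimal patch shape (from the PatchPart schema above).
interface PatchLike {
  type: string;
  files?: string[];
}

// Gather the unique, sorted set of file paths changed across a list of parts.
function changedFiles(parts: PatchLike[]): string[] {
  const files = new Set<string>();
  for (const p of parts) {
    if (p.type === 'patch' && p.files) {
      for (const f of p.files) files.add(f);
    }
  }
  return [...files].sort();
}
```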
Purpose: Capture agent's internal reasoning/scratchpad
Schema:
interface ReasoningPart {
id: string // prt_xxxxx
messageID: string // msg_xxxxx
sessionID: string // ses_xxxxx
type: "reasoning"
text: string // Reasoning content
time: {
start: number // Unix timestamp (ms)
end: number // Unix timestamp (ms)
}
}
Purpose: Mark step start/finish for workflow tracking
Schema:
interface StepPart {
id: string // prt_xxxxx
messageID: string // msg_xxxxx
sessionID: string // ses_xxxxx
type: "step-start" | "step-finish"
tool: null
}
The mode field in assistant messages identifies which agent handled the message.
Agent Modes Found:
| Mode | Count | Description |
|---|---|---|
| build | 69 | Build/validation agent |
| codebase-agent | 33 | Codebase analysis agent |
| general | 17 | General purpose agent |
| core | 16 | Core agent |
| code-base-agent | 8 | Code base agent (variant) |
| task-manager | 1 | Task management agent |
| plan | 1 | Planning agent |
Note: Agent mode names may vary based on your .opencode/agent/ configuration.
The modelID and providerID fields in assistant messages track which AI model and provider handled each message.
| Model | Provider | Usage Count | Percentage | Agents Using It |
|---|---|---|---|---|
| claude-sonnet-4-20250514 | anthropic | 117 | 81% | build, code-base-agent, codebase-agent, core, general, plan |
| sonic | opencode | 14 | 10% | build, codebase-agent |
| claude-3-5-sonnet-20241022 | anthropic | 8 | 6% | build |
| gemini-2.5-flash | google | 5 | 3% | build, code-base-agent, core, task-manager |
| qwen/qwen3-coder | openrouter | 1 | <1% | core |
Provider Usage:
├── anthropic: 125 (86.2%)
├── opencode: 14 (9.7%)
├── google: 5 (3.4%)
└── openrouter: 1 (0.7%)
Based on real session data, here's how models compare:
Claude Sonnet 4 (claude-sonnet-4-20250514):
Sonic (opencode):
Gemini 2.5 Flash (google):
Claude 3.5 Sonnet (claude-3-5-sonnet-20241022):
Total Parts: ~5,787
├── step-start: 1,334 (23.1%)
├── step-finish: 1,313 (22.7%)
├── tool: 1,290 (22.3%)
├── text: 749 (12.9%)
├── patch: 726 (12.5%)
├── reasoning: 369 (6.4%)
└── file: 6 (0.1%)
Tool Call Statuses:
├── completed: 1,229 (95.3%)
├── error: 53 (4.1%)
├── running: 6 (0.5%)
└── pending: 2 (0.2%)
Key Insight: 95.3% tool success rate indicates high reliability.
Session Timeline
Agent Tracking
Tool Usage
Performance Metrics
File Changes
Error Tracking
Context & Configuration
import fs from 'fs';
import path from 'path';
// Base path
const projectName = 'Users-username-Documents-GitHub-project';
const basePath = path.join(
process.env.HOME!,
'.local/share/opencode/project',
projectName,
'storage/session'
);
// Read session info
function getSessionInfo(sessionID: string) {
const infoPath = path.join(basePath, 'info', `${sessionID}.json`);
return JSON.parse(fs.readFileSync(infoPath, 'utf-8'));
}
// Read all messages for a session
function getSessionMessages(sessionID: string) {
const messagePath = path.join(basePath, 'message', sessionID);
const messageFiles = fs.readdirSync(messagePath);
return messageFiles.map(file => {
const filePath = path.join(messagePath, file);
return JSON.parse(fs.readFileSync(filePath, 'utf-8'));
}).sort((a, b) => a.time.created - b.time.created);
}
// Read all parts for a message
function getMessageParts(sessionID: string, messageID: string) {
const partPath = path.join(basePath, 'part', sessionID, messageID);
const partFiles = fs.readdirSync(partPath);
return partFiles.map(file => {
const filePath = path.join(partPath, file);
return JSON.parse(fs.readFileSync(filePath, 'utf-8'));
}).sort((a, b) =>
// Tool parts keep their timing under state.time; text parts use a top-level time
(a.time?.start ?? a.state?.time?.start ?? 0) -
(b.time?.start ?? b.state?.time?.start ?? 0));
}
interface TimelineEvent {
timestamp: number;
type: 'user_message' | 'assistant_message' | 'tool_call' | 'patch';
agent?: string;
data: any;
}
function buildSessionTimeline(sessionID: string): TimelineEvent[] {
const timeline: TimelineEvent[] = [];
const messages = getSessionMessages(sessionID);
for (const message of messages) {
// Add message event
timeline.push({
timestamp: message.time.created,
type: message.role === 'user' ? 'user_message' : 'assistant_message',
agent: message.mode,
data: message
});
// Add parts (tools, patches, etc.)
const parts = getMessageParts(sessionID, message.id);
for (const part of parts) {
if (part.type === 'tool') {
timeline.push({
timestamp: part.state.time.start,
type: 'tool_call',
agent: message.mode,
data: part
});
} else if (part.type === 'patch') {
timeline.push({
timestamp: message.time.created,
type: 'patch',
agent: message.mode,
data: part
});
}
}
}
return timeline.sort((a, b) => a.timestamp - b.timestamp);
}
function analyzeToolUsage(sessionID: string) {
const messages = getSessionMessages(sessionID);
const toolStats: Record<string, {
count: number;
success: number;
error: number;
avgDuration: number;
}> = {};
for (const message of messages) {
const parts = getMessageParts(sessionID, message.id);
for (const part of parts) {
if (part.type === 'tool') {
const tool = part.tool;
if (!toolStats[tool]) {
toolStats[tool] = { count: 0, success: 0, error: 0, avgDuration: 0 };
}
toolStats[tool].count++;
if (part.state.status === 'completed') {
toolStats[tool].success++;
} else if (part.state.status === 'error') {
toolStats[tool].error++;
}
// Running/pending calls may not have an end time yet
if (part.state.time?.end !== undefined) {
toolStats[tool].avgDuration += part.state.time.end - part.state.time.start;
}
}
}
}
// Calculate averages
for (const tool in toolStats) {
toolStats[tool].avgDuration /= toolStats[tool].count;
}
return toolStats;
}
function findContextFileReads(sessionID: string) {
const messages = getSessionMessages(sessionID);
const contextReads: Array<{
timestamp: number;
agent: string;
file: string;
}> = [];
for (const message of messages) {
const parts = getMessageParts(sessionID, message.id);
for (const part of parts) {
if (part.type === 'tool' && part.tool === 'read') {
const filePath = part.state.input.filePath || '';
// Check if it's a context file
if (filePath.includes('.opencode/context/')) {
contextReads.push({
timestamp: part.state.time.start,
agent: message.mode || 'unknown',
file: filePath
});
}
}
}
}
return contextReads;
}
Check whether the assistant requested approval before calling execution tools:
function checkApprovalGate(sessionID: string, messageID: string): boolean {
const message = JSON.parse(
fs.readFileSync(
path.join(basePath, 'message', sessionID, `${messageID}.json`),
'utf-8'
)
);
const parts = getMessageParts(sessionID, messageID);
// Check if any execution tools were called
const executionTools = ['bash', 'write', 'edit', 'task'];
const hasExecutionTool = parts.some(
p => p.type === 'tool' && executionTools.includes(p.tool)
);
if (!hasExecutionTool) return true; // No execution, no approval needed
// Check if text contains approval language
const textParts = parts.filter(p => p.type === 'text');
const approvalKeywords = [
'approval', 'approve', 'proceed', 'confirm',
'permission', 'before proceeding'
];
return textParts.some(part =>
approvalKeywords.some(keyword =>
part.text.toLowerCase().includes(keyword)
)
);
}
Verify context files were loaded before execution:
function checkContextLoading(sessionID: string): {
compliant: boolean;
details: string;
} {
const timeline = buildSessionTimeline(sessionID);
const contextReads = timeline.filter(
e => e.type === 'tool_call' &&
e.data.tool === 'read' &&
e.data.state.input.filePath?.includes('.opencode/context/')
);
const executionTools = timeline.filter(
e => e.type === 'tool_call' &&
['write', 'edit', 'bash', 'task'].includes(e.data.tool)
);
if (executionTools.length === 0) {
return { compliant: true, details: 'No execution tools used' };
}
if (contextReads.length === 0) {
return {
compliant: false,
details: 'Execution tools used without loading context'
};
}
// Check if context was loaded BEFORE execution
const firstExecution = executionTools[0].timestamp;
const lastContextRead = contextReads[contextReads.length - 1].timestamp;
if (lastContextRead < firstExecution) {
return {
compliant: true,
details: 'Context loaded before execution'
};
}
return {
compliant: false,
details: 'Context loaded after execution started'
};
}
Compare performance across agents:
function analyzeAgentPerformance(sessionID: string) {
const messages = getSessionMessages(sessionID);
const agentStats: Record<string, {
messageCount: number;
totalTokens: number;
totalCost: number;
avgDuration: number;
toolCalls: number;
}> = {};
for (const message of messages) {
if (message.role !== 'assistant' || !message.mode) continue;
const agent = message.mode;
if (!agentStats[agent]) {
agentStats[agent] = {
messageCount: 0,
totalTokens: 0,
totalCost: 0,
avgDuration: 0,
toolCalls: 0
};
}
agentStats[agent].messageCount++;
agentStats[agent].totalTokens +=
(message.tokens?.input || 0) + (message.tokens?.output || 0);
agentStats[agent].totalCost += message.cost || 0;
if (message.time.completed) {
const duration = message.time.completed - message.time.created;
agentStats[agent].avgDuration += duration;
}
const parts = getMessageParts(sessionID, message.id);
agentStats[agent].toolCalls += parts.filter(p => p.type === 'tool').length;
}
// Calculate averages
for (const agent in agentStats) {
agentStats[agent].avgDuration /= agentStats[agent].messageCount;
}
return agentStats;
}
Compare performance metrics across different models:
function compareModelPerformance(sessionID: string) {
const messages = getSessionMessages(sessionID);
const modelStats: Record<string, {
messageCount: number;
totalTokens: number;
inputTokens: number;
outputTokens: number;
cacheReadTokens: number;
cacheWriteTokens: number;
totalCost: number;
avgDuration: number;
errorCount: number;
successRate: number;
}> = {};
for (const message of messages) {
if (message.role !== 'assistant' || !message.modelID) continue;
const model = message.modelID;
if (!modelStats[model]) {
modelStats[model] = {
messageCount: 0,
totalTokens: 0,
inputTokens: 0,
outputTokens: 0,
cacheReadTokens: 0,
cacheWriteTokens: 0,
totalCost: 0,
avgDuration: 0,
errorCount: 0,
successRate: 0
};
}
const stats = modelStats[model];
stats.messageCount++;
if (message.tokens) {
stats.inputTokens += message.tokens.input || 0;
stats.outputTokens += message.tokens.output || 0;
stats.cacheReadTokens += message.tokens.cache?.read || 0;
stats.cacheWriteTokens += message.tokens.cache?.write || 0;
stats.totalTokens +=
(message.tokens.input || 0) +
(message.tokens.output || 0);
}
stats.totalCost += message.cost || 0;
if (message.time.completed) {
const duration = message.time.completed - message.time.created;
stats.avgDuration += duration;
}
if (message.error) {
stats.errorCount++;
}
}
// Calculate averages and success rates
for (const model in modelStats) {
const stats = modelStats[model];
stats.avgDuration /= stats.messageCount;
stats.successRate =
((stats.messageCount - stats.errorCount) / stats.messageCount) * 100;
}
return modelStats;
}
Example Output:
{
"claude-sonnet-4-20250514": {
messageCount: 117,
totalTokens: 1250000,
inputTokens: 850000,
outputTokens: 400000,
cacheReadTokens: 1093716,
cacheWriteTokens: 43992,
totalCost: 12.50,
avgDuration: 7205,
errorCount: 2,
successRate: 98.3
},
"gemini-2.5-flash": {
messageCount: 5,
totalTokens: 45000,
inputTokens: 30000,
outputTokens: 15000,
cacheReadTokens: 0,
cacheWriteTokens: 0,
totalCost: 0.15,
avgDuration: 3200,
errorCount: 0,
successRate: 100
}
}
Track spending across models and providers:
function analyzeCostByModel(sessionID: string) {
const messages = getSessionMessages(sessionID);
const costByModel: Record<string, number> = {};
const costByProvider: Record<string, number> = {};
const modelProviderMap: Record<string, string> = {};
for (const message of messages) {
if (message.role !== 'assistant') continue;
if (message.modelID && message.cost) {
costByModel[message.modelID] =
(costByModel[message.modelID] || 0) + message.cost;
if (message.providerID) {
costByProvider[message.providerID] =
(costByProvider[message.providerID] || 0) + message.cost;
modelProviderMap[message.modelID] = message.providerID;
}
}
}
return {
byModel: costByModel,
byProvider: costByProvider,
modelProviderMap,
totalCost: Object.values(costByModel).reduce((a, b) => a + b, 0)
};
}
Example Output:
{
byModel: {
"claude-sonnet-4-20250514": 12.50,
"gemini-2.5-flash": 0.15,
"sonic": 0.00
},
byProvider: {
"anthropic": 12.50,
"google": 0.15,
"opencode": 0.00
},
modelProviderMap: {
"claude-sonnet-4-20250514": "anthropic",
"gemini-2.5-flash": "google",
"sonic": "opencode"
},
totalCost: 12.65
}
Analyze which models work best with which agents:
function analyzeModelAgentPairs(sessionID: string) {
const messages = getSessionMessages(sessionID);
interface PairStats {
model: string;
agent: string;
provider: string;
count: number;
avgTokens: number;
avgCost: number;
avgDuration: number;
successRate: number;
}
const pairs: Record<string, PairStats> = {};
for (const message of messages) {
if (message.role !== 'assistant' || !message.modelID || !message.mode) {
continue;
}
const key = `${message.modelID}:${message.mode}`;
if (!pairs[key]) {
pairs[key] = {
model: message.modelID,
agent: message.mode,
provider: message.providerID || 'unknown',
count: 0,
avgTokens: 0,
avgCost: 0,
avgDuration: 0,
successRate: 0
};
}
const pair = pairs[key];
pair.count++;
if (message.tokens) {
pair.avgTokens +=
(message.tokens.input || 0) + (message.tokens.output || 0);
}
pair.avgCost += message.cost || 0;
if (message.time.completed) {
pair.avgDuration += message.time.completed - message.time.created;
}
if (!message.error) {
pair.successRate++;
}
}
// Calculate averages
for (const key in pairs) {
const pair = pairs[key];
pair.avgTokens /= pair.count;
pair.avgCost /= pair.count;
pair.avgDuration /= pair.count;
pair.successRate = (pair.successRate / pair.count) * 100;
}
return Object.values(pairs).sort((a, b) => b.count - a.count);
}
Example Output:
[
{
model: "claude-sonnet-4-20250514",
agent: "build",
provider: "anthropic",
count: 45,
avgTokens: 12500,
avgCost: 0.125,
avgDuration: 6800,
successRate: 97.8
},
{
model: "claude-sonnet-4-20250514",
agent: "codebase-agent",
provider: "anthropic",
count: 38,
avgTokens: 15200,
avgCost: 0.152,
avgDuration: 8200,
successRate: 100
},
{
model: "sonic",
agent: "build",
provider: "opencode",
count: 12,
avgTokens: 3500,
avgCost: 0.00,
avgDuration: 2100,
successRate: 100
}
]
Compare error rates and reliability across providers:
function analyzeProviderReliability(sessionID: string) {
const messages = getSessionMessages(sessionID);
interface ProviderStats {
provider: string;
totalMessages: number;
successfulMessages: number;
errorMessages: number;
errorRate: number;
avgResponseTime: number;
modelsUsed: string[];
}
const providerStats: Record<string, ProviderStats> = {};
for (const message of messages) {
if (message.role !== 'assistant' || !message.providerID) continue;
const provider = message.providerID;
if (!providerStats[provider]) {
providerStats[provider] = {
provider,
totalMessages: 0,
successfulMessages: 0,
errorMessages: 0,
errorRate: 0,
avgResponseTime: 0,
modelsUsed: []
};
}
const stats = providerStats[provider];
stats.totalMessages++;
if (message.error) {
stats.errorMessages++;
} else {
stats.successfulMessages++;
}
if (message.time.completed) {
stats.avgResponseTime += message.time.completed - message.time.created;
}
if (message.modelID && !stats.modelsUsed.includes(message.modelID)) {
stats.modelsUsed.push(message.modelID);
}
}
// Calculate rates and averages
for (const provider in providerStats) {
const stats = providerStats[provider];
stats.errorRate = (stats.errorMessages / stats.totalMessages) * 100;
stats.avgResponseTime /= stats.totalMessages;
}
return Object.values(providerStats).sort((a, b) =>
b.totalMessages - a.totalMessages
);
}
Example Output:
[
{
provider: "anthropic",
totalMessages: 125,
successfulMessages: 123,
errorMessages: 2,
errorRate: 1.6,
avgResponseTime: 7205,
modelsUsed: ["claude-sonnet-4-20250514", "claude-3-5-sonnet-20241022"]
},
{
provider: "opencode",
totalMessages: 14,
successfulMessages: 14,
errorMessages: 0,
errorRate: 0,
avgResponseTime: 2100,
modelsUsed: ["sonic"]
},
{
provider: "google",
totalMessages: 5,
successfulMessages: 5,
errorMessages: 0,
errorRate: 0,
avgResponseTime: 3200,
modelsUsed: ["gemini-2.5-flash"]
}
]
Analyze prompt caching effectiveness (Anthropic models):
function analyzeCacheEfficiency(sessionID: string) {
const messages = getSessionMessages(sessionID);
interface CacheStats {
model: string;
totalMessages: number;
cacheReadTokens: number;
cacheWriteTokens: number;
inputTokens: number;
cacheHitRate: number;
costSavings: number;
}
const cacheStats: Record<string, CacheStats> = {};
for (const message of messages) {
if (message.role !== 'assistant' || !message.modelID || !message.tokens) {
continue;
}
const model = message.modelID;
if (!cacheStats[model]) {
cacheStats[model] = {
model,
totalMessages: 0,
cacheReadTokens: 0,
cacheWriteTokens: 0,
inputTokens: 0,
cacheHitRate: 0,
costSavings: 0
};
}
const stats = cacheStats[model];
stats.totalMessages++;
stats.cacheReadTokens += message.tokens.cache?.read || 0;
stats.cacheWriteTokens += message.tokens.cache?.write || 0;
stats.inputTokens += message.tokens.input || 0;
}
// Calculate cache hit rate and savings
for (const model in cacheStats) {
const stats = cacheStats[model];
const totalCacheableTokens = stats.inputTokens + stats.cacheReadTokens;
if (totalCacheableTokens > 0) {
stats.cacheHitRate =
(stats.cacheReadTokens / totalCacheableTokens) * 100;
}
// Estimate cost savings (cache reads are typically 90% cheaper)
// Assuming $3/M input tokens, cache reads are $0.30/M
const fullCost = (stats.cacheReadTokens / 1000000) * 3.0;
const cacheCost = (stats.cacheReadTokens / 1000000) * 0.3;
stats.costSavings = fullCost - cacheCost;
}
return Object.values(cacheStats).sort((a, b) =>
b.cacheReadTokens - a.cacheReadTokens
);
}
Example Output:
[
{
model: "claude-sonnet-4-20250514",
totalMessages: 117,
cacheReadTokens: 1093716,
cacheWriteTokens: 43992,
inputTokens: 936,
cacheHitRate: 99.9,
costSavings: 2.95 // $2.95 saved via caching
}
]
Key Insight: Claude Sonnet 4 shows 99.9% cache hit rate, saving ~$2.95 in API costs through prompt caching.
Always start by listing available sessions:
function listSessions() {
const infoPath = path.join(basePath, 'info');
return fs.readdirSync(infoPath)
.filter(f => f.endsWith('.json'))
.map(f => {
const sessionID = f.replace('.json', '');
const info = getSessionInfo(sessionID);
return {
id: sessionID,
title: info.title,
created: new Date(info.time.created),
version: info.version
};
})
.sort((a, b) => b.created.getTime() - a.created.getTime());
}
Always handle missing files gracefully:
function safeReadSession(sessionID: string) {
try {
return getSessionInfo(sessionID);
} catch (error) {
console.error(`Session ${sessionID} not found`);
return null;
}
}
For large sessions, stream parts with a generator instead of loading them all into memory:
function* streamMessageParts(sessionID: string, messageID: string) {
const partPath = path.join(basePath, 'part', sessionID, messageID);
const partFiles = fs.readdirSync(partPath);
for (const file of partFiles) {
const filePath = path.join(partPath, file);
yield JSON.parse(fs.readFileSync(filePath, 'utf-8'));
}
}
Cache frequently accessed data:
const sessionCache = new Map<string, any>();
function getCachedSession(sessionID: string) {
if (!sessionCache.has(sessionID)) {
sessionCache.set(sessionID, {
info: getSessionInfo(sessionID),
messages: getSessionMessages(sessionID)
});
}
return sessionCache.get(sessionID);
}
The Agent Validator Plugin (.opencode/plugin/agent-validator.ts) provides real-time tracking that complements the session storage:
Session Storage (Native):
Validator Plugin (Custom):
Best Practice: Use both together:
Once local evaluation is stable, consider adding OpenTelemetry:
Cause: Project name encoding in path
Solution:
// Project names are encoded with dashes replacing slashes
const projectPath = '/Users/user/Documents/GitHub/project';
const encodedName = projectPath.replace(/\//g, '-').substring(1);
// Result: "Users-user-Documents-GitHub-project"
Cause: Message may have no content (e.g., aborted)
Solution:
function getMessageParts(sessionID: string, messageID: string) {
const partPath = path.join(basePath, 'part', sessionID, messageID);
if (!fs.existsSync(partPath)) {
return []; // No parts directory
}
const partFiles = fs.readdirSync(partPath);
return partFiles.map(file => {
const filePath = path.join(partPath, file);
return JSON.parse(fs.readFileSync(filePath, 'utf-8'));
});
}
Cause: Tool may still be running or was aborted
Solution: Check state.status field:
if (part.state.status === 'running' || part.state.status === 'pending') {
console.log('Tool call incomplete');
}
Based on production data analysis, here are recommendations for model selection:
Complex Reasoning & Multi-Step Tasks:
- claude-sonnet-4-20250514 (anthropic)

Quick Validation & Build Tasks:
- sonic (opencode)

Cost-Sensitive Operations:
- gemini-2.5-flash (google)

Legacy/Fallback:
- claude-3-5-sonnet-20241022 (anthropic)

Build Agent:
- sonic (fast, free)
- claude-sonnet-4-20250514 (complex builds)

Codebase Agent:
- claude-sonnet-4-20250514 (complex analysis)
- sonic (quick scans)

Task Manager:
- gemini-2.5-flash (cost-effective planning)
- claude-sonnet-4-20250514 (complex workflows)

General/Core:
- claude-sonnet-4-20250514 (versatile)
- gemini-2.5-flash (simple tasks)

Use Prompt Caching (Anthropic models)

Route by Complexity:
- Simple tasks: sonic or gemini-2.5-flash
- Complex tasks: claude-sonnet-4-20250514
- Quick checks: sonic

Monitor Error Rates
Batch Similar Tasks
| Metric | Claude Sonnet 4 | Sonic | Gemini 2.5 Flash |
|---|---|---|---|
| Avg Response Time | 7.2s | 2.1s | 3.2s |
| Success Rate | 98.3% | 100% | 100% |
| Avg Tokens/Message | 10,684 | 3,500 | 9,000 |
| Cache Hit Rate | 99.9% | N/A | N/A |
| Cost/Message | $0.107 | $0.00 | $0.030 |
| Best For | Complex tasks | Quick checks | Cost-sensitive |
Recommended Approach:
- sonic for initial validation
- gemini-2.5-flash for simple changes
- claude-sonnet-4-20250514 for complex reasoning

Example Workflow:
function selectModel(taskComplexity: 'simple' | 'medium' | 'complex') {
switch (taskComplexity) {
case 'simple':
return 'sonic'; // Free, fast
case 'medium':
return 'gemini-2.5-flash'; // Low cost, good quality
case 'complex':
return 'claude-sonnet-4-20250514'; // Best reasoning
}
}
Track these metrics to optimize model selection:
OpenCode's session storage provides comprehensive logging with:
✅ Complete session history - Every message, tool call, and interaction
✅ Agent tracking - Know which agent handled each message via mode field
✅ Model tracking - Track which AI model and provider handled each message
✅ Performance metrics - Tokens, cost, timing, cache efficiency for optimization
✅ Error tracking - Debug failures with full context and model-specific patterns
✅ File change tracking - Git commits and patches
✅ Structured format - Easy to parse and analyze
✅ Multi-model support - Compare performance across 5+ models and 4+ providers
This foundation enables building robust evaluation frameworks, quality monitoring, cost optimization, and agent/model performance analysis systems.
Agent Evaluation:
Model Optimization:
Cost Management:
Quality Assurance:
Last Updated: 2025-11-21
OpenCode Version: 0.5.1+