Purpose: Quality assurance and scoring for LLM outputs.
Last Updated: 2026-01-09
Evaluations in Mastra use Scorers to assess the quality, accuracy, and safety of LLM-generated content. They provide a quantitative way to measure performance and detect issues like hallucinations or factual errors.
Scores are persisted to the mastra_scorers table for long-term analysis and reporting.

// Scorer definition
// Note: Scorer is assumed here to be exported from Mastra core; the exact
// export path may vary across Mastra versions.
import { Scorer } from '@mastra/core/scores';

export const hallucinationDetector = new Scorer({
  id: 'hallucination-detector',
  description: 'Detects hallucinations in LLM output',
  execute: async ({ output, context }) => {
    // Logic to detect hallucinations, e.g. checking output claims
    // against the retrieved context.
    return { score: 0.95, rationale: 'No hallucinations found' };
  },
});
// Registration
import { Mastra } from '@mastra/core';

export const mastra = new Mastra({
  scorers: { hallucinationDetector },
});
Reference: src/mastra/scorers/, src/mastra/evaluation/
Related: