Overview

QA scorecards define the criteria that Reddy’s AI uses to automatically evaluate and grade calls. Each QA item consists of a title, type, requirement description, point value, and status indicator. These elements work together to power Reddy’s automatic grading and agent feedback.
Need step-by-step setup instructions? See the QA Setup Guide for detailed configuration steps.

Understanding QA Item Elements

Pass/Fail Items

Best for: Binary requirements where agents either meet the standard or don’t.
Examples:
  • “Agent provided account verification”
  • “Agent used proper greeting”
  • “Agent followed closing script”
Scoring: Simple pass or fail determination.

Graded Items

Best for: Requirements where performance can vary in quality.
Examples:
  • “Agent demonstrated empathy” (1-5 scale)
  • “Agent provided accurate information” (1-10 scale)
  • “Agent resolved customer issue effectively” (percentage-based)
Scoring: Numerical grade based on performance quality.
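
Taken together, these elements can be thought of as a small data structure. The sketch below is purely illustrative: Reddy does not expose a public data model, so the type names and fields (QAItem, pointValue, scale) are assumptions used to make the elements concrete.

```typescript
// Hypothetical sketch of a QA item's elements. Reddy's actual data model
// is not public, so every name here is an illustrative assumption.
type QAItemType = "pass_fail" | "graded";

interface QAItem {
  title: string;              // e.g. "Agent used proper greeting"
  type: QAItemType;           // binary vs. quality-scaled evaluation
  requirement: string;        // description the AI grades against
  pointValue: number;         // weight of this item in the scorecard total
  active: boolean;            // status indicator: is this item being graded?
  scale?: { min: number; max: number }; // graded items only, e.g. 1-5
}

const greeting: QAItem = {
  title: "Agent used proper greeting",
  type: "pass_fail",
  requirement: "Agent opens with the approved greeting script",
  pointValue: 5,
  active: true,
};

const empathy: QAItem = {
  title: "Agent demonstrated empathy",
  type: "graded",
  requirement: "Agent acknowledges the customer's concerns and frustration",
  pointValue: 10,
  active: true,
  scale: { min: 1, max: 5 },
};
```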

Organizing Your Scorecard

Group related QA items into sections that follow your call flow. This makes scorecards easier to navigate and ensures evaluations track the natural progression of customer interactions. Example:
  • Introduction (section) contains: Greeting, Agent identification, Confirm call purpose
  • Closing (section) contains: Confirm next steps, Thank customer
Use drag-and-drop to reorder items within and across sections.
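
Continuing the illustrative sketch above, sections can be modeled as ordered groups of items that mirror the call flow. Again, ScorecardSection and its fields are hypothetical names, not Reddy’s actual configuration format.

```typescript
// Continuing the illustrative model: sections group related QA items
// in the order they occur during a call.
interface ScorecardSection {
  name: string;     // e.g. "Introduction"
  items: QAItem[];  // ordered to match the call flow
}

const scorecard: ScorecardSection[] = [
  {
    name: "Introduction",
    items: [greeting /* , agentIdentification, confirmCallPurpose */],
  },
  {
    name: "Closing",
    items: [/* confirmNextSteps, thankCustomer */],
  },
];
```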

Fine-Tuning Your Scorecard

After configuring your QA scorecard, test it against real customer calls to ensure Reddy’s grading accuracy matches your quality standards. Start with 20 manually scored calls to establish a solid benchmark dataset. Focus on calls representing typical scenarios.
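One practical way to run this comparison is to put each call’s manual score next to Reddy’s score and flag any pair that diverges beyond a tolerance you choose. This is a minimal sketch assuming both score sets are exported as per-call totals; the BenchmarkCall shape and the 5-point tolerance are assumptions for illustration, not Reddy features.

```typescript
// Minimal sketch: compare AI scores against a manual benchmark and flag
// calls whose scores diverge beyond a tolerance. The data shape and the
// default threshold are illustrative assumptions.
interface BenchmarkCall {
  callId: string;
  manualScore: number; // your reviewer's total, e.g. 0-100
  aiScore: number;     // Reddy's total for the same call
}

function flagDiscrepancies(
  calls: BenchmarkCall[],
  tolerance = 5
): BenchmarkCall[] {
  return calls.filter((c) => Math.abs(c.manualScore - c.aiScore) > tolerance);
}

const benchmark: BenchmarkCall[] = [
  { callId: "call-001", manualScore: 92, aiScore: 90 },
  { callId: "call-002", manualScore: 75, aiScore: 88 },
];

// call-002 exceeds the 5-point tolerance and should be reviewed
console.log(flagDiscrepancies(benchmark).map((c) => c.callId)); // ["call-002"]
```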

Prerequisites

Before evaluating QA scorecard accuracy, upload real customer calls so you can test your scorecard against actual interactions. Use these calls to verify that Reddy’s AI grades match your quality standards and to identify any scoring logic that needs refinement. See Upload Call Recordings for instructions.

Evaluation Process

Review AI-scored calls and flag discrepancies to refine Reddy’s QA logic. This feedback improves scoring accuracy over time. See the QA Evaluations Guide for details.