Designing Evaluation Criteria That Matter: Tips for Rubric B-1 of the EPiC™ Key Assessment

  • kelly93055
  • Aug 15, 2025
  • 3 min read

Great assessments don’t happen by chance—they’re built with precision and purpose. In Rubric B-1: Designing Evaluation Criteria to Assess Student Learning, part of the EPiC™ Key Assessment Implementation series, teacher candidates are challenged to move beyond generic tests and vague rubrics to create tools that truly measure what matters. The goal is clear: design rigorous, standards-aligned rubrics and assessments that capture higher-order thinking, offer clear performance distinctions, and provide feedback students can actually use to grow. This isn’t just about checking a box—it’s about creating evaluation tools that drive learning forward.


📽️ Watch: Tips for Rubric B-1 – Designing Evaluation Criteria to Assess Student Learning


✓ Align to Standards and Learning Targets

Tip: Identify the exact standards and learning targets the assessment will measure.

Example: Use student-friendly language when sharing learning targets so students know exactly what success looks like.


✓ Choose the Right Assessment Type and Complexity

Tip: Select or design an assessment that aligns with the standards and measures complex thinking (e.g., DOK 2–3, Bloom’s Applying/Analyzing levels).

Example: Assess two to three learning targets within a single task, and favor tasks that require deeper reasoning over multiple-choice questions.


✓ Design a Standards-Based Rubric

Tip: Build a rubric that directly reflects the identified learning targets, ensuring alignment between expectations and measurement.

Example: Include descriptive criteria for multiple performance levels (e.g., Approaching, Proficient, Distinguished) so both teachers and students can clearly distinguish between levels of mastery.


✓ Be Clear and Specific in Descriptors

Tip: Use precise, qualitative descriptors that differentiate proficient performance from both lower and higher levels.

Example: Replace vague phrases like “good detail” with “provides three or more specific examples supported by evidence.” Clarity helps students understand exactly how their work is evaluated.


✓ Avoid Subjective or Vague Criteria

Tip: Keep rubric criteria focused on measurable, observable evidence of learning rather than personal impressions.

Example: Replace subjective terms like “creativity” or “effort” with specific, assessable indicators tied to the learning target—such as “includes three or more examples supported by historical evidence” or “uses accurate terminology throughout the explanation.” Clear, objective language ensures that performance levels are evaluated fairly and consistently.


✓ Follow Submission Guidelines

Tip: Include the blank rubric, blank assessment, and answer key in the Part B-1 submission.

Example: If the assessment was created by another source, clearly credit it on the blank copy. Use the provider’s rubric template if one is required.


✓ Collect Strong Evidence

  • Show clear alignment between standards, learning targets, and rubric criteria.

  • Demonstrate that the assessment measures higher-order thinking, not just recall.

  • Provide rubrics that are both rigorous and student-friendly.

  • Ensure all materials are complete, clearly labeled, and professionally formatted.


Example in Practice

Standard: Analyze the causes and effects of the American Revolution.

Detail: Students examine key events, figures, and documents—including the Declaration of Independence and major battles—to assess their political, economic, and social impacts on the United States.

Learning Target: “I can explain why the American Revolution happened by describing important events and people, like the Declaration of Independence and key battles, and show how these events affected the United States.”


By aligning rubrics to learning targets, using clear and descriptive criteria, avoiding subjective measures, and ensuring assessments measure complex thinking, teacher candidates can create evaluation tools that provide meaningful feedback and promote student growth. These practices not only strengthen assessment design but also ensure that every evaluation leads to measurable student success.

