
Turning Data into Decisions: Building a Continuous Improvement Cycle


Educator preparation programs collect large amounts of data throughout the candidate journey. Observations during clinical practice, results from the EPiC™ Key Assessment, course assignments, and candidate reflections all provide insight into how future teachers are developing. The challenge is not gathering the data. The challenge is turning that information into meaningful program decisions.



Too often, assessment data is reviewed once for reporting purposes and then set aside until the next accreditation cycle. Faculty may discuss trends informally, but the connection between candidate performance and program changes can remain unclear. A sustainable continuous improvement cycle requires a more intentional process that moves data from collection to conversation to action.


When programs structure that process carefully, candidate performance data becomes one of the most powerful tools for strengthening teacher preparation. With the EPiC Key Assessment and the EPiC Support Dashboard, EPPs can examine candidate performance using Evidence-First™ indicators that identify specific instructional practices rather than relying on broad rubric interpretations. This approach helps programs see patterns in teaching practice more clearly and use that information to guide program improvement.


From Data to Dialogue

Continuous improvement begins when data becomes a regular part of program conversations. Instead of reviewing results only during accreditation preparation, successful programs build structured opportunities for faculty and program leaders to examine candidate performance together.


A data-to-dialogue approach helps guide those discussions. Faculty teams review patterns in candidate assessment results, observation evidence, and instructional artifacts to identify areas of strength and areas that may require additional support. These discussions focus not only on individual candidate performance, but also on what the data suggests about program design.


For example, faculty might examine whether candidates consistently demonstrate strong instructional planning but struggle with questioning strategies that promote deeper thinking. Evidence-First observation markers make these patterns easier to identify because reviewers are documenting specific observable practices, such as the types of questions asked or the level of thinking students are asked to demonstrate.


This type of analysis shifts the conversation from individual performance to program-level insight.
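

For programs that can export their observation results, even a short analysis makes these cohort-level patterns visible. The Python sketch below is a minimal illustration, assuming indicator-level scores in a simple table; the column names and the 3.0 benchmark are invented for the example, not features of the EPiC platform:

    import pandas as pd

    # Illustrative observation export: one row per candidate per indicator.
    observations = pd.DataFrame({
        "candidate_id": ["C01", "C01", "C02", "C02", "C03", "C03"],
        "indicator": ["instructional_planning", "higher_order_questioning"] * 3,
        "score": [3.6, 2.1, 3.4, 2.4, 3.8, 1.9],
    })

    # Average each indicator across the cohort to move from individual
    # performance to program-level insight.
    cohort_means = observations.groupby("indicator")["score"].mean()

    # Flag indicators that fall below an illustrative program benchmark.
    BENCHMARK = 3.0
    gaps = cohort_means[cohort_means < BENCHMARK]

    print("Cohort averages by indicator:")
    print(cohort_means)
    print("Indicators needing program attention:")
    print(gaps)

In this invented cohort, planning scores sit comfortably above the benchmark while questioning scores fall below it, which is exactly the kind of pattern that would prompt a program-level conversation.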



Connecting Candidate Performance to Program Decisions

Data becomes most valuable when it informs concrete program actions. Once patterns emerge, educator preparation programs can examine how coursework, clinical experiences, mentoring structures, or professional learning opportunities may be influencing those results.


For example, if observation evidence shows that candidates rely heavily on low-level questioning strategies, faculty may revisit how questioning techniques are taught within methods courses. Programs may add modeling, practice opportunities, or targeted coaching within clinical placements to strengthen that skill.


Because Evidence-First scoring identifies specific instructional behaviors, faculty can more easily connect candidate performance to targeted program adjustments.


By linking candidate performance data to specific program decisions, EPPs ensure that improvement efforts remain focused on the areas that matter most for future classroom practice.


Using the EPiC Support Dashboard to Connect the Data

One of the biggest challenges for educator preparation programs is connecting multiple sources of candidate data to tell a clear story about candidate growth. The EPiC Support Dashboard helps bring these pieces together so faculty can see how different aspects of preparation influence classroom practice.


Candidate observation data allows programs to examine how candidates implement instructional strategies during clinical experiences. When these observations are reviewed alongside lesson planning evidence, faculty can better understand the relationship between how candidates design instruction and what actually occurs in the classroom. The Lesson Plan Review tool provides a structured way to analyze planning artifacts across cohorts, helping programs determine whether instructional expectations introduced in coursework are being translated into classroom practice.
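

As a rough illustration of that planning-to-practice comparison, the sketch below joins two hypothetical exports on a shared candidate ID. The column names and scores are invented for the example and are not drawn from the Lesson Plan Review tool itself:

    import pandas as pd

    # Hypothetical exports keyed by a shared candidate_id.
    lesson_plans = pd.DataFrame({
        "candidate_id": ["C01", "C02", "C03"],
        "planned_questioning": [3.5, 3.2, 3.6],   # rigor of planned questions
    })
    observations = pd.DataFrame({
        "candidate_id": ["C01", "C02", "C03"],
        "observed_questioning": [2.1, 2.4, 1.9],  # rigor reviewers documented
    })

    # Join planning evidence to observation evidence per candidate.
    merged = lesson_plans.merge(observations, on="candidate_id")

    # A consistently negative gap suggests expectations are planned in
    # coursework but not yet enacted in the classroom.
    merged["plan_to_practice_gap"] = (
        merged["observed_questioning"] - merged["planned_questioning"]
    )
    print(merged)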


Programs can also use EPiC Practice Modules and PD Bytes to respond directly to trends that appear in the data. If observation results show that candidates need additional support with questioning strategies, classroom management, or differentiation, targeted professional learning can be assigned to strengthen those skills. Because these modules are connected to the same instructional practices evaluated in observations, professional learning becomes a natural extension of the improvement process.
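

A program could formalize that response with something as simple as a lookup from flagged indicators to learning resources. In the sketch below, the module titles are placeholders rather than actual EPiC Practice Module or PD Byte names:

    # Placeholder mapping from observed needs to professional learning;
    # the module titles here are hypothetical, not actual EPiC identifiers.
    MODULE_FOR_NEED = {
        "higher_order_questioning": "Practice Module: Questioning Strategies",
        "classroom_management": "PD Byte: Routines and Transitions",
        "differentiation": "Practice Module: Differentiating Instruction",
    }

    def assign_learning(flagged_needs: list[str]) -> list[str]:
        """Map each flagged indicator to a targeted learning resource."""
        return [MODULE_FOR_NEED[need] for need in flagged_needs
                if need in MODULE_FOR_NEED]

    print(assign_learning(["higher_order_questioning", "differentiation"]))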


Additional tools within the Dashboard provide broader program insights. The LoTi Digital Age Survey helps programs understand how candidates perceive their preparation and how they integrate technology into instruction. Professional Dispositions tracking allows programs to monitor key professional behaviors, such as collaboration, responsibility, and ethical practice, throughout the preparation experience.


When these tools are viewed together, educator preparation programs gain a more complete picture of candidate development. Faculty can examine how candidates plan instruction, how they enact those plans in the classroom, how they reflect on their practice, and how they grow through targeted professional learning.



Capturing Closing-the-Loop Evidence

Accreditation organizations consistently ask programs to demonstrate how assessment data leads to improvement. Agencies such as CAEP, GaPSC, NASDTEC, and AAQEP expect programs to show not only what data they collect, but how that information informs program decisions and ultimately improves candidate outcomes.


A well-structured improvement cycle makes this process much easier. When programs document how data informed faculty discussions, what changes were implemented, and how those changes influenced candidate performance, they create clear closing-the-loop evidence.
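

Keeping that documentation in a consistent structure makes the evidence easier to assemble later. One minimal approach, sketched here with assumed field names rather than any format prescribed by CAEP or AAQEP, records each cycle as the data reviewed, the discussion held, the action taken, and the result observed:

    from dataclasses import dataclass
    from datetime import date

    # A minimal record for closing-the-loop evidence; field names are
    # assumptions for illustration, not a prescribed accreditation format.
    @dataclass
    class ImprovementCycleRecord:
        data_source: str        # e.g., "Spring clinical observation cycle"
        finding: str            # the pattern faculty identified in the data
        discussion_date: date   # when faculty reviewed the evidence
        action_taken: str       # the program change that followed
        follow_up_result: str = ""  # candidate outcomes after the change

    record = ImprovementCycleRecord(
        data_source="Clinical observation indicators, spring cohort",
        finding="Candidates rely heavily on low-level questioning",
        discussion_date=date(2025, 5, 12),
        action_taken="Added questioning models and rehearsal to methods courses",
    )
    record.follow_up_result = "Fall observations show stronger questioning scores"
    print(record)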


Instead of assembling evidence retroactively for accreditation reviews, programs can demonstrate a living system of continuous improvement that is already embedded in their regular practices.


Sustaining an Evidence-First Improvement Culture

Continuous improvement is not a single initiative or annual review process. It is a culture that develops when programs consistently use evidence to guide decisions.


When candidate assessment results, observation evidence, and program discussions are intentionally connected, educator preparation programs gain a clearer understanding of how their design influences classroom practice. Faculty conversations become more focused, program adjustments become more strategic, and improvements in teaching practice become visible over time.


In the end, the goal is simple. Data should not sit quietly in reports or spreadsheets. It should actively inform the decisions that shape how future educators are prepared for the classroom.

