Evaluating educational programs: 5 tough questions to ask

By Eric Stickney, Director of Educational Research

Here’s something you probably know all too well: educators are inundated with claims about the myriad educational programs and software solutions on the market. Walking through a vendor exhibit hall at ISTE, ILA, or any other education-focused conference, you get the message that any classroom challenge can be solved by one of these products.

Now more than ever, educators need to exercise caution when evaluating new instructional, assessment, or data-management solutions. At Renaissance Learning, we have long maintained a robust and ambitious research and development agenda. To ensure we stay focused on our mission of accelerating learning for all students while keeping teachers at the heart of our solutions, we regularly assess how we measure up against the following five tough evaluation questions. Below we answer these questions for our Accelerated Reader 360 practice solution, in order from least to most rigorous:

5.) Is foundational research available for the solution that is consistent with evidence-based practice?

Even if a program represents a new approach to teaching and/or learning, it should still be consistent with what has been established in the research literature as good practice.

For Accelerated Reader 360, a foundational research paper describes the program’s history in detail, from its inception to the powerful impact it has in (and out of) the classroom today. Beyond the foundational evidence supporting the program’s tenets (e.g., personalized practice, feedback, and goal setting, among others), this paper summarizes many of the 151 independently led studies on Accelerated Reader 360.

4.) Do educators think the solution works?

Many edtech companies describe educator perceptions of their products through success stories, case studies, testimonials, and the like. Ideally there should be several dozen such studies from across the country in diverse settings: large urban schools, small rural schools, and so forth. That way, teachers can find examples where the program has worked in a setting like theirs.

To date, 42 correlational studies have been conducted on Accelerated Reader 360, in addition to 67 school-based case studies in which educators attribute achievement gains to the solution’s use.

3.) Is the solution reliable and valid?

Programs that include any assessment of students or teachers should be designed to be consistent with assessment standards in all areas, from content development to implementation to reporting. This means evidence of reliability (or consistency of scores) and validity (the extent to which the program measures what it claims to measure) should be available.
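As a toy illustration of the reliability side of this question (not Renaissance’s actual methodology, and not real quiz data), the internal consistency of a set of quiz items can be estimated with Cronbach’s alpha, a standard reliability coefficient:

```python
# Toy sketch: estimating internal-consistency reliability (Cronbach's
# alpha) from a small matrix of quiz item scores. The data below are
# invented for illustration only.

def cronbach_alpha(scores):
    """scores: one list per student, one 0/1 score per quiz item."""
    n_items = len(scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Variance of each item across students
    item_vars = [variance([row[i] for row in scores]) for i in range(n_items)]
    # Variance of each student's total score
    total_var = variance([sum(row) for row in scores])

    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Five hypothetical students, four quiz items (1 = correct, 0 = incorrect)
quiz = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
]
alpha = cronbach_alpha(quiz)
print(f"alpha = {alpha:.2f}")  # -> alpha = 0.75 for this toy data
```

Higher alpha values indicate that the items hang together consistently; published reliability reports typically also cover test–retest and alternate-form evidence.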

Accelerated Reader 360 quizzes have been shown to yield consistent results from administration to administration and measure aspects of reading comprehension as intended. For more information, see the Accelerated Reader 360: Understanding Reliability & Validity report.

2.) Is there rigorous evidence of impact on growth?

Producing real evidence of effectiveness under the most rigorous research conditions takes time and a substantial commitment. Claims of causality (that using Product X improved student learning) require “gold standard” scientific research designs, namely randomized controlled trials, also known as experimental studies. These are favored because they remove selection bias as an alternative explanation for any gains (or lack thereof) among participants. Other compelling study designs include regression discontinuity and high-quality quasi-experimental methods. Using large databases and appropriate statistical techniques and controls, it is also possible to understand to what extent, and under what conditions, students realize good outcomes.
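To make the treatment-versus-control comparison concrete, here is a minimal sketch of a standardized mean difference (Cohen’s d), one common way such studies quantify impact. The scores are made up for illustration; they are not from any study cited in this article:

```python
# Toy sketch: comparing treatment vs. control reading scores with a
# standardized mean difference (Cohen's d). All scores are invented.
import statistics

def cohens_d(treatment, control):
    """Mean difference divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    v1 = statistics.variance(treatment)  # sample variance (n - 1 denominator)
    v2 = statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical end-of-study reading scores
treatment_scores = [78, 85, 90, 74, 88, 82, 91, 79]
control_scores = [72, 80, 76, 70, 84, 75, 83, 71]

d = cohens_d(treatment_scores, control_scores)
print(f"Cohen's d = {d:.2f}")
```

A randomized design matters because it justifies attributing a difference like this to the program rather than to pre-existing differences between the groups.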

In total, 31 rigorous experimental and quasi-experimental studies support Accelerated Reader 360, including these highlights:

  • Shannon et al. (2015) assigned 344 students in grades 1–4 at 3 ethnically diverse Midwestern schools to either treatment or control groups and found a significant positive impact on reading achievement for students using AR.
  • Siddiqui et al. (2016) assigned 349 Year 7 students from 4 UK schools to use AR or serve as a control group. All students previously fell short of national benchmarks. After 22 weeks, students using AR attained higher literacy scores than those not using the program.
  • Nunnery et al. (2006) assigned 978 students in grades 3–6 at 9 urban schools to treatment or control conditions. Students using AR experienced significant positive effects, and the program seemed to benefit students with disabilities in particular.
  • Nunnery and Ross (2007) compared 22 ethnically and socioeconomically diverse schools in Texas where students used AR or served as matched controls over multiple years of implementation. Achievement was significantly higher for AR users (including English learners) as compared to the control group not using this tool.

1.) Has the program undergone systematic, rigorous review?

This could consist of a meta-analysis, if there are enough studies, or a review of the research by an independent organization. It is also important that some of the studies on the product have gone through peer review and appeared in the research literature.

Recent, ongoing reviews have concluded that Accelerated Reader 360 is a “proven program” that boosts student achievement (the Promising Practices Network) and has “strong evidence of effectiveness” (the National Dropout Prevention Center/Network). Moreover, Accelerated Reader has been the subject of 28 peer-reviewed studies, which subject the methods and analyses supporting the program to the highest level of scrutiny.

We know that as educators, your primary goal in selecting an educational program is to ensure it is based on solid science with compelling evidence that it does what it claims to do. Asking these five questions is a great way to begin separating proven, research-based solutions from the rest.

Shannon, L. C., Styers, M. K., Wilkerson, S. B., & Peery, E. (2015). Computer-assisted learning in elementary reading: A randomized control trial. Computers in the Schools, 32(1), 20–34.
Siddiqui, N., Gorard, S., & See, B. H. (2016). Accelerated Reader as a literacy catch-up intervention during primary to secondary school transition phase. Educational Review, 68(2), 139–154.
Nunnery, J. A., Ross, S. M., & McDonald, A. (2006). A randomized experimental evaluation of the impact of Accelerated Reader/Reading Renaissance implementation on reading achievement in grades 3 to 6. Journal of Education for Students Placed at Risk, 11(1), 1–18.
Nunnery, J. A., & Ross, S. M. (2007). The effects of the School Renaissance program on student achievement in reading and mathematics. Research in the Schools, 14(1), 40–59.

Eric Stickney, Director of Educational Research
Eric Stickney works with external independent researchers who conduct evaluations of Renaissance programs. He specializes in analyzing reading and mathematics data collected from millions of students in North America and the UK.
