Driven by teachers—How our products evolve with real user feedback

By Mark Angel, Chief Technology Officer

Lots of companies claim to be customer-focused because it is an easy thing to say. Renaissance is grounded in our mission—to accelerate learning for all. We care about the teachers we serve. We work hard because we want to help students learn. But does that alone make us customer-centric?

[Image: UX Diamond]

In our business—creating great educational software—success hinges on the user experience. When using our tools, teachers need an experience that delivers value, is easy to use, is designed well enough to engage, and is engaging enough to motivate continued use.

Being customer-centric means looking at every touch point as scientifically as possible to create experiences that work. This constant testing for value is called “validation.” If you aren’t validating, you really aren’t customer-centric. Caring isn’t enough. Hard work isn’t enough. To deliver value, you have to always be validating.

Over time, we’ve ramped up both the depth and breadth of the validation we do. How does this help you, the teacher?

Out of nearly 1,000 employees at Renaissance, roughly 80 create new code and content, building out our product development roadmap. Validation helps ensure that these resources are dedicated exclusively to work that has high value for you. Last summer, we conducted a range of surveys and focus groups on a dozen different applications we might build. This scientific exploration of buyer attitudes helped us prioritize where to focus our energy: on building what teachers want and need, instead of wasting your (and our) time.

In some recent validation studies, depicted in the graph below, we found that about 40 percent of assessment decision makers are interested in unifying CAT (computer-adaptive testing) and CBM (curriculum-based measurement). This discovery will help shape our roadmap investments.

[Image: CAT vs. CBM demand]

Of course, validation most significantly influences design and development. Over the lifetime of the work done toward releasing a new feature (called an “epic”), up to 10 validations occur. Work is typically done at the storyboard level first to identify key features. Usability testing then shapes development, identifying teacher preferences for everything from color to layout.

The overall goal of validation is to ensure that—with whatever we ship—we have exceeded the threshold of “minimum viable product” (MVP), and that we are solving an important problem for the teacher.

Here’s a simple example of how validation works. Just below is a screen showing the cover flow for Accelerated Reader (AR) Student Book Discovery, as originally conceived:

[Image: Screen 1]

As the team in Minneapolis worked this epic, their design and validation owners noticed that students weren’t always spotting key elements of the navigation. So, we moved the scroll arrows outside the book covers, lightened the shelf cover, and rendered the Library icon more prominently, as you can see in the screen below:

[Image: Screen 2]

In ways large and small, validation creates a direct pipeline between our users and our products, so that user needs and preferences guide our work.

One of the questions I’m asked most often is how we can still have user complaints and calls in the wake of so much validation. When you actually sit in focus groups and 1:1 usability reviews, the answer to that question becomes clear. People have very different tastes, navigation styles, computer literacy levels, and learning approaches. No amount of testing creates a user experience (UX) that works for everyone. (You know this better than we do, because you look out at each of your classes toward a roomful of unique learners every day!)

Here’s a great example: even the basic idea of book discovery in AR isn’t a hit with every student. While 75 percent very much liked the idea of “Top Books For You,” about 15 percent didn’t want it, and the remainder were indifferent. When, as Renaissance does, you have millions of users, even features that are incredibly valuable for most will be distasteful, unusable, or extraneous for some. The test of “goodness” is not whether everyone likes a new capability, but whether, in aggregate, there is a meaningful advance in the overall value of the offering.

And, that brings us to validation after the software ships. There is a nasty lesson I’ve learned over thirty years, accounting for a reasonable percentage of my grey hair. No version 1 software is perfect. There is a reason Google is famous for hanging “Beta” on everything. There is a reason no one remembers Windows v1.0. There is a reason so-called “early adopters” amount to just two percent of all software buyers. There is a reason 70 percent of Silicon Valley entrepreneurs achieve an exit value of essentially zero for their start-ups. Bootstrapping innovation is incredibly hard, shipping new software is difficult, and even getting a small new feature right can be a challenge.

So, how do we identify and deal with the inevitable mistakes and less-than-perfect code? The answer is even more validation. After shipping, though, new kinds of “checking” become possible. Once users are actually using the product, we can use “clickstream” analysis to see what they are doing and where they are going wrong. At Renaissance, we’ve had to make a huge investment to get into the clickstream game. While folks at Facebook run 1,000 “A/B tests” a day, until recently we had very little real-time data about how users actually interact with Renaissance applications.
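To make the idea concrete, here is a hypothetical sketch (not Renaissance’s actual tooling, and with invented numbers) of the arithmetic behind a basic A/B test: compare the click-through rates of two UI variants and check whether the difference is statistically significant.

```python
import math

def ab_test_z(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: how surprising is the difference in click rates?"""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled rate under the null hypothesis that both variants perform the same.
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

# Hypothetical numbers: variant B shows the scroll arrows outside the book covers.
z = ab_test_z(clicks_a=400, views_a=10_000, clicks_b=480, views_b=10_000)
print(abs(z) > 1.96)  # True: significant at the 5% level
```

The 1.96 threshold corresponds to 95 percent confidence; with large user populations, even small differences between variants become detectable this way.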

Now data is flowing. R&D teams use a tool called Splunk to explore millions of application instrumentation events, creating new insight into how our users interact with our features.

[Image: Book Recommendation Activity]

Above, you see an hour in the clickstream life of AR Book Discovery, showing which “slot” in the shelf users are clicking on.
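At its simplest, this kind of clickstream aggregation is just counting events by attribute. Here is a toy sketch (the event format is invented for illustration, not our real instrumentation schema):

```python
from collections import Counter

# Invented event records: (user_id, action, shelf_slot)
events = [
    ("u1", "book_click", 1),
    ("u2", "book_click", 1),
    ("u3", "book_click", 2),
    ("u1", "book_click", 5),
    ("u4", "book_click", 1),
    ("u2", "page_view", 0),  # non-click events are filtered out
]

clicks_per_slot = Counter(slot for _, action, slot in events if action == "book_click")
print(clicks_per_slot.most_common(3))  # [(1, 3), (2, 1), (5, 1)]
```

Run over millions of events, the same tally reveals which shelf positions students actually use, and which go ignored.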

Can you call yourself customer-centric without knowing what users think of the content you are showing them? Probably not, so we are about to introduce validation by reputation. The power of reputation has revolutionized product experiences. Whether it’s the reputations of drivers on Uber, movies on Netflix, or instructional resources on TeachersPayTeachers, validating goodness via the wisdom of the crowd has enormous power. The screen below is an example of how we’ll capture user ratings to track the reputation of content available in our platform.

[Image: Reputation]

By using clickstream data, post-release surveys, reputation, and more, we can pinpoint problems with newly shipped code and content, driving constant improvement more efficiently.
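One detail worth noting about reputation systems: a raw average over-rewards items with only a handful of ratings. A common remedy (shown here as a generic sketch, not our production formula) is a damped mean that pulls sparse rating sets toward a neutral prior.

```python
def reputation_score(ratings, prior_mean=3.0, prior_weight=10):
    """Damped average on a 1-5 scale: acts as if the item started with
    prior_weight phantom ratings at prior_mean."""
    return (prior_mean * prior_weight + sum(ratings)) / (prior_weight + len(ratings))

print(round(reputation_score([5, 5]), 2))     # 3.33: two 5-star ratings barely move it
print(round(reputation_score([5] * 200), 2))  # 4.9: many 5-star ratings earn trust
```

The effect is that an item with two perfect ratings ranks below one with hundreds of merely good ones, which matches most people’s intuition about trust.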

Success hinges on the quality of the user experience we deliver. In last fall’s customer survey, for the first time, we asked about the quality of our UX. We’re pleased to report the results were pretty good.

[Image: Customer Ratings Scale]

We can and we will get even better. We’re new at validation, but we’re learning fast. Over time, we’ll grow our ability to focus on the right work, discover MVP before we ship, and fix our errors quickly post-release. Validation is destined to be our teachers’—your—best friend.

Mark Angel, Chief Technology Officer

Mark Angel has 30 years of experience creating widely used software. In his career, he has served as CTO for publicly traded KANA and Knova Software, CTO and founder of Papyrus Technology, and head of advanced development teams at Ernst & Young.


11 Comments

  1. Nadine Miller says:

    I liked our old version of AR because students didn’t have to type in the exact title. With the web based AR if the student doesn’t spell part of the title correctly it doesn’t show any book title to test on. With the old disk version of AR, a list of books with titles that started with that word or close to that word would come up. It is really frustrating for teachers and students at times. I know we could have them use the test number, but we don’t always have that handy. Is there any way you can change this with the web based version?

  2. Mark Angel, Chief Technology Officer, says:

    Thank you for your feedback, Nadine. We recently released some changes to improve AR’s title search functionality. Today, students should be able to type a portion of the title they’re looking for and receive related results. In some instances, even a misspelled word will return the desired title, but not necessarily in every case. We’re working on additional enhancements to improve the search function even more. At some point down the road, after we have collected data around the kinds of errors people make in searching for books, we hope to introduce a predictive, type-ahead search that works kind of like Google’s.

    We always welcome customer input. If you’re interested in taking a more active role in validating new features and functionality in our products, you can sign up for our Research Panel at https://www.renaissance.com/Resources/Research-Panel.

  3. Barbara James says:

    I really enjoyed reading your blog post about how Renaissance is constantly utilizing the user’s experiences to improve their products. Not many people think about all the hard work that goes into making continuous improvement, and your post really brought that home with real, tangible examples that I’ve seen in action in your software. In my job as an Accountability Coordinator, I am a data analyst and data consumer. Thank you for working so hard to make your products be the best they can be and for using the data to base that improvement on how your product is used each and every day.

  4. Tanya Bares says:

    I would like to make several suggestions. I like how the new STAR reports add Lexile. Before the update, I was able to see Lexile and Scaled Score (SS). It is much more useful to see both instead of having to switch preferences. The word count report would benefit from having % correct on the report. We are stressing volume of words, and it is very time consuming to cross-reference with the diagnostic. I would also like the word count and diagnostic reports to include number of fiction and non-fiction books that the student has read. Thanks.

  5. Mark Angel, Chief Technology Officer, says:

    Thanks for these suggestions. We are continuing to expand the use of Lexiles in our products and appreciate your suggestion to display both Scaled Scores and Lexiles together. We’ve forwarded this idea to the reading product team. Regarding your suggestion to show the student’s percent correct, we are going to be revising Accelerated Reader’s reports over the next 12 months, and you can expect to see percent correct added.

  6. Peter Cress says:

    Considering the vast database of books covered by AR, I was surprised to find a certain classic title missing. (This is meant as a compliment, of course–if your list was inadequate, a missing title would be no surprise.) I was about twelve when I first read Flatland: A Romance of Many Dimensions by Edwin Abbott Abbott. This 1884 novel is at once both social satire (addressing the class structure of Victorian England) and an exploration of polydimensional geometry. It helps the mathematically inclined student better understand the problems with sexism and other forms of bigotry, and helps the socially inclined student better understand geometric concepts.

    How do we go about getting this classic added to the list?

    • Mark Angel, Chief Technology Officer, says:

      Thanks for reaching out with this question. We tend to get a lot of requests for quizzes from our (prolific) AR readers, so we prioritize based on the popularity of the books and the number of customer requests. If you’d like to request a quiz, you can go to this page on our website: https://www.renaissance.com/customer-center/suggest-quizzes. From there, our quiz writers will take a look.

      • Diane Hubacz says:

        I have had the experience of quiz requests being denied. If a quiz is denied, can we create our own? There are many reluctant readers out there, and I am thrilled when they actually read ANY book (not just those who pass the AR’s criteria). Reading is reading. In third grade, when most children begin their journey with chapter books, every word should count. Thank you!

        • Mark Angel, Chief Technology Officer, says:

          Thanks for the question, Diane. Teachers can already create their own quizzes in AR, called Teacher-Made Quizzes. Help in AR describes how to create and manage these quizzes. Here’s a link to that section: http://help.renaissance.com/AR/TMQuizzes. Also, the Resources in the program itself include advice on how to write these quizzes. (Go to Resources, then Resources to Advance Your Implementation.)

  7. Diane Hubacz says:

    AR is one of the most valuable tools a teacher has to help our readers grow, and I couldn’t live without it in my classroom. I teach third grade, and last year our curriculum changed to focus more on genres of reading, especially in fiction. It would be awesome if AR categorized the fiction books more specifically to include realistic fiction, historical fiction, mysteries, fantasy, fairy tales, etc.

  8. Mark Angel, Chief Technology Officer, says:

    The AR BookFinder website lets teachers, students, parents, and librarians search for books with corresponding AR quizzes according to a variety of criteria, including the genres you mentioned. Because student self-selection is so important for independent reading practice, we recommend suggesting different genres and then letting students choose based on their interests. If your question referred to the ability to see specific genre information in the reports in Accelerated Reader, this is not currently available, but we are considering it for future development. Thanks for the suggestion!