Reporting Dashboard

Optimizely

Creating a centralized reporting dashboard that lets Optimizely customers see high-level trends in their experimentation data, specifically experiment volume and quality.

Problem

A survey of Optimizely experimentation customers in July 2023 found that only 10% of respondents could place a value (financial or otherwise) on their experimentation program. In other words, customers were paying for our experimentation tool but were largely unclear on the value they were getting out of it.

This problem was compounded by the fact that our customer retention numbers for our experimentation product were falling below acceptable levels.

Helping customers understand how many experiments they are running, how many are successful, and how much they are improving on key metrics is essential both to demonstrating the value of experimentation and to building a culture of experimentation.

User Research

Although the need for a reporting dashboard was clear, in order to fully understand the problem and move beyond merely anecdotal evidence, we conducted a series of customer research calls over the course of several months. These calls focused on a few key questions:

  1. What method are you using to generate reporting data now and what are its shortcomings?
  2. What KPIs are most important to see?
  3. What types of personas are using this data?

“It takes me 6-8 hours to create this report every month. I’m stressed right now because I’m creating it this week!”

– Customer Quote

During these calls, our customers walked us through their self-built reporting processes, which often involved complex spreadsheets and multiple rounds of exporting and importing data from different sources.

Some of these DIY dashboards were very powerful but still required quite a bit of effort to get them to work.

“There’s no easy answer to the question ‘How many experiments have I run in the past 30 days?’ and it’s even harder to see month-over-month or year-over-year numbers.”

– Customer Quote

There’s nothing like seeing a customer struggle with a problem to really inspire you to design something better for them. I was looking forward to getting this feature designed and built.

Competitor Research

In addition to user interviews, we also surveyed the competitive landscape to see what others were offering in this space.


Competitors offered a wide variety of dashboard displays, including:

  1. Basic analytics data, such as page visits and overall conversion rate
  2. Estimates of total ROI across all experiments (calculations that are inherently flawed and inaccurate)
  3. Total counts of A/B tests in each status

Overall, what competitors offered was okay, but we thought we could do better.

Key Personas & Needs

Combining our customer research with our competitive research, we determined that our dashboard would need to meet the needs of these key personas:

Program Manager / Experimentation Specialist
The person in charge of the experimentation program, who may also be creating their own experiments.
Needs: How many experiments are being run? Where are they being run? How successful are they? How do those numbers change over time?

Executive Leader
The person approving (or not) the budget for Optimizely.
Needs: Is experimentation worth the price?

Data Analyst
The person digging deep into the data to better understand the cumulative value of experiments.
Needs: How can I filter the data for my own specific needs?

Experimentation Observer
Anyone else who is curious about experimentation.
Needs: What types of experiments are successful? How can I generate better experiment ideas for the company?

Key MVP Features

Having identified our personas and their needs from our user research, we could now home in on the key features to design and build into our first MVP release. We decided we wanted to display some combination of the following (a rough sketch of how the first two metrics might be computed follows the list):

  1. Velocity
    How many experiments are we running?
  2. Quality
    How many experiments are successful?
  3. Benchmarks
    How do we compare with others?
  4. Trends
    Are we getting better at this?
  5. Insights
    What actions should we be taking?
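
To make the Velocity and Quality metrics concrete, here is a minimal sketch of how a dashboard backend might roll raw experiment records up into monthly figures. The `Experiment` record, its outcome labels, and the `monthly_kpis` helper are illustrative assumptions for this write-up, not Optimizely's actual data model.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class Experiment:
    name: str
    concluded_on: date
    outcome: str  # assumed labels: "winner", "loser", or "inconclusive"

def monthly_kpis(experiments):
    """Roll raw experiment records up into per-month velocity and win rate."""
    buckets = defaultdict(list)
    for exp in experiments:
        buckets[exp.concluded_on.strftime("%Y-%m")].append(exp)

    kpis = {}
    for month, exps in sorted(buckets.items()):
        wins = sum(1 for e in exps if e.outcome == "winner")
        kpis[month] = {
            "velocity": len(exps),         # how many experiments are we running?
            "win_rate": wins / len(exps),  # how many are successful?
        }
    return kpis

# Hypothetical history: two months of made-up experiment records.
history = [
    Experiment("New CTA copy", date(2024, 1, 12), "winner"),
    Experiment("Checkout layout", date(2024, 1, 28), "inconclusive"),
    Experiment("Pricing page hero", date(2024, 2, 9), "winner"),
]
for month, stats in monthly_kpis(history).items():
    print(month, stats)
# 2024-01 {'velocity': 2, 'win_rate': 0.5}
# 2024-02 {'velocity': 1, 'win_rate': 1.0}
```

Bucketing by month first also makes the month-over-month trend comparisons customers asked for a simple matter of comparing adjacent entries.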

What about ROI?

One feature we decided not to pursue fully was an ROI calculator that would determine exactly how much revenue was generated by all successful experiments. After talking to subject matter experts, we determined that these calculations are inherently flawed, and presenting them responsibly would be too complex to design and build for our initial launch. Nevertheless, the key MVP features listed above would still do a lot to show the value of our experimentation product and solve many other customer problems.
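
To make that reasoning concrete, here is a deliberately naive version of the projection we decided not to ship. All figures and the formula itself are hypothetical; the comments note where the math breaks down.

```python
# A naive per-experiment ROI projection, shown only to illustrate why we
# chose not to build one. Every number here is made up for this example.

monthly_baseline_revenue = 500_000  # revenue through the tested funnel
observed_lift = 0.03                # 3% lift from the winning variation
months_projected = 12

naive_annual_roi = monthly_baseline_revenue * observed_lift * months_projected
print(f"Projected annual value: ${naive_annual_roi:,.0f}")  # $180,000

# Why this is misleading:
# - It assumes the 3% lift persists unchanged for a full year, ignoring
#   novelty effects and seasonality that usually erode it.
# - Lifts from multiple experiments on the same funnel are not additive.
# - The observed lift is a point estimate; the true effect sits somewhere
#   inside a confidence interval that this single number silently discards.
```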

Design Iterations

Now that we had a rough idea of what we wanted to show on the dashboard to address our personas' needs, it was time to start creating mockups and gathering feedback from our team, subject matter experts, internal stakeholders, and customers.

Below are some early ideas and explorations:



MVP Designs

Over the course of several months, we gathered feedback on the design iterations, both internally and from customers. This process included frequent check-ins with developers so they could weigh in on the difficulty of implementing each aspect of the dashboard. Combining all of this feedback, we arrived at a final design for two dashboard screens, with features that balanced user needs, ease of implementation, and strategic business goals.

See final designs below:

Development

Throughout the development process we had regular check-ins to refine the design and remove roadblocks for the dev team. In some cases, designs were adjusted or features were removed when developers flagged them as too costly to build. Communication with developers is critical during this phase: it ensures we are balancing the needs of the business, the needs of the user, and the development costs, and finding solutions that work for all three.

Early Results

As of this writing, this project is in early beta release, so only a handful of customers are actively using it. But we've received great feedback so far, and we are tracking key usage and adoption metrics via a custom dashboard in Gainsight PX.


Our long-term goal for this project was to increase customer retention by giving customers more insight into their experimentation programs. We can, of course, track customer retention in the aggregate, but it is difficult to attribute a rise or fall in retention directly to any single feature. Instead, we work from the assumption that if we have addressed the major reporting gaps that contributed to our drop in retention, we can feel reasonably confident we have made real progress on the problem.

We can more easily track the usage of our new dashboard feature (quantitatively) and also track user feedback about it (qualitatively) in order to assess whether the feature is providing the value we had hoped for. We’re looking forward to seeing more results come in and iterating on user feedback.

Thanks for reading!