If you are an avid reader of our blog and especially our design posts, I hope you’re asking this question:
If DataHero is a data analytics company, why are their design posts so light on metrics?
This is something I’d like to highlight today. In this post, I’m going to walk through an example of a hypothesis we developed thanks to user feedback, and then tested further using quantitative methods.
As we’ve highlighted in our prior posts, our design process starts purely qualitatively: lots of iterations, lots of sketches, and lots of mockups. Once we have narrowed down a few of our favorites, we user test them extensively. We start by sitting beside users and asking questions about how they’d use our mockups and sketches. Only once we feel like we have a good design do we start to bring the prototype into code. Even then, it typically goes through a few more iterations. At any given time, we are usually running at least one experiment with a subset of DataHero users on our live production site.
The Steve Krug methodology of in-person user testing is extremely important (he’s right: you can learn a lot about what most people will do by observing only 5-10 of them), but it is just as important to measure your hypothesis once you release your feature to more users.
So what about the metrics I promised at the start of the post? How do we use quantitative methodologies to test our designs at DataHero? Let me give a concrete example, using DataHero as both the working example and, of course, the product I used to visualize and analyze my quantitative data.
Once you upload a dataset in DataHero, you have a few things you can do, but the primary goal we want the user to complete is to create a chart. You can do this in one of two ways: a) create your chart from scratch, or b) use one of our suggested charts.
Initially, we were getting feedback from users via email that sometimes the chart they wanted wasn’t suggested. In qualitative observations of our users, we noticed they would almost always pick a suggested chart. Worse, they would ask questions that indicated they didn’t realize they could create their own chart: they were completely overlooking the new chart button. We changed its location, color, etc., but kept receiving the same feedback, and a significant share of charts continued to come from suggestions rather than being created from scratch (about 52% of all charts accessed or created were our suggested charts). Here is how the page looked before the redesign:
To make it more apparent you could create your own chart, we created a new chart thumbnail that we displayed inline with the suggested charts. However, we broke a bunch of user experience rules and made the button move based on your state. That’s right, the primary action button moves based on your previous actions and the number of charts you have created.
If you had not yet created or accessed a chart for a dataset, we placed the new chart thumbnail in the first position. The idea was to make it obvious, first, that you could create your own chart; then, as your eye moved from left to right, you could quickly scan the suggested charts and use one of those if your question was already answered.
Once you had created or accessed a suggested chart, we moved the new chart thumbnail to the last position in the list. Again, we were concerned that the primary action for the page was always in a different place. More importantly, the more charts you created, the further down the list the new chart button moved, eventually dropping below the fold.
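The placement rule described above can be sketched as a small function. This is an illustrative reconstruction, not DataHero’s actual code; the names and data shapes are hypothetical:

```python
NEW_CHART = "new chart"  # stand-in for the new chart thumbnail

def order_thumbnails(suggested_charts, has_used_a_chart):
    """Order the thumbnails shown on a dataset page.

    The 'new chart' thumbnail leads the list until the user has
    created or accessed a chart for this dataset; after that it
    drops to the end, after every suggested chart.
    """
    if has_used_a_chart:
        return suggested_charts + [NEW_CHART]
    return [NEW_CHART] + suggested_charts
```

Note the usability cost this encodes: the position of the primary action depends both on past behavior and on the length of `suggested_charts`, which is exactly why a long list can push the button below the fold.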
For our qualitative testing, we tested the moving primary action button, even though it broke some major usability rules. Our initial testers liked the solution: they immediately understood they could create their own chart and would, in testing, click the large new chart thumbnail. However, we still had doubts. After all, the button changed position, and users who wanted to quickly navigate between existing charts and creating a new chart had to wait for the button to appear after all the existing charts loaded. Could this really be a better solution? This was an ideal case to test quantitatively.
To test, we left both new chart buttons on the page and used Mixpanel to track the number of clicks for each: the new chart thumbnail that moved based on your behavior, and the new chart button that persisted in the menu bar. What we found backed up the hypothesis from our qualitative testing: users preferred the new chart thumbnail button.
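Once events like these are flowing into an analytics tool, comparing the two buttons reduces to tallying clicks by variant. A minimal sketch of that analysis step, assuming each tracked event carries a `button` property naming which button was clicked (the event and property names here are invented for illustration, not DataHero’s actual Mixpanel schema):

```python
from collections import Counter

def click_share_by_button(events):
    """Compute each button's share of 'New Chart Clicked' events.

    `events` is a list of dicts like
    {"event": "New Chart Clicked", "button": "thumbnail"},
    a simplified stand-in for analytics event records.
    """
    counts = Counter(
        e["button"] for e in events if e["event"] == "New Chart Clicked"
    )
    total = sum(counts.values())
    return {button: n / total for button, n in counts.items()}
```

The same comparison can be done directly in most analytics dashboards by segmenting one click event on a button-identifier property, rather than exporting raw events.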
First, the new chart thumbnail is 92% more likely to be used than our static button. As you can see in the results below, the vast majority of new chart clicks land on the moving thumbnail rather than on the static menu-bar button.
But what about our advanced users? Does this still hold as users become more comfortable with DataHero, or will they switch to the static button because they don’t want to “search” for the new chart thumbnail on each page? Looking at our most active and paying users, you can see that they, too, predominantly prefer the new chart thumbnail.
Effectively, we debunked our own preferences. In the current layout, users prefer the thumbnail button for creating a new chart. For now, we have left the static button in place, but don’t be surprised if it disappears in the near future, driven by the same user testing and data-driven results.
Overall, design is about combining qualitative and quantitative methods. In this case our metric-driven process led to the same conclusion as our in-person testing, but that is not always true. Testing with five users is great for vetting a solution, right up until the one user who disagreed turns out to represent 20% or more of your audience. Only by pairing qualitative and quantitative analysis can you start to quantify your design.