This is the first post in a three-part series. Subscribe to The ForeSee Blog to receive immediate notification of parts two and three.
If you want to improve something, you have to observe how people use it first. This is true for improving products, experiences, and even websites. In the world of web analytics, though, observation has historically taken a back seat to behavioral (clickstream) and attitudinal (survey) data, largely because the traditional options for observation, such as focus groups, are costly, slow, and of questionable representativeness.
Not only are focus groups and usability labs fairly costly, but participants are aware they're being watched, which introduces observation bias that can make results questionable. Also, the extremely limited number of subjects in traditional focus groups leads to, at best, directional results. Enter the advent of true observation in the web world: web session recording, or replays, where you can watch the actual mouse clicks, scrolling, and form entry of someone using your website from anywhere in the world. Sample sizes increase, observation bias disappears…it's a perfect solution.
Except how do you tell which of the session recordings or replays will give you the good stuff? Which will show you the users that can teach you something? Which will show you the experiences that are representative of hundreds or thousands or millions of others?
The way we've solved this problem at ForeSee is to filter by satisfaction. In other words, we replay only the sessions of users who have completed a satisfaction survey. First, the ForeSee technology tells you which areas of your website should be your biggest priorities for improvement. You may think it's merchandise, but a good, scientific, predictive methodology may show you that fixing navigation will give you a bigger bang for your buck.
Want to find out where people are getting hung up on your navigation? Filter by users who gave your navigation low ratings, or even by those who specifically mentioned your navigation in a comment. You can even compare first-time visitors who gave your navigation low scores against repeat visitors, or shoe shoppers against apparel shoppers. Then watch the individual sessions of a handful of people to find out where they're getting stuck. It's sort of like an on-demand focus group.
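To make the filtering idea concrete, here's a minimal sketch in Python. The session records, field names, and score threshold are all invented for illustration; this is not ForeSee's actual data model, just the general shape of "join survey answers to recordings, then queue up the replays worth watching."

```python
# Hypothetical session records, each already paired with that user's
# survey response. All field names and values are made up for this sketch.
sessions = [
    {"session_id": "s1", "nav_score": 3, "first_visit": True,
     "comment": "Couldn't find the shoe section in the navigation."},
    {"session_id": "s2", "nav_score": 9, "first_visit": False, "comment": ""},
    {"session_id": "s3", "nav_score": 4, "first_visit": True, "comment": ""},
    {"session_id": "s4", "nav_score": 8, "first_visit": True,
     "comment": "Navigation menu kept collapsing on mobile."},
]

def replay_queue(sessions, max_score=5):
    """Return sessions from first-time visitors who rated navigation
    poorly, or from anyone who mentioned navigation in a comment."""
    return [
        s["session_id"] for s in sessions
        if (s["first_visit"] and s["nav_score"] <= max_score)
        or "navigation" in s["comment"].lower()
    ]

print(replay_queue(sessions))  # ['s1', 's3', 's4']
```

Instead of scrubbing through every recording, you end up with a short, targeted watch list: the low-scoring first-timers plus anyone whose own words flagged the problem area.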
This kind of insight is important because when you improve your visitors' satisfaction, they become more likely to return to the site, recommend it to others, purchase, subscribe, or complete whatever other conversion event you're hoping to elicit.
In other words, observation is a critical part of the web analytics ecosystem. You need all of these kinds of analytics in play to make good decisions about your website: clickstream (behavioral) data to tell you what already happened, satisfaction analytics (attitudinal) to help you predict what will happen next, and observational data to SHOW you what happened and provide depth and dimension to your other analytics.
Next up in this series, we’ll talk about how to use the observational data and your conclusions for increased efficiency.
We’d love to hear comments about how you’ve used observation in the web analytics ecosystem and what methods you prefer, whether it’s session replays, focus groups, usability labs, etc.