For the first time in a while, I’m designing and running my own usability studies. Carnegie Mellon cross-trains its HCI students in both research and design, so it’s a bit like riding a bike – it comes back eventually.
Post-university, I ran studies and focus groups as a producer in the video game industry, because we rarely employed dedicated UX folks – which didn’t diminish the importance of getting feedback on work in progress. At Microsoft, however, user experience is split into two specialties – Design and Research – so I usually had the luxury of partnering with a dedicated researcher whenever I wanted to observe people using our products.
Amazon also splits the discipline, but the ratio of designers to researchers is far more extreme. With so much data to draw from, many observations are made directly from usage patterns. My project team is (relatively) small, with no dedicated researcher, and as a v1 product we have no existing data to analyze.
Thus it’s time to step up to the plate again. The study I designed included an interview portion, an interactive portion, and a post-test survey – and I couldn’t help but question the inclusion of that survey. I’ve included one in most prior tests, among them an early Disney Friends playtest with 8-year-olds, which amazingly yielded critical insights. (It also revealed that some kids felt their favorite Disney character was Shrek. Open-ended questions FTW.)
And yet in a way it seems silly – why not just ask the participant these questions, when they’re sitting right in front of you?
As I watch my users go through this process, I am reminded of the great utility of post-test surveys, strange as it may seem to watch someone fill out a sheet of paper while sitting right in front of you.
Why use post-test surveys?
Post-test surveys are particularly well suited to getting at specific types of feedback:
More honest feedback
None of us can deny the bias that creeps in when a moderator, perceived to be involved with the project, asks how you felt about a product. Humans often aim to please, and what we’re told to our faces may not match reality. Surveys partially mitigate this bias: I have consistently found that participants become more open about an experience’s shortcomings when confiding them to a piece of paper rather than to a human being.
Pseudo-quantitative feedback
We often want a numerical value attached to satisfaction so we can compare across users. But asking participants to rate a series of experience dimensions aloud is awkward, and it’s easy for everyone to lose track. Many participants will also want to go back and change earlier answers once they get the “hang” of whatever rating system you’ve chosen (1-5, agree/disagree, etc.). Putting these questions in survey form gives more consistent, comparable results across participants.
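To make that concrete: once ratings are on paper, tabulating them takes minutes. Here’s a minimal sketch in Python – the file name and column names are hypothetical, assuming one row per participant and one 1-5 rating per dimension:

```python
import csv
from statistics import mean, median

# Hypothetical file: one row per participant, one column per rated
# dimension (e.g., "ease_of_use", "usefulness"), scores on a 1-5 scale.
with open("post_test_ratings.csv", newline="") as f:
    rows = list(csv.DictReader(f))

dimensions = [col for col in rows[0] if col != "participant_id"]

for dim in dimensions:
    scores = [int(row[dim]) for row in rows]
    print(f"{dim}: mean={mean(scores):.1f}, median={median(scores)}, n={len(scores)}")
```

Even this level of rigor is usually enough to spot a dimension that consistently underperforms across participants.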
Feedback from bashful participants
Yesterday, I was impressed by the thoughtful detail in a post-test survey from a participant who was particularly quiet, despite her stated enthusiasm for the product. The survey gave her an opportunity to express herself in a way that felt more natural. And as mentioned before, she opened up on both extremes – negatives and positives.
Post-test survey tips and tricks
So you want to add a quick survey to your study? A few tips and cautions when designing your post-test survey:
Lead with the negative
I remember this advice very clearly – given by my late professor Randy Pausch in his Building Virtual Worlds class. That class was full of surveys, both after user-testing our worlds and in peer evaluations of our fellow students. Randy cautioned us to always lead by asking for the negatives – otherwise, recalling the positives first will dim the memory of the negatives. And if we’re being honest, negative feedback is usually the more valuable of the two. By leading with the negative, you send a subtle signal that it’s OK to be critical.
As a result, I always start each post-test survey with 2 questions, in this order:
1. What were the two worst things about the experience?
2. What were the two best things about the experience?
Be very clear about your quantitative scale, and check for errors before ending the session
It happens – no matter how clearly you think you’ve laid out your 1-5 scale, someone is going to flip it accidentally. While I’d recommend against heavily scrutinizing survey results in front of a participant, choose a couple of barometer questions to peek at. If the answers don’t align with the reaction you observed during the test, inquire.
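If your survey happens to pair a positively worded item with a negatively worded one on the same topic, you can make that peek a bit more systematic: a participant reading the scale consistently should land on opposite ends for the pair. A small sketch – the item names and thresholds here are hypothetical, not a standard formula:

```python
def possibly_flipped(liked: int, frustrating: int, scale_max: int = 5) -> bool:
    """Flag a response where two opposite-worded items both land on the
    same extreme of a 1..scale_max scale - a hint the participant may
    have read the scale backwards."""
    def high(x): return x >= scale_max - 1  # e.g., 4-5 on a 1-5 scale
    def low(x): return x <= 2               # e.g., 1-2 on a 1-5 scale
    return (high(liked) and high(frustrating)) or (low(liked) and low(frustrating))

# A participant who raved about the product in session but scored
# liked_overall=1 and found_frustrating=1 is worth a gentle follow-up.
print(possibly_flipped(liked=1, frustrating=1))  # True -> inquire
```

Treat a flag as a prompt to ask, not a license to correct the data yourself – a genuinely mixed experience can also score high on both items.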
Keep open answer questions to a minimum
Beyond the 2 questions above, I try to limit myself to only a few open questions. In many cases, open questions are better asked in an interview format, which also lets you drill down into interesting answers and gauge your participant’s unspoken emotional response. It also generally means you’ll have a video clip of the answer, which speaks louder than printed words when convincing key stakeholders.
For open answers, provide lines that hint at feedback length
Everyone’s handwriting differs, and a large expanse of blank space can overwhelm participants – especially those with small handwriting, for whom an empty half-page implies an essay. A single line signals a sentence or two at most, and may actually free your participants to respond.
Gauge purchase intent if it’s a retail product
It never hurts to ask whether a participant would not just use a product, but purchase it for themselves – the two answers often differ significantly. Similarly, asking what price a participant would pay for an experience can be particularly eye-opening.
If it’s not a direct-purchase product, gauge how likely they’d be to voice their preference to whoever is in charge of purchasing. This also helps in case your screener captured folks who genuinely aren’t in your target market.
Don’t repeat your questions
Did you already collect this information in your screener? You can often get most of the demographic info you need there – just make sure to keep those responses so you can cross-reference them.
Leave the participant some space to complete the survey
It’s understandably awkward to sit and watch a participant complete a survey – they will likely feel rushed and may change their answers. Even when it’s not strictly necessary, I like to leave the room for a bit; if that’s not possible, I busy myself with other tasks, like resetting equipment, while keeping an ear out for the inevitable pencils-down moment.
Administer the survey as quickly as possible, ideally BEFORE any follow-up interviews
You might inadvertently bias participants by drilling down too deeply before the survey. If your test is very long (2+ hours), consider breaking the survey into several smaller chunks so you capture feedback closer to the moment of experience. Don’t wait until they leave the lab.
Post-test surveys are a very different beast from their cousin, the SurveyMonkey-style asynchronous questionnaire (often delivered to a much larger audience). They require a lighter touch and a bit of mood-setting to create a canvas for constructive feedback that doesn’t have to be delivered directly to your face. They’re cheap, fast, and a very valuable tool in your tool belt.