Posted 09/27/11 at 05:21 PM | #1
Sometimes we have an eager group of people who want to test their website, but there is an executive or manager who, in the end, won't agree to any changes no matter how reasonable or fact-based they are. They might disagree publicly during the meeting, or, worse, smile during the meeting and then deep-six all the recommendations afterward. I'd love any ideas about how to anticipate and overcome this problem - it's frustrating for everyone involved and wastes a lot of time.
Posted 09/29/11 at 06:29 PM | #2
Clever quips come to mind, but I'll avoid trivializing your situation.
Jonathan, I wonder if, for whatever reason, you just didn't have adequate buy-in from that manager to begin with. Did they understand the process? Did they observe the participants during the tests? (I assume so, because First Fridays doesn't let anyone into the debriefing if they didn't observe at least one session, right?)
Or, going back further, did this manager have a chance to talk with other managers whose sites were the subject of earlier tests? In other words, could this manager ask them about their expectations going into the tests, what they learned from the results, and the reasons they made the changes they did? (Sometimes seeing what the other kids have done can make an impression.)
Sometimes, no matter how much you prepare, the manager just won't get it. I tried to find a great animation that shows a usability tester explaining the process while, at every turn, the manager keeps asking about "these focus groups": How will you find them? What will you ask them? How will you analyze their comments? You know that testing session won't have the impact it should.
The answer might be to run more tests, with different participants. Seeing essentially the same result again and again can have an impact. (Soft as it is, a steady stream of water can eventually wear stone away.)
Or to get into the manager's head: What personas do they perceive as the people using the website? What tasks do they believe those personas would have to do? How do they expect those personas to interact with the design? Then recruit three participants who clearly represent each of those personas and run the usability tests with them.
And, at some point, you have to realize that you can't make your customer (the manager) serve their customers (the people using the website) well.
Or as my grandfather used to tell me, "Never try to teach a pig to sing, because you'll accomplish only two things.
"You'll frustrate yourself, and you'll annoy the pig."
Posted 10/09/11 at 04:31 PM | #3
I've had this kind of experience too. These days I try to do as many of the following as I can, and I mostly manage to avoid the problem, but the kind of person you describe still crops up from time to time. I find these scenarios arise mainly when you haven't got to know the important stakeholders well enough before you begin...
- Talk to the key people about how they feel about their website or app or whatever. What do they think is great? What concerns them? What would they like to see happen or be developed in the future? You'll have your own ideas about what needs to be addressed before you begin testing, but if you're not sensitive to their positions, you'll probably have backed them into a corner by the time you share your findings, and they won't take kindly to what you say. Get key stakeholders to help you set the tasks.
- If recruiting the target audience is a problem, work with stakeholders to develop a persona or two, then ask the participants to play the role of the persona. I know Steve says recruit loosely and grade on a curve, but sometimes that isn't enough for the cynical, change-averse boss. It might not be perfect, but what you can do is agree on who Bob the persona is, what the company wants people like Bob to do, and what they expect Bob wants to do. You'll only have people pretending to be Bob in the testing sessions, but if five people pretending to be Bob, independently of each other, say and do the same kinds of things, that's a slightly stronger case than results from tests with any old participant. Don't give the boss the chance to think, "That's fine, but our customers aren't that stupid..."
- Video the whole experience. Reports and stats are fine, but what really gives immediacy to your findings is a 30-second clip of someone having big problems or vocalising what is frustrating them. I've found that these carry more weight than the amalgamated findings of several participants.
- Back up the anecdotal findings with stats from the website analytics: we saw 4 out of 5 participants do this, and our analytics indicate that x thousand visitors are doing something similar every month. Analytics gives you the what, and user testing gives you the why; together they're greater than the sum of their parts. I saw Lou Rosenfeld present last week, where he likened it to blind men feeling their way around an elephant...
- And put a pound sign on it. Again, analytics is good for helping with this. Even if you have to make some fairly wild stabs in the dark as you do your sums, as long as you explain each step and the assumptions you've made, it should be fine. In my experience, when you make estimates like this you tend to get: "You're estimating the number of... why?" "Because we don't know, so this is our best guess based on..." "We don't know x?! We need to know this!!" In the end, it's the potential loss or gain that really catches the eye. No matter what some people say, they don't give a toss about the user experience. The challenge is to tie it to the bottom line.
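The sums in that last bullet can be sketched in a few lines of Python. Every figure below is a hypothetical placeholder, not a number from this thread; the point is that each assumption is a named variable you can defend (or revise) when the boss challenges it:

```python
# Back-of-envelope monthly loss estimate from usability findings plus analytics.
# All numbers are hypothetical placeholders -- substitute your own analytics
# figures and state each assumption explicitly when you present the result.

monthly_visitors = 50_000   # assumption: visitors reaching the problem page (from analytics)
assumed_real_rate = 0.25    # assumption: share of real users who hit the problem
                            # (discounted well below the 4-of-5 seen in testing)
abandon_rate = 0.10         # assumption: fraction of affected visitors who give up entirely
avg_order_value = 40.0      # assumption: average order value in pounds (from sales data)

affected_visitors = monthly_visitors * assumed_real_rate
estimated_monthly_loss = affected_visitors * abandon_rate * avg_order_value

print(f"Estimated monthly loss: £{estimated_monthly_loss:,.0f}")
```

With these made-up inputs the estimate comes out to £50,000 a month, and when someone disputes it, the argument shifts to which assumption is wrong, which is exactly the conversation you want.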