Thanks for stopping by. Please get comfortable. We’re going to be taking a few notes while you listen, but pay that no mind. Now, what we’d like you to do is listen to the podcast. Oh. And don’t worry about that big mirror over there. There may be 2 or 3 or 10 people watching. Wow. We’re terrible moderators when it comes to this sort of thing. That’s why Els Aerts from AGConsult joined us to discuss user research: what it is, where it should fit in an organization’s toolkit, and some tips for doing it well.
[music]
00:04 Announcer: Welcome to the Digital Analytics Power Hour. Tim, Michael, Moe and the occasional guest discussing digital analytics issues of the day. Find them on Facebook, at facebook.com/analyticshour and their website analyticshour.io. And now, the Digital Analytics Power Hour.
[music]
00:27 Tim Wilson: Hi everyone. Welcome to the Digital Analytics Power Hour. This is episode 88, and as has happened before, I’m not Michael. Unfortunately, the illness bug bit him just before the recording of one of our tri-continental shows. So we’re gonna soldier on without him, even if the intro winds up being a little shaky and no one is around to keep me in check throughout the episode. But Moe’s gonna do her best. I’m Tim Wilson from Search Discovery, and I’m joined, as always, by Moe Kiss from The Iconic. Welcome, Moe.
01:00 Moe Kiss: I will 100% keep you in check. I feel like that’s my podcastly duty, right?
01:06 TW: It should be, we’ll see how you do. No pressure. So onto the show. In today’s day and age, everyone tells us to be customer-centric and listen to our customers. Or do they? Today, we wanted to put the qual, as in qualitative, back into our work. We wanted to understand why our customers say what they say, or maybe even why they do what they do. And, frankly, we have no idea how to do that. So, we’ve invited in an expert. Even if she cringes when we say that. Els Aerts is the expert in user research, the expert. She’s been working in the field for nearly 20 years, and is the co-founder of AGConsult in Belgium. Welcome, Els.
01:52 Els Aerts: Hi, Tim. Very nice to be here. Stop calling me the expert, dude. But, yeah, that’s fine. I have been doing this for a long time, so some expertise is expected, and so, yeah. [chuckle]
02:03 MK: I’ve met you, and I feel like you know your shit. So I’m… Yeah, I’m happy to stand by that claim.
02:10 EA: Cool.
02:10 TW: So maybe to kick off, as an expert, if maybe you could… No. [laughter]
02:17 EA: Ask me anything.
02:19 TW: It was interesting, as we were kicking this around, there are a lot of terms around user research, usability, consumer research, user testing. So, can we maybe start with… Are those all different sides of the same coin? Are they all very specific disciplines? Is one an umbrella? Can you give the landscape of user research or whatever the broadest terminology is for that, and what falls within it?
02:47 EA: Yeah. Well, I would say user research is… I would consider that the umbrella term for, indeed, things like user testing, or as it is preferably known these days, usability testing, but also other user research tools, for example, heat maps or surveys. I would call all of that user research. Now, there’s also something called consumer research, or, I guess I would usually call it market research, which I would consider to be something completely different. Market research is basically where you wanna find out if there is a market for your product, or you wanna find out what the right price setting is, etcetera. That’s basically… It’s quantitative research, and it’s really more about your product than it is about the user. User research is about the user. Both qual and quant. But for me that would be the big difference, if I had to describe it.
03:41 TW: Okay. So we’re focusing on the experiences and perceptions and how they do things, as opposed to the market research being… Could be primary or secondary, and it’s more, “Yeah, can we sell this stuff?” Or, “What are they looking for as a product or service?” As opposed to, “What are they looking for from an experience on a website or in a mobile app or something else?”
04:07 EA: Yep. More or less like that.
04:09 MK: I’m gonna ask something completely ridiculous, but I just gotta ask. Is there a certain number of people, users, whether they be at a user level or a customer level or a consumer level, that makes this type of work viable? I’m just thinking from the context of, if you go out into the street and talk to three people, does that qualify? What’s best practice, from your perspective?
04:38 EA: Well, there used to be this article, and there’s this graph that is still widely shared, that you only need five to six users to find about… I forget, is it 80% or 90% of usability issues? This is, of course, sadly not true. [chuckle] You need a lot more people. But it is true that it is a very good idea to start testing with a very small number of users, because very often, you just see the same issues crop up time and again, and it is a smarter thing to test with five or six people. I would always say per target audience, five or six people, and then see, “Okay, what are the issues happening here?” And before you do another round of testing, first of all, fix those issues. Because otherwise, man, user testing can get boring really quickly, because you just see people making the same mistakes again, and again, and again. And that’s just… Yeah, that’s no fun for a client, ’cause they’re paying good money to do this. And it’s also not much fun as a researcher, because you feel like, “Okay, we’ve established this, let’s fix this, and then see what else needs fixing.”
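For reference, the graph Els is describing traces back to Jakob Nielsen and Tom Landauer’s problem-discovery model, where the share of usability problems found by n testers is 1 - (1 - L)^n. Their oft-cited detection rate of L ≈ 0.31 is an average across their studies, not a universal constant, so treat the numbers below as a sketch of the model rather than a guarantee. A minimal Python version:

def share_of_problems_found(n_users, detection_prob=0.31):
    # Nielsen-Landauer model: each user independently surfaces a given
    # problem with probability detection_prob (0.31 is their published
    # average; it varies a lot by product and test design).
    return 1 - (1 - detection_prob) ** n_users

for n in (1, 2, 3, 5, 10):
    print(f"{n:>2} users: {share_of_problems_found(n):.0%} of problems found")

# 5 users comes out around 84%, which is where the widely shared
# "80% or 90%" figure comes from. The model assumes one homogeneous
# audience, which is why Els recommends five or six users per target
# audience, with fixes between rounds.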
05:44 MK: You just mentioned there, target audience. Can you elaborate a little bit on… In this context, what does that mean, and how do you define it? How do you break up the different audience segments and what not?
05:56 EA: Well, for example, we did a project for the Department of Education. And they had a website that targeted teachers on one hand, but also admin people, people in the admin office. Those two groups’ needs for the website were completely different. So it’s a good idea to have one day of testing, five or six people, with teachers, and one day of testing, five or six people, with the admin side. And I call these target audiences. And this is not a matter of demographics; this is really a matter of these people coming to this website with different tasks in mind. They’re really coming to do different things, and it would make no sense to let a teacher perform a task on the website that basically only an admin person is really gonna wanna do, and vice versa. There’s no point in letting the admin people look up things that only teachers will want to do on this website.
06:52 MK: That makes sense.
06:53 TW: So before we actually get into the what and how of user research, what is your… You may have a distorted view of this based on, it is your field. I feel like there are tons of companies that are not really doing user research at all. They may have web analytics, they may be doing some A/B testing. I’ve certainly run into companies where user research is very rarely done; occasionally it’s, “Oh, we’re doing a full website redesign. We’re gonna bring in an outside firm.” What’s your sense of the industry: of companies that are operating at scale, do they have it fully baked into their process, do they have it at least to some extent, or do they really not do it at all? Is it that most companies aren’t doing it, and are missing a big opportunity?
07:48 EA: Sadly, yes, that is the case. Because I feel like with all the tools that have come up in… Well, as you said in the beginning, I’ve been doing this for a very long time. I’ve been doing this since before we even had Google Analytics. I saw Webtrends come up, and ClickTracks, and all of this back-then crazy, fun stuff, but right now we can measure and track anything. And there’s this emphasis on the quantitative aspect, which is very important. And my God, I’m super happy that we have all these tools, because it teaches us a lot. But somehow, indeed, qualitative has been forgotten, or indeed, is questioned… The validity of qualitative research is questioned a lot more than the validity of quantitative research. When, really, if you have your Google Analytics set up wrong, you might be looking at distorted data. You might be looking at false data. The same is true for qualitative research, if you do it wrong. And the thing is, doing it wrong in qualitative is a huge risk, ’cause you really need to have a certain expertise in moderating a user test to make sure that, indeed, you get good results, that you get valid output there. With regards to whether big companies have it baked into their process? Sadly, no. [chuckle] I very often still see terrible things happening.
09:17 EA: There’s actually a story that I found very interesting, ’cause I was at a Microsoft conference quite recently, and one of these guys is talking about the launch of… Oh my god, was it Windows Hello? Where there was facial recognition? Anyway, he was going on about, “Oh, and we shipped it out on 1,000 or 100,000 laptops, and then we got back the data. We dug into the data, and we noticed that only 10% of users were actually doing this, setting it up with the camera recognition. Then we found out that, ah, one in 10 times, there was a camera issue.” Really? If you had done two days of user testing, you would’ve totally caught that. You would’ve caught that nobody… There was no prompt to do this. This would’ve come up in qualitative user testing. If one in 10 times the camera didn’t work, I’m sure that would’ve come up. So two days of testing could’ve saved you a lot of trouble. Usually it’s like, “Oh, we don’t have the time for that. We don’t have the budget for that. Let’s just ship this shit, and then we’ll see afterwards.” They’re like, “Yeah, I’m all for shipping, but… ”
10:34 MK: So do you think that’s the main issue? This is something I really do wanna dig into and understand. Why do you think businesses struggle to see the value of user research? Is it the time, the money? What is that resistance?
10:51 EA: Well, I think there’s a resistance, as I said, because it’s qualitative, and it doesn’t have the big numbers, which isn’t as sexy. Another thing is that user testing is basically unpredictable. I cannot predict what the outcome will be. So, “After the user test, how long until we’ll be able to ship?” Well, that depends. That depends on whether your first prototype, for example, is doing really well, and you only need to fix minor things. “How much time will we need to plan in for the development team?” Well, I don’t know, dude. If all works well, then happy days. And if it doesn’t, maybe you’ll have to do some very big work. And it’s this unpredictability, I think, that also holds people back. Also, “But we have to ship in a month. So if we’re user testing, we wouldn’t have time to fix anything anyway, so we’ll not do it.”
11:44 TW: Let’s put our head in the sand.
11:46 EA: Yeah, it’s true. This has happened. People were going like, “Oh, we were gonna do this. Oh no, actually, we’re gonna cancel it. It’s too late in the game.” Then a month later, they’ve shipped it, it’s live, numbers are not looking good, and all of a sudden, guess what? There is time to do user testing. There is budget to do user testing. Isn’t that magical? Yeah. Welcome to my life, people. Yep.
[laughter]
12:11 TW: So is there also the classic… And this goes… I know you’re kind of a Jakob Nielsen fan, I think, or at least of some of his stuff, from way back when, when he was making the point that we see ourselves as users: “Hey, I’m a user, so why do I need to do user testing? Because I’ve gone through this and it makes sense.” We had a little incident before we started recording, where Moe and I totally knew some terminology and you’re like, “What are you talking about?” You actually knew what we were talking about, and we, very quickly, were trying to blame you. Not blame you, but it seems like that happens with web experiences as well. “I’m a user. Hey, I showed it to my spouse or my kid, so why do I need to do user testing? Because I’m a human being, and you’re just gonna test other human beings.” I feel like that comes in as well.
13:02 MK: I think we think that we have empathy for our users, and that we can easily jump into their perspective. But I, actually, I think about this all the time, because there is this thing… And I’m gonna be totally honest, it’s killing me right now. We have multiple users that stumble on this step. We’ve always known it’s a problem, but very recently we discovered a way that it’s become really, really… We’ve been getting user feedback about it, and it’s been becoming super obvious to us. We knew it was an issue, but how do you have that customer empathy of being like, “I don’t understand the checkout the way you do. I’m not in it every single day. I don’t know your platform, your tool, whatever your business is.” We think we’re doing a good job at it. And I just think, no matter how well you think you understand your customers, nothing compares to hearing their own perspective.
14:01 TW: You’re kind of enlightened on that, right? That doesn’t change. There’s a [14:06] ____ level of that. My sense is that many, many, many marketers and designers haven’t reached that. Or, I guess, Els, when somebody who is a skeptic about user testing finally, for whatever reason, gets dragged kicking and screaming into actually doing it, do you get immediate converts? Or do you have people where you’ve gone and done some user testing for them, you’ve shown the results, and a week later, they’re still saying, “This doesn’t have value. We could’ve done this ourselves”? It seems like the sort of thing that once people actually see it and get value from it, they’ll be converts and evangelists forever and ever. Or does it not work that way?
14:46 EA: Well, it works that way, and I think the operative word here is that they need to see it. And that is why we always encourage our clients… We actually beg our clients: “Please, please, please, come and sit in on the user testing with us, follow it live. If you can’t free up your schedule for a whole day, just come and see one or two test participants.” Because very often we’ll start a day of user testing, and there are people in the observation room who are, indeed, like you say, a bit skeptical, or they think, “We’ve got this sorted. I know how this works, it works fine.” And they see the first users struggle.
15:26 EA: Sometimes they’ll say things like, “Wow, this person is just not very intelligent, are they? Where on Earth did you get this stupid person?” And I’m like, “Excuse me. One, this is not very nice, and two, just sit back, relax, let’s have person number two, who will be from a completely different demographic background.” The first person could be a 20-year-old guy, the second person could be a 50-year-old lady, if you have a broad audience. And what we see is, if something is an issue, it is an issue for everyone. And usually, by participant number three, at the very latest, people go, “Oh, shit. Okay. Oh, okay. It wasn’t that the first person was stupid and the second person is stupid, because this is really starting to look like a pattern here.” And usually people say, after test participant number three, “It has already been worth it, just seeing these people do this. Just… ”
16:29 TW: And then they say, “Now we’ve got it figured out, can we stop now?” And you’re like, “No, no, no. We need to get through five or six.”
[laughter]
16:33 EA: Yeah, that’s also… Do a whole day. Then you can see what are the issues that keep coming back, and then you need to decide, of course, as a business and as an expert: “Okay. What are really stopping issues? What are things that would actually hold people back from, for example, buying your stuff? From completing the checkout? And what are minor irritations?” I’m a big fan of fixing everything, but if you need to prioritize, then you, of course, need to fix stoppers first.
17:03 MK: So one question I wanted to ask, because this happens in my company quite a bit when we’re talking about qualitative anything, there’s always this thing about the sample size. It always is an issue about getting buy-in. And it sounds like bringing people in the room is the best way to overcome that. But, there’s always gonna be stakeholders in the business who can’t physically come and watch, and for them, they’ll always be like, “Oh, yes, so five people complained about this. Well, we have five thousand customer service contacts about this other issue. So clearly, that’s a much more important issue.” How do you typically manage that type of discussion, when it comes… What’s your advice?
[laughter]
17:47 EA: This is a very… It’s a tough question, and it is true that qualitative research is a tough sell, because it’s hard to come in and say, “We’ve seen five or 10 people do this, or fail here.” Now, what we sometimes do to help hammer the message home, so to say, is we make “Best Of” video clips. Actually, it’s “Worst Of”. And when you see person after person fail at the same thing, and it’s in a compilation video, that already gets a quite, “Okay, okay.” You hear potential customers say things like, “Wow, they really didn’t think this through, did they?” It’s not me saying that, not the expert that you’re paying, where you might think it’s just my opinion. No, this is an actual customer or a potential customer, and you can feel the disappointment in their voice. And if you share those videos, that already very often says something. Also, it can help you if you’re not willing to… And I completely understand, if you have… Like you, Moe, you have the numbers to basically let the user test data inform your A/B tests. And I go, “Okay, let’s use the 5,000 complaints from customer service. Let’s test that. But let’s also, indeed, test the stuff that we saw in our qualitative research, and see if we can fix that.”
19:09 TW: I feel like you said earlier that doing user testing poorly can be very, very damaging, and I think a key to that is the actual person and their skills. And I know you’ve got some… What is the role of the moderator? How does that work? What are the things they absolutely should not be doing versus what do they need to do?
19:29 EA: Yeah, I think a lot of this comes down to the fact that people think moderating a user test is like doing an interview. It’s like you’re asking this test participant, you’re asking this person questions, when really, that’s just not the case. In an interview, you’re after a person’s opinion. You wanna know what they think and you wanna know… You wanna hear them talk. You wanna have them express their opinion the whole time. That’s not what you want in a user test. You’ll talk to the person beforehand, before you start the actual test. And the point of that, that is a sort of interview, where you find out something about this person. Are they very nervous, or are they happy to chat? What’s their situation with regards to your product? So you can hark back to that when you’re setting the tasks. That’s important distinction number one. In a user test, you don’t ask questions, you set tasks, because you’re not after opinions, you’re after observations. You want to observe their behaviour.
20:33 EA: So you wanna have them do something on your website, or on your app. You don’t wanna ask them something about the app. You want them to use it, so you can observe that. And as a moderator, while that sounds super easy… [chuckle] And I think this is the reason why so many people call user testing useless testing, because it sounds so easy. “You just ask them to do something and they do it, and then we’re done!” No, because the way you ask it is very important. It starts in that pre-test interview, when you welcome people into the room. I always cite Jared Spool, another one of our oldies. His analogy is that you need three personalities. And the first one is that you need to be like a flight attendant. You have to set your test participant at ease, you have to make sure this person is comfortable, you have to offer them a drink, talk about the weather, how their day was.
21:28 EA: In Belgium, we talk about traffic the whole time, because traffic is shit the whole time. So you have something to gripe about together. [laughter] It really bonds. So that’s it. This person has to be comfortable sitting next to you, so they don’t feel judged. That’s a very important thing. You have to be a people person. And while you can learn that, it is also something that you just have to be. We do a lot of courses on moderating as well. And sometimes, I sit next to someone as a mock test participant, to let them practice moderating. And you can just feel uncomfortable, because the person moderating is not comfortable. So, yeah, that’s basically the first thing. You have to be a people person, and you have to set the participant at ease.
22:13 EA: At the same time, what you need to do is… I talked about the observation room where people can follow the test. I love it when… Oh man, I like to get a really big room full of our clients watching the test, because that way I know I have a lot less work to do afterwards, because basically, they’re pretty much convinced; they’ve seen it with their own eyes. As a moderator, you’re sitting right next to your test participant. You can see everything that they’re doing, and you can also see their face really well. And sometimes they’ll say something, or they’ll point to the screen and look at you. People in the observation room won’t get that, because the participant isn’t using their mouse, so there’s no mouse tracking on the screen, and they’re not looking at the screen, so there’s no eye tracking. So you need to… I call it, you have to be a sportscaster, you have to be a reporter, so the people in the observation room can really know what’s going on in the testing room. So that’s something, also.
23:10 TW: If somebody looks at you, are you literally making a note right then and there, or how is that being recorded?
23:16 EA: Well, I prefer to have an extra expert in the observation room, and let them do the primary note taking. However, when I am moderating, I do like to take notes as well. And this is, again, something that you talk about with the test participant in your pre-test interview. You tell them, “I’ll ask you some questions. I’ll let you do some stuff on the site, and everything is being recorded, this and that. I’ll also be making notes, but don’t worry about that.” And the way you sit next to the participant, the way you take your notes… If you’re hammering away on a keyboard, that’s no good. [laughter] I sometimes just make paper notes, which, again, makes the process take a bit longer in the end, but it’s the least intrusive for the participant. And, of course, they should never see your notes, and you should never write stupid stuff like, “Failed again.” [laughter] Even though sometimes there’s fail after fail after fail. So, yeah, I always prefer to have two sets of notes. Even though you have the recording, two sets of notes.
24:30 TW: Awesome.
24:31 EA: And that’s basically also what Jared says: you have personality type number three. You are a scientist. You are a researcher. You’re not just sitting there and talking to this person. There is actual gathering of data here. It’s just in a very different way than with heat maps or with Google Analytics, but it is data that you’re gathering. So, yeah, that’s a very important one as well.
24:55 TW: Well, that’s interesting. We say qualitative, and it is qualitative, but I think in web analytics, the commonly used phrase is, “We don’t wanna ask people what they think. We’re gonna watch what they do,” and this is the same thing. You’re not asking them what they think. You’re not asking for opinions. You’re still doing behavioral tracking. You’re still, as much as possible, watching what they do, not at the scale of everyone on the website, but you’re getting a whole lot more context with what they’re doing, so it’s what they’re doing, but in a much, much richer data set with a smaller sample.
25:35 EA: Exactly.
25:36 TW: So it’s actually a closer cousin to web analytics than, I think, sometimes it gets treated.
25:40 EA: Oh, man. I’m…
25:41 MK: Can you just bridge the gap, though, in that I often hear people say user test… The web analytics… Analytics. Who calls it web analytics anymore, Tim? Honestly. [laughter] Digital analytics helps you understand the what and the how or something along those lines, and user testing is the why, so where does the real insight for you come from the…
26:08 TW: Is the user testing the why?
26:11 MK: Well I don’t know. I thought so. What do you think? Wait. Let me ask my question first, and then maybe you [26:17] ____.
26:17 TW: Okay.
26:18 MK: But I thought it was the why. Why are people struggling? So how do you capture that if it’s like, “Okay they’re there. They can’t get through this bit of the checkout form… ” I always go back to the checkout. It’s my default. Or if they’re stuck on this thing, is it because they’re talking to you as they’re stuck? Is that what bridges the gap?
26:37 EA: Mm-hmm. And, see, I love what you said there, Tim, even though you said web analytics, [laughter] that user testing is a much closer cousin to analytics than a lot of people think, because, indeed, it is not about opinions. It is about behavior. And, yeah, sometimes we can see something in analytics: “Oh, obviously, there’s an issue with this step. Why is this an issue?” Either you can go and look at the step as an expert, or you can have a look at the user sessions. And sometimes user session recordings can be quite valuable, because you just see people… When you see a mouse hover between two fields the whole time, or between two items in a menu, it means it’s unclear what the difference is. People are hesitating. Why are they hesitating? What would be a clearer term? Ah, maybe that is something that you can dig deeper into with user testing.
27:34 EA: User testing, basically… When people go through a step and something is wrong, they’ll make a sound, or they’ll say something. They’ll say something like, “Meh.” Or just go, “Meh!” Or they’ll say, “That’s annoying,” and you’ll be like, “Annoying?” That’s all you have to say! You don’t have to… “Annoying?” Which basically means, “Why are you calling it annoying?” And they’ll go like, “Well, yeah, it’s really annoying that I have to… ” And then they start talking the whole time. It’s not that you have to encourage them to speak out loud the whole time. This is, again, the often misinterpreted think-aloud protocol: “I’m clicking here. I’m opening a new browser window.” No, it’s not like that. But when something happens, people just go, “Well. [28:18] ____ Oh, yeah, I hadn’t expected this to happen. I thought we were gonna go and do… ” Aha! As a moderator, you just react to actions, to the behavior of your test participant. And, basically, the best thing is to never…
28:35 EA: …ask a direct question, or to ask direct questions as little as possible. Very often you just echo what they say. And there’s a video from Jakob Nielsen that shows all of these different techniques. The test participant is very willing to help you out, and to explain why something is going wrong, because that’s what, indeed, Moe, like you said, that’s what we’re after: “Why is this thing happening?” Or, “Why do they think they’re exhibiting this behavior?”
29:01 TW: Well, so what is the… I hate to say ideal, but what’s the typical… I think I’m still a little unclear, and maybe it’s always situational, but if you have in your arsenal… You’ve got A/B testing. You’ve got digital analytics. And my reason was, I was trying to… To me, web analytics is a subset of digital analytics, and just trying to be more specific. [laughter] Digital analytics is starting to be the all-encompassing, all types of data. But you’ve got A/B testing. You’ve got digital analytics. You’ve got user testing. What’s the sequence, or which one feeds the other one? Can they all feed each other? Can you do an A/B test and say, “We can’t explain why this is different, so we drive user testing, or we see something in digital analytics, and it drives user testing, or we do user testing and then put the measurement on it?” Is there an ideal setup, or inter-relationship between those different data collection analysis tools?
30:03 EA: Well, I would say it’s definitely, yes, situational. Now, if you have a website, or a product, an app, that has been up and running for quite some time, it is always, always, always a good idea to have a look at the analytics first. Because that informs you, sometimes, of the places that you definitely have to make sure are part of your user test, of your usability test. So, indeed, let’s go to the checkout. Well, if you see there’s a problem in a certain step of your checkout, then that’s something to pay extra attention to during the user test. You should never inform the user of this, of course; that would be totally damn silly. “We’re particularly after your input on the checkout in step two.” Some people do this, which is why I say it. So, God, no. [laughter] This is not the way to go. But for you as a researcher, you need to know, “Okay, I need to pay extra close attention here, because everything that the user does here is interesting for me to watch, and interesting to observe, and interesting to get something out of.” So I would say if you have analytics, that is always a great first go-to.
31:13 EA: Sometimes user surveys, targeted surveys, can also help to get a bit of that qualitative input first, before you dive into the user testing. And how fast it is necessary to go into moderated user testing really depends; I always say it very much depends on the complexity of the product or the service that you’re selling. If you’re a straight-up e-commerce site, then user testing is still very interesting, but it is maybe not the first thing that you need to do. You can already achieve quite a lot with the other methods: with, indeed, heat maps, with watching user session recordings, with your analytics, of course, and your surveys, and then doing A/B testing based on your hypotheses from that. But when you have quite a complex service or product, then making sure that you understand whether the user actually really understands what you’re trying to sell them, what you’re trying to offer them, and the steps they need to go through, that is super important. And that is very often not something analytics can inform you of. That is something for which you really need to go one-on-one with people.
32:28 TW: And I guess, as you were talking, it also makes sense then… In the example earlier, where if you’re going to launch something, you don’t have the analytics data. There’s still value in saying, “This should be baked into our development process: we get far enough along that it’s functioning, now’s when we pull in users, and have them check it before we roll it out to the masses.” And that’s a way to really minimize some risk, because you haven’t yet rolled it out to actual customers who want to do something, right?
33:02 EA: Absolutely. And this is why my basis is in usability, and in information architecture. As we said, I’ve been doing this a long time; we’ve made websites for organizations where it was their first website. So there is no data whatsoever, and doing user testing on a prototype, before even one line of proper HTML or whatever has been written, is super valuable. Because you can… Thank God I haven’t had to do this very often, but sometimes you can just bin that prototype. You can just go, “Okay, this is test participant number three on this, and it stinks. It’s bad. We need to totally rethink the structure of this. We need to totally rethink our approach to this.” And while that is no fun to do, it’s a lot more fun to do that when you’re just prototyping and have wireframes than when you have everything basically up and running in shiny colors. So at the start of a project, super important to do user testing.
34:05 TW: And I could see it for companies, they’ve had a website, but they’re saying, “We’re gonna pivot it to be a responsive design,” or, “We’re gonna roll out a mobile app.” There’s still, even in 2018, I think, plenty of entirely new experiences where it would make a lot of sense to say, “Do the subset, do the prototype, do the mock up, before you pull the trigger on full on development.” So it makes a lot of sense.
34:30 MK: Els and I were actually at a conference recently where the Head of Growth at Instagram was talking about exactly this. They completely redesigned their onboarding flow overnight, went from one flow to the other flow. He talked about having a heap of lessons out of that: for one, they didn’t know what was wrong, which bit of this completely new flow users were struggling with. This is a clear case where user testing might’ve helped solve some of those problems.
35:03 EA: I think very often people still think, “Oh, it’s onboarding, how difficult can it be? It’s log-in and registration, for Pete’s sake.” Still, yes, there are best practices, but if you’re a company like Instagram, and this is quite an important page, [chuckle] my God, do a couple of days of user testing to make sure, and to do those tweaks.
35:28 MK: So do you think user testing can be done well in-house? Particularly in the case where it’s not a person’s sole job, like a person’s an analyst, and every once in a while, they’re gonna go run some user… Have you ever seen this work well? Do you have any thoughts, feedback, opinions? Hopefully strong ones.
35:49 TW: I wanna broaden that a little bit, ’cause I think it’s a good kind of a wrap question, as we’re gonna be running short on time. But broadening it: on the one hand, we hear somebody who’s super knowledgeable, Els, and it could be terrifying, like, “Oh, crap, where can I start? Do I have to go get a huge budget and make a case? Can I do a little bit of this myself? But, wow, it sounds like I could fuck that up, which would be bad.” So what’s a getting-started… And, Moe, I don’t know if I’m completely hijacking your question, but I think it’s a similar form of… We’ve talked a lot about if you have an expert who can do this, and when and where. Is there a, “How can I dip my toe into this? How can I do a little bit of it safely, and get a little bit of value? Where should we actually start?”
36:40 MK: I kind of wish people could see Els’ face right now.
[laughter]
36:45 EA: I totally forgot. There you go. Oh, yeah. I’m okay. Yeah, I see what you’re saying. Well, I think you can hone your moderator skills. Again, I do think there has to be an innate quality that you have to have in order to do it. There are some people on my team who are naturals at it, where we just sharpened the skill, and there are some people that I wouldn’t let near a user if they were the last person on Earth to be moderating, because they just can’t. That’s just how it is. I think, basically, what is a very important thing if you’re a user test moderator, is also a good knowledge of usability. And so I would say, read both of Steve Krug’s books. Read his “Don’t Think…
37:37 TW: “Don’t Make Me Think”, and…
37:37 EA: Yeah.
37:38 TW: Wait, both books are “Don’t Make Me Think”, and what’s the other one?
37:42 EA: “Rocket Surgery”.
37:42 TW: [37:42] ____. [laughter] It’ll be in the show notes.
37:46 EA: [37:46] ____ Rocket Surgery [37:47] ____ something like that. “Rocket Surgery Made Easy”, that was it. See, wasn’t hard! “Rocket Surgery Made Easy”. So I would say read both of those, because the second one goes into the specifics of user testing a bit more. Then there’s a lot of practicing. And do practice on your partner, and practice on colleagues. But when you’re doing user testing for real, don’t use your partner. [laughter] Don’t use your mother. I’ve done that. But for practice, they’re fine. They’re good enough.
38:19 TW: Awesome. So I think as always, looking through the various things that we wanted to touch on, we’ve barely scratched the surface. But given the fact that normally I’m the one going… Making weird facial expressions at Michael, trying to get him to move towards a wrap, I think we have to move towards a wrap, which we will do by the little thing we do on this show called a Last Call. Last round of drinks, go around and everybody shares something they’ve seen that’s interesting, or amusing, or share-worthy. And Els, since you’re our guest, are you up for going first on that front?
38:56 EA: Well, you mentioned Jakob Nielsen in the beginning, Tim, and I know that it is deeply uncool to still like Jakob Nielsen, but I do. Fudge it, I totally do. And I think if you’re serious about getting into user testing, basically read what they have written, the Nielsen Norman Group, about user testing. Watch those, yes, slightly dorky videos that they sometimes make. And yep, they’re dorky, but seriously, the content in there is totally good. So set aside your coolness, and go for old school. Everything Nielsen does on user testing.
39:33 TW: That’s awesome. I actually have not heard or thought of him in a while, until this episode came up. And he has been doing it for a long, long time as well, and out there sharing. So, awesome. Moe, what have you got?
39:48 MK: I’ve got one, but the issue is that normally I’m like, “Oh, I love this thing,” and I want to talk about it, ’cause I’m excited. And this time I’m like, “I still don’t know what I think.” So I’ve been listening to This Week in Machine Learning, which Jim Sterne recommended. I feel like it’s This Week in Machine Learning, and something else on the end.
40:09 TW: And AI. It’s This Week in Machine Learning…
40:14 MK: And TWiML. TWiML or something. I’m halfway through episode three, and I’m like… So also a tip if you decide to give it a crack: fast-forward the first 30 seconds, ’cause it’s mainly just absolute shit. Like, “You should check out this, or you should do this,” and I have really low patience. The reason that I wanna bring it up is because I wanna hear which episodes other people loved, so that I can listen to those, because I’ve listened to a couple and I’m like, “Yeah.” I’ll get snippets that I think are really useful, and then there’s bits that I’m like, “I’m not sure about this,” or it’s super technical and a bit over my head. So I actually bring it up because I’d love to start a bit of a discussion and hear some other people’s thoughts about what they’re getting out of it, so I can home in on that.
[chuckle]
41:01 TW: That’s awesome. So, yeah, tweet @analyticshour, and let us know which episodes of TWiML. I would love that as well. So mine, I will do… I’m going to wind up doing a twofer, ’cause one of them’s just kind of a little silly.
41:14 MK: You always do a twofer.
41:16 TW: That’s just my thing. One’s kind of silly, one’s kind of cool. I get joked about as being the podcast junkie, and I’m definitely a Gimlet Media junkie. So, in the States, ABC has Alex, Inc., which is a new show starring Zach Braff, based on Alex Blumberg’s experience leaving NPR and going out and starting a podcast company. I love Zach Braff. He’s just a funny, quirky dude, and I’m a huge fan of Gimlet Media: StartUp, Reply All, Homecoming, their shows. So it’s interesting to see a podcast actually inspire a broadcast television sitcom. We’ll see if it makes it past one season. It’s available on Hulu. That’s my funny one. The other…
42:03 MK: Australians don’t have Hulu, so Australians, I recommend you Google how to get this.
42:05 TW: You know what? When you’re here for Marketing Evolution Experience, we’ll just hole up and binge watch an American sitcom.
42:10 MK: Yeah, yeah!
42:11 TW: That’s what you want to do in Vegas. Everybody in Vegas sits and watches family sitcoms. The other one I stumbled across is a website called fronkonstin.com. Have you heard of this? It’s basically a guy who just likes to do cool, random things with R, I think mostly. One specifically, the first one I found, was taking the Traveling Salesman Problem and using that to actually take an image and draw, basically, a line drawing of the image, which I’m not describing well at all. But do you know the Traveling Salesman Problem? A person has to start at point A, and they have to end at point A, and they have to hit all these other destinations: so what’s the math to make that the shortest path possible? So what he did was take images, convert them to greyscale, then randomly drop a bunch of points out of that, and then apply the Traveling Salesman algorithm, and it winds up with a line drawing of the face or the image. It’s cool. He’s got all sorts of other little examples, and he posts the code, so you can actually go play with it yourself.
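For anyone who wants to play with the idea without digging into the R code on fronkonstin.com, here is a rough Python sketch of what Tim describes: sample points from the dark areas of a greyscale image, then draw one continuous line through them. A greedy nearest-neighbour pass stands in for a real Traveling Salesman solver here, and the file name and point count are just placeholders:

import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

def tsp_line_drawing(image_path, n_points=800, seed=42):
    # Convert to greyscale and weight each pixel by its darkness.
    img = np.asarray(Image.open(image_path).convert("L"), dtype=float)
    darkness = 1 - img / 255.0
    probs = darkness.ravel() / darkness.sum()

    # Randomly drop points, more likely to land in dark regions.
    rng = np.random.default_rng(seed)
    idx = rng.choice(img.size, size=n_points, replace=False, p=probs)
    ys, xs = np.unravel_index(idx, img.shape)
    pts = np.column_stack([xs, ys]).astype(float)

    # Greedy nearest-neighbour tour as a cheap stand-in for solving TSP:
    # always hop to the closest unvisited point, then close the loop.
    tour = [0]
    remaining = set(range(1, n_points))
    while remaining:
        last = pts[tour[-1]]
        nearest = min(remaining, key=lambda i: np.hypot(*(pts[i] - last)))
        tour.append(nearest)
        remaining.remove(nearest)
    tour.append(tour[0])

    # One continuous line through every point is the "drawing".
    ordered = pts[tour]
    plt.plot(ordered[:, 0], -ordered[:, 1], color="black", linewidth=0.5)
    plt.axis("off")
    plt.show()

tsp_line_drawing("face.jpg")  # "face.jpg" is a hypothetical input image

A proper TSP solver would give a cleaner single stroke; the greedy tour occasionally crosses itself, but the image still comes through.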
43:21 MK: You are a massive nerd.
[laughter]
43:27 TW: It takes one to know one, and I’m pretty sure you’re looking at this. If it wasn’t hard to spell, weird to spell, you’d be looking for it right now, and I’d lose you for the rest of the show. So, with that rambling: Els, thanks so much for coming on. This has been great; I’ve enjoyed the discussion. I wish we had time to ask 100 more questions. But for our listeners, what do you think? We’d love to hear from you. You can reach us on our Facebook page, on Twitter, in the Measure Slack team, or via carrier pigeon, if you still use the term web analytics and therefore that’s the sort of mechanism you use. You can reach Els on Twitter at @els_aerts: that’s E-L-S, underscore, A-E-R-T-S. Is that the best way for them to reach you?
44:16 EA: Yeah, that’s a [44:16] ____.
44:17 TW: Is it? That’ll work? So get on out there and research some users. And as always, for Moe Kiss and Michael on the mend, keep analyzing.
[music]
44:32 S?: Thanks for listening, and don’t forget to join the conversation on Facebook, Twitter, or Measure Slack Group. We welcome your comments and questions. Visit us on the web at analyticshour.io, facebook.com/analyticshour, or @analyticshour on Twitter.
[music]
[background conversation]