
Issue 29 // Research and Process

Iteration is an important part of the design and development process. Through iteration, we constantly improve and polish our products by experimenting, evaluating, and adjusting. The same holds true for research: we get better at gathering user insights as we continue to experiment and reflect.

In this issue, the design research team offers a peek into our processes. Laurissa Wolfram-Hvass tackles remote usability tests and describes her quick and dirty process for conducting and sharing test results. June Lee follows up with lessons she's learned from surveying new customers. As always, we wrap up with a list of links that've recently captured our attention.
Editors: Laurissa Wolfram-Hvass, Gregg Bernstein, and Aarron Walter
Artwork: Caleb Andrews
On Twitter: @MailChimpUX

Usability Tests: The Quick 'n Dirty


by Laurissa Wolfram-Hvass
Usability tests are a powerful tool for learning how people really interact with our products and services. They uncover unanticipated issues, and they make us all squirm a bit when we realize a feature or workflow isn’t quite as obvious as we intended. One of the challenges with usability tests, however, is figuring out the best way to pass along our findings to our design and development teams. In Issue 25, I wrote about some of the ways the UX Research team shares our findings with the rest of MailChimp, but in this issue, I’m going to specifically describe how I collect and share usability test findings. 

At MailChimp, our goal is to gather insights about users and pass them along as quickly as possible. With usability tests, though, this can be especially tricky. We could compile findings into a report: reports are easy to skim and glean information from quickly, but they don’t create the same sense of empathy and urgency as actually watching someone struggle. On the other hand, hours of usability testing footage can be tedious to watch. And let’s be honest, is anyone really going to do that? Ideally, we want to create a research deliverable for our usability tests that combines the “skimmability” of a report with the human connection of a video.

Rather than give you a set of hard-and-fast rules for conducting a usability test, I’m going to run through one of the mobile tests I conducted recently and explain my process of:
  • Setting up the usability test 
  • Documenting information during the test 
  • Managing analysis and post-processing
  • Sharing the test findings 
(If you do want specifics for setting up and running a usability test, Usability Testing: Ready, Set . . . Test! and Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests are great resources to get you started.)
 


Setting up the usability test


Not long ago I had a conversation with a MailChimp user who mentioned that he hadn’t heard of our Editor tablet app before, but he was excited to try it out. I was curious about the onboarding flow of the app, especially for a user already very familiar with MailChimp on the desktop. He kindly agreed to participate in a usability test over Skype the next day.

Since this was a remote test of our mobile app, I used the laptop hugging method (pictured below) to view my participant’s mobile screen as he moved through the test. 
With the laptop hugging method, we can conduct remote mobile testing with a bit more ease. Instead of fancy recording equipment, our participants can often use tools they already have.  
My goal for this test was to simply observe how a new user would approach our Editor app for the first time, so the objectives for our 60-minute session were pretty loose: 
  1. Find and download the Editor app. 
  2. Play around or explore as you normally would with a new app.
  3. If there’s time, create and send a MailChimp campaign to yourself. 
I asked my participant to please “think aloud” and walk me through his thought process as he went along. I let him know that I was just as curious to understand what was going on in his head as I was in seeing how he interacted with the app. I also told him that while my goal was to simply observe, I might occasionally ask him to describe what he was doing or what was going through his mind at a particular moment. 



Documenting information during the test


As my participant moved through the test, I quietly watched and listened.  I abandoned my computer and took notes in my iPhone’s Notes app instead, so the tapping of keys wouldn’t distract my participant. I pecked out quick observations of where he struggled, got confused, became stuck, or had questions. 

Along with each observation, I noted the recording time so I could quickly return to a particular section after the video was complete. For me, time-stamping during a usability test is the key to speeding up analysis and post-processing. 
Taking time-stamped notes on my iPhone helps me keep track of any problems I notice during the test and speeds up analysis and post-processing after the test. 
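As a rough illustration of how those time-stamped notes pay off later, here’s a minimal sketch in Python (not part of our actual toolkit) that parses note lines written as “MM:SS observation” into seconds offsets, so each observation maps straight to a point in the recording. The note format and the sample notes are assumptions for illustration only.

import re

# Hypothetical note format: "MM:SS observation text", one note per line.
NOTE_PATTERN = re.compile(r"^(\d{1,2}):(\d{2})\s+(.+)$")

def parse_notes(raw_notes):
    """Turn time-stamped note lines into sorted (seconds, observation) pairs."""
    observations = []
    for line in raw_notes.splitlines():
        match = NOTE_PATTERN.match(line.strip())
        if match:
            minutes, seconds, text = match.groups()
            observations.append((int(minutes) * 60 + int(seconds), text))
    return sorted(observations)

# Illustrative notes only, not real test data.
sample = """03:12 couldn't find the Editor app in the App Store
14:05 unsure whether the app needs a separate login
27:40 app froze while adding an image to the campaign"""

for offset, note in parse_notes(sample):
    print(f"{offset // 60:02d}:{offset % 60:02d} ({offset}s) - {note}")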


Managing analysis and post-processing


At the end of the test, I emailed myself the time-stamped notes, pasted them into an Evernote file, and cleaned them up a bit. 
After the testing session, I copied my notes into Evernote, our research repository.
Next, I imported the video from the test into iMovie. Looking back over my time-stamped notes, I located the sections of the video that dealt with the most critical issues, such as bugs, crashes, or points of confusion. At the beginning of each section, I used iMovie’s simple titling tools to insert a 5-10 second text overlay that briefly described the issue.

iMovie provides several different themes you can use to quickly add consistent-looking elements like text overlays (iMovie calls them opening and ending titles) and transitions. For this particular project, I used iMovie’s “Bright” theme. 
Using iMovie’s “Bright” theme, I add text to describe key issues the participant uncovered during the testing session.
Since I already had a rough time-stamped transcript of the test, this entire process took about an hour and a half. 
 


Sharing the test findings


I exported the completed video to my desktop (File > Share > File), size “Large.” This process took several hours, so I just left it running overnight. 
Exporting edited usability test footage from iMovie. 
The next morning, I uploaded the video to a private Vimeo channel and sent the link, along with my time-stamped notes, to our Mobile Lab team. The timestamps not only list out the most significant issues from the test, but they also give our Mobile Lab team a quick way to locate specific sections of the test without wasting time searching through an hour-long video. 

These videos aren’t particularly beautiful or elegant, and they don’t have to be. They just have to relay information quickly, accurately, and effectively. The faster we can get our research into the hands of our designers and developers, the faster our products improve. 

Surveying Our Surveys


by June Lee
Surveys are a big part of our research process—we’ve even written about them in Issue 4 and Issue 10. They’re a great way to quickly gather feedback from many different people and to identify possible interviewees for more in-depth conversations.

Even after all the surveys we’ve sent, designing them still isn’t a quick task. With each survey, we learn more about how to refine and improve our process. Each quarter, we send out a new user survey so we can understand our users’ motivations for choosing us and assess their onboarding experience. This survey is great for learning about our customers, but it’s also been a valuable tool for reflecting on our own survey process.



Making long-term comparisons


What began last year as an isolated survey to understand why customers sign up with MailChimp has turned into an ongoing project to evaluate sentiment over time. With the MailChimp app changing (and hopefully improving) every 5 weeks, we thought it would be interesting to see if (and how) our customers’ perceptions and motivations change longitudinally.



Writing surveys collaboratively


We always write our surveys collaboratively, so this isn’t really a lesson learned from our new user survey, but it’s still worth mentioning. At the beginning of each new survey, someone from the Research team drafts a list of questions in Google Docs and shares it with the rest of the team. Over the course of a day or two, we work together and make edits, comments, and suggestions before we’re ready to send out a pilot. For bigger surveys, we gather in a room, each of us with the survey pulled up in Google Docs on our own screen. We talk through each question, writing and editing as we go, until we have a version we’re all confident in.



Tweaking questions or word choice from survey to survey


Survey questions can be tricky. If respondents misinterpret a question or don’t understand the terms we use, our results become distorted. Each time we prepare to send the new user survey to another round of customers, we look carefully at the responses from the previous survey. Did respondents answer our questions the way we intended? Could we be clearer? Should we use different words? It’s easy to forget that our perspectives aren’t the same as our respondents’, and new users in particular might not be familiar with email jargon. For example, in each new user survey we ask customers to indicate their top reasons for choosing MailChimp. One of the options we gave them was “Deliverability,” a term we realized new users probably aren’t familiar with. In our most recent survey, we edited “Deliverability” to “Making sure my emails get delivered” and saw that response jump 16%.



Limiting options and prioritizing answers


We like multiple choice/multiple answer questions—they give people options and they give us lots of information. In our new user survey, we try to understand the motivations for signing up for a MailChimp account by asking, “What are your top reasons for choosing MailChimp?” We give participants 14 options to choose from—plus an open response field, just in case we missed something. Originally, we asked respondents to select all that applied, but after running the survey several times, we wondered what would happen if we made respondents prioritize by limiting them to 5 choices. Overall, we found that the percentage for each response dropped, but the ranking of the choices remained about the same. For example, "signing up for a free account" is always the top choice, but it dropped 12% after we limited the options.
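To make that comparison concrete, here’s a small sketch in Python of the kind of check we run between rounds. The option names beyond those mentioned above and all of the numbers are made up for illustration, not our real survey results: percentages drop once people have to prioritize, but the rank order holds steady.

# Hypothetical response rates (% of respondents selecting each option) for a
# "select all that apply" round vs. a "choose your top 5" round.
# Illustrative numbers only, not actual survey results.
select_all = {"Free account": 78, "Ease of use": 65, "Templates": 52, "Deliverability": 30}
top_five = {"Free account": 66, "Ease of use": 57, "Templates": 44, "Deliverability": 25}

def ranking(responses):
    """Order options from most- to least-selected."""
    return sorted(responses, key=responses.get, reverse=True)

# Every option loses a few percentage points once respondents must prioritize...
for option in select_all:
    print(f"{option}: {select_all[option]}% -> {top_five[option]}%")

# ...but the relative ranking of the options stays the same.
print("Same ranking:", ranking(select_all) == ranking(top_five))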



Automating survey scheduling


After refining and sending our new user survey five different times, we've finally settled on a version that can be sent without additional edits or tweaks. With all variables remaining consistent from survey to survey, we can begin to study new user responses over time. Now that that’s taken care of, we’re moving on to our next survey experiment: automating the sending of our survey, allowing us to focus our time on evaluating the data and conducting comparative analyses.
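We won’t detail the automation here, but the basic shape is simple: on a schedule, find the accounts created since the last send and invite each one to the same survey. The Python sketch below is only an illustration; get_new_signups, send_survey_invite, and the survey URL are hypothetical placeholders, not our actual tooling.

from datetime import date, timedelta

# Hypothetical stand-ins; a real version would pull from account data and an
# email-sending service.
def get_new_signups(since):
    """Return email addresses for accounts created on or after `since`."""
    return ["new.user@example.com"]  # placeholder data

def send_survey_invite(email, survey_url):
    """Send (or, here, just log) a survey invitation."""
    print(f"Inviting {email} to take {survey_url}")

def run_new_user_survey():
    # A scheduler such as cron would call this every quarter, so each new
    # cohort receives the identical survey with no manual edits.
    since = date.today() - timedelta(days=90)
    survey_url = "https://example.com/new-user-survey"  # placeholder link
    for email in get_new_signups(since):
        send_survey_invite(email, survey_url)

run_new_user_survey()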

UX Around The Web

Ask Us Anything

We want this newsletter to be a dialogue. If you have questions for the MailChimp UX team about our travel bucket list, the music we listen to while we work (Tyrick is partial to Bob Dylan and Laurissa prefers Ludovico Einaudi), or what kind of equipment we take with us on customer interviews, send them in! Seriously: hit reply and ask us anything. We'll try to answer every email and maybe even share our conversation in future newsletters.
 


© 2001-2014 All Rights Reserved.
MailChimp® is a registered trademark of The Rocket Science Group

