Protobrand’s Survey UI: Mobile




Protobrand is a full-stack market research consultancy that builds its surveys in-house to collect System 1 data. These surveys are sent via third-party sample providers to respondents, who vary widely in age and background. One of my first projects at Protobrand was modernizing its survey UI. 


Timeline:
August 2019 (ongoing); completion expected June 2020
Role:
I am leading this project alongside Hongyu Zhou (technology director), Kenneth Ott (research director), and Luke Westerfield (front-end developer). 
Result:
We are in the midst of A/B testing!



The Problem

Protobrand’s survey UI is outdated, causing frustration amongst users.  


Protobrand’s survey UI was contracted out almost ten years ago and built with poor development practices. The product team quickly found that, in addition to being unfriendly to users, the survey’s structure was hard to change. The UI had also accrued complexity in both its design and its code over the years, making this a particularly tough project. 

The team knew the survey needed a reskin, as respondents would often mention their poor experience with it, and it’s easy to see why. Protobrand’s surveys had a bunch of unfamiliar question types with unclear ways to answer them. Furthermore, many of these questions were desktop versions smushed into a mobile screen. 😬


Grouping Exercise:
This question asks users to group all available images/statements as they see fit. There’s a lot going on here; it’s hard to say what needs to happen first!
Image Ranking:
Users are asked to rank selected images according to how well each matches their view of a given brand or product. The instructions were straightforward, but the UI was not.
Ranking:
Users can also be asked to rank statements of varying lengths. This introduces the problem of screen real estate, especially on mobile.



The Solution

Identify and resolve UI issues with research and continuous testing.




The Success Metric

Create more enjoyable survey experiences while maintaining our quality of data.  


Because we did not have issues with quantitative measures like drop-off rates, we decided to focus on reskinning the question designs to provide a more enjoyable survey experience for respondents, without impacting our quality of data. 



Before any design work took place, I researched every question type we had. And we have a lot. Here is a list from our testing stage: 



In addition to meeting with various members of the research team, I also built my own surveys to understand the industry’s needs and get a grasp of the survey “flow”. I found that surveys are long and boring, and added frustrations like horizontal scrolling and a lack of feedback make them feel even longer.

I sat with five people outside of Protobrand, including people from other offices in our building, friends, and family, and observed them as they went through variations of different surveys, noting down their frustrations. Here are a few:


Matrix:
Matrix questions had the look and feel of a desktop design squeezed into a mobile screen. If there were too many columns to fit on the screen, the user was forced to scroll horizontally.
Long Response:
25 words is quite a lot to type, especially if you’re counting. The lack of a progress indicator on this question makes it frustrating to answer.
Video Response:
Some surveys ask users to record themselves answering a question. The recording viewport is quite small (see cactus) and does not give a clear view of what the user is recording.



Following the interviews, the team and I took a step back to read through our findings. Some changes could be made across the board, while others were specific to particular question types. 

For example, we wanted users to scroll as little as possible and the Next button to be easy to find. We needed to improve feedback across the board and simplify the more complex questions.

Outdated and unnecessarily complicated question types were also tossed out during this phase.



Following research, the engineering team and I rolled out our design process for this project. 


As we lacked a front-end developer at the start of this project, the team and I decided on a process by which designs would be wireframed, tested, and approved. 


Long Response

The area of improvement for this question type was to indicate when the user has satisfied the minimum word count of 25 (or any other number) words. In the past, the product team avoided adding a word counter because they wanted users to write as much as possible, rather than risk cutting their responses short.

We approached this issue in a couple of ways and A/B tested locally within the building to determine which worked best.
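
As a rough illustration of one possible variant (not the production implementation; the names and copy here are hypothetical), here is a minimal TypeScript sketch of a remaining-word countdown that flips to encouragement once the minimum is met, so respondents aren't nudged to stop at exactly 25 words:

```typescript
// Count words the way a respondent would: whitespace-separated tokens.
function countWords(text: string): number {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

// Update the feedback line under the textarea on every input event.
// `minWords` is configurable per question; 25 is the common default.
function updateFeedback(
  textarea: HTMLTextAreaElement,
  indicator: HTMLElement,
  minWords = 25
): void {
  const remaining = minWords - countWords(textarea.value);
  indicator.textContent =
    remaining > 0
      ? `${remaining} more word${remaining === 1 ? "" : "s"} to go`
      : // Flip to encouragement rather than a hard count, so respondents
        // aren't nudged to stop writing at exactly the minimum.
        "Minimum reached - keep going if you have more to say!";
}
```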






Grouping Exercise

One of our most complex question types! In this example, users are asked to group all available images as they see fit.

I checked out a number of different apps and services to find the clearest way we could design this question. The end result was to split the question into a series of steps with clear labels.
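
As a rough sketch of what that stepped structure implies (the step names and state shape here are hypothetical; the real screens were prototyped in Figma rather than code):

```typescript
// Hypothetical three-step grouping flow: name groups, sort items, review.
type GroupingStep = "name-groups" | "sort-items" | "review";

const stepOrder: GroupingStep[] = ["name-groups", "sort-items", "review"];

interface GroupingState {
  step: GroupingStep;
  groups: Map<string, string[]>; // group label -> item ids
  unsorted: string[];            // items not yet placed in a group
}

// Only allow moving forward when the current step is complete, so the
// respondent always knows what needs to happen first.
function canAdvance(state: GroupingState): boolean {
  switch (state.step) {
    case "name-groups": return state.groups.size > 0;
    case "sort-items":  return state.unsorted.length === 0;
    case "review":      return true;
  }
}

function advance(state: GroupingState): void {
  const i = stepOrder.indexOf(state.step);
  if (canAdvance(state) && i < stepOrder.length - 1) {
    state.step = stepOrder[i + 1];
  }
}
```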






Screens went through multiple rounds of testing during both the user testing and implementation phases. 


Thanks to Figma Mirror, we were able to create responsive prototypes to test on our building-mates. Users were again observed as they completed a dummy survey of 20 common question types, this time with the updated designs. The team would then regroup to discuss successes, pain points, and the next course of action.

Designs that passed our user tests moved on to implementation, where a handful were found to be nearly impossible to build because of technical debt buried deep within the codebase. Thus, the engineering team and I were forced to reevaluate each one's importance to the MVP and seek creative solutions for these question types.

The following is a redesign of one of the questions I personally found most challenging, the Check All Matrix:


Check All Matrix

Version 1

The Intention:

Users can swipe between columns to select their answers. The Next button becomes enabled when all columns have at least one answer.

The Problem:
This design passed user testing but proved problematic during implementation. We found that it really only works with roughly 4-6 columns, and only if an answer in each column is required. Leaving a column unanswered is fair game in a Check All question, so there is no clean way to tell when the respondent is done.
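
To make the conflict concrete, here is a hypothetical TypeScript sketch (not our codebase) of the completion check behind the Next button; it is well-defined for a required matrix but degenerates for Check All:

```typescript
// selections[column][row] is true when that cell is checked.
type Matrix = { selections: boolean[][] };

function columnAnswered(m: Matrix, col: number): boolean {
  return m.selections[col].some(Boolean);
}

// Required matrix: enable Next once every column has at least one answer.
function nextEnabledRequired(m: Matrix): boolean {
  return m.selections.every((_, col) => columnAnswered(m, col));
}

// Check All: every state is valid, including empty columns, so this
// predicate is always true and the Next button can't signal completeness.
function nextEnabledCheckAll(_m: Matrix): boolean {
  return true;
}
```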





Version 2

Our Solution:

We simplified the matrix question by breaking it up into expandable and collapsible sections. Users must still scroll, but vertical scrolling is more comfortable and expected than horizontal scrolling.
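
A minimal sketch of the accordion-style state this design implies, again in TypeScript with hypothetical names:

```typescript
// One collapsible section per matrix column; any subset of checks is valid.
interface MatrixSection {
  label: string;        // column header, e.g. a brand attribute
  options: string[];    // the rows the respondent can check
  checked: Set<number>; // indices of checked options (may stay empty)
  expanded: boolean;
}

// Expand the tapped section and collapse the rest, keeping the page short
// so the respondent only ever scrolls vertically.
function toggleSection(sections: MatrixSection[], index: number): void {
  sections.forEach((s, i) => {
    s.expanded = i === index ? !s.expanded : false;
  });
}

// Check All semantics: toggling never blocks progress, since an empty
// section is a legitimate answer.
function toggleOption(section: MatrixSection, option: number): void {
  if (section.checked.has(option)) {
    section.checked.delete(option);
  } else {
    section.checked.add(option);
  }
}
```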





We are now in the A/B testing phase! 



WIP

This project is currently in progress; please check back later!




