Building for Better Data



Overview

Protobrand is a full-stack market research consultancy that builds its surveys in-house to collect System 1 data. The question types were originally contracted out to multiple developers without design input almost a decade ago, leaving the respondent-facing side of the survey with poor usability and affecting the quality of the data the company received.

One of my first projects at Protobrand was to tackle this issue. I worked closely with both my team (engineering 💫) and the market research team to understand the intent behind each question’s design and our data needs.

While the goal was to make Protobrand’s surveys a breeze to take, my most important objective was to base every design on research and best practices.




August 2019 – ongoing
Expected completion: June 2020
Role:
I am leading this project alongside Hongyu Zhou (technology director) and Kenneth Ott (research director).
Result:
⏳This is an ongoing project ⏳


Research

The product team (at the time: two back-end developers, the research lead, the CEO, and myself) adopted a mobile-first approach to the UI revamp.


I took a look through Protobrand’s most common question types on mobile and noted down basic UI frustrations. Since everyone in the office was already familiar with the survey design, I booked time with other offices in the building to conduct user interviews. Each interview involved the test-taker, myself, a dummy version of our current survey, and a colleague to take notes. I observed and asked questions as the user went through the survey, with a goal for each question in mind. I conducted a total of 12 interviews, which uncovered some key grievances.


① Basic UI frustrations
As a product designer, it was clear to me which elements negatively affected usability. Here are just a few:



Single Matrix:
Matrix questions had the look and feel of a desktop design squeezed onto a mobile screen. If there were too many columns to fit, the user was forced to scroll horizontally.
Long Response:
25 words is quite a lot to type, especially if you’re counting. The lack of a progress indicator on this question makes it frustrating to answer.
Video Response:
Some surveys ask users to record themselves answering a question. The recording viewport is quite small (see cactus) and gives users little sense of what they are actually recording.


② Key Grievances
We found that users’ greatest issues during the research phase were being unable to recognize a handful of question types and feeling overwhelmed by the number of steps a question required. Here are some examples:



Grouping Exercise:
This question asks users to group all available images/statements according to how they see fit. There’s a lot going on here— hard to say what needs to happen first!
Image Ranking:
Users are asked to rank selected images according to which best reflects their view of a given brand or product. The instructions were straightforward, but the UI was not.
Ranking:
Users can also be asked to rank statements of varying lengths, which introduces the problem of screen real estate, especially on mobile.


Research Retrospective

The team and I took a step back to read through our findings following the interviews. It seemed that there were changes that could be made across the board as well as some specific to different question types.


For example, we wanted users to scroll as little as possible and to ensure that the Next button was easy to find. We also needed to improve feedback across the board and simplify the more complex questions.

Outdated and unnecessarily complicated existing question types were also tossed out during this phase.



Wireframing
August – December 2019

Protobrand lacked a front-end developer at the start of this project, so in the meantime I focused on designing our most common question types so they would be ready for implementation, satisfying our MVP.


Here were a few notable designs:


Long Response

The area of improvement for this question type was to indicate in some way that the user has met the minimum word count (25 words, or any configured number). In the past, the product team had avoided adding a word counter because they wanted users to write as much as possible rather than risk cutting their responses short.

We approached this issue in a couple of ways and A/B tested locally within the building to determine which worked best.

According to our results, simply enabling the Next button once the field has been satisfied is an elegant and impactful way to enhance usability.
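The gate described above can be sketched in a few lines. This is a minimal, hypothetical sketch (function names and the 25-word threshold are illustrative, not Protobrand’s actual code): the response is split into words, and the Next button unlocks only once the minimum is met.

```typescript
// Sketch of the word-count gate: Next stays disabled until the
// response meets the minimum word count. Illustrative names only.

function countWords(response: string): number {
  // Split on runs of whitespace; filter out empty strings
  return response.trim().split(/\s+/).filter(Boolean).length;
}

function isNextEnabled(response: string, minWords: number = 25): boolean {
  return countWords(response) >= minWords;
}
```

One nice property of this approach is that no counter is shown at all, so users who want to keep writing past the minimum aren’t nudged to stop.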




Grouping Exercise

One of our most complex question types! In this example, users are asked to group all available images according to how they see fit.

I checked out a number of different apps and services to find the best way we could design this question for clarity. The end result was to split the question into a series of steps with clear labels.







Testing, Implementation, and Some Unforeseen Issues

Screens went through multiple rounds of testing during both the user testing and implementation phases. Thanks to Figma Mirror, we were able to create responsive prototypes to test on our building-mates. Users were again observed as they completed a dummy survey of 20 common question types, this time with the updated designs. The team would then regroup to discuss successes, pain points, and the next course of action.


Designs that passed our user tests moved on to implementation, where a handful were found to be nearly impossible to build because of technical debt buried deep in the codebase. The engineering team and I were thus forced to reevaluate each one’s importance to the MVP and seek creative solutions for these question types.

The following is a redesign of one of the most personally challenging questions, the Check All Matrix:


Check All Matrix

Version 1

The Intention:

Users can swipe between columns to select their answers. The Next button becomes enabled when all columns have at least one answer.
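The Version 1 rule can be sketched roughly as follows (a hypothetical sketch with illustrative names, not the actual implementation): the matrix is a map from column labels to the answers checked so far, and Next unlocks only once every column has at least one selection.

```typescript
// Sketch of the Version 1 rule: Next unlocks only when every
// column has at least one checked answer. Illustrative names only.

type Matrix = Record<string, string[]>; // column label -> checked answers

function isNextEnabledV1(columns: Matrix): boolean {
  return Object.values(columns).every((checked) => checked.length > 0);
}
```

Note that this rule treats an empty column as an incomplete answer, which is exactly the assumption that later proved problematic.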

The Problem:
This design passed user testing but proved problematic during implementation. We found that it really only works with 4–6 columns to satisfy, and even then only if an answer in every column is required. Leaving a column unanswered is fair game in a Check All question.




Version 2
Our Solution:

We simplified the matrix question by breaking it into expandable and collapsible sections. Users must still scroll, but vertical scrolling is more comfortable and expected than horizontal.
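A minimal sketch of the Version 2 state might look like this (a hypothetical shape with illustrative names; the production model may differ): each section tracks whether it is expanded and which answers are checked, and a section is allowed to stay empty, which resolves the Check All problem from Version 1.

```typescript
// Sketch of the Version 2 model: one collapsible section per column,
// with selections allowed to stay empty. Illustrative names only.

interface Section {
  label: string;
  expanded: boolean;
  checked: string[]; // may legitimately stay empty in a Check All question
}

// Toggle one section open/closed without mutating the original array
function toggleSection(sections: Section[], label: string): Section[] {
  return sections.map((s) =>
    s.label === label ? { ...s, expanded: !s.expanded } : s
  );
}
```

Because nothing gates the Next button on per-section selections, the model no longer conflates an empty column with an incomplete answer.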




A/B Testing

We plan to A/B test the new and current designs following implementation, using real samples Protobrand has access to. Testing was originally scheduled for April 2020; our engineering team is working hard to build these question types so we can test as soon as possible.


︎ WIP ︎

This project is currently in progress— please check again later!

Made with ❤ by sukanyaray_
#blacklivesmatter