A Multiplatform Case Study for an Inclusive Virtual Try-on Experience

Getting Started:

At Touchlab, it is typical for us to take on leads and projects that are new to us. We’ve worked on hardware, watch faces, all kinds of things, with no previous experience; we embrace the challenge and rise to it. It is atypical, however, for client research to blossom into a philosophical, platform-agnostic, week-long design sprint.

But, we are nerds.

It all started as a simple conversation with people in the retail cosmetics industry. First, we noticed how fun it was to take virtual selfies with funky eyeshadows. Then, we noticed that some virtual try-on apps were waaay better than others. Finally, we got feisty (and immensely curious) when two makeup apps failed to recognize Jeff’s face correctly.

Not quite. 

We saw value in spending time to develop a deeper understanding of the virtual try-on world, really to foster our own awareness of what it takes to accomplish inclusive and feasible design in this space.

Our goal? To design a user-centric e-commerce experience for a single makeup brand in one week.

Cultivating Empathy and Acknowledging Context:

When it comes to people who wear makeup, you cannot generalize into one or two flattened personas; anyone can wear makeup and the reasons behind applying even a simple black eyeliner are fluid and highly subjective.

Matching eyebrows to lids: the things we notice now. More from Ziggy here.

As Nielsen puts it in the 2018 Future of Beauty Report, “There is no one beauty shopper.” It is also important to note that wearing makeup — be it skin tone or neon green — is less related to confidence, vanity, conformity or belonging than some might think.

For most, it is a form of self-care and for many, a creative and even artistic expression of Self. This is how we are approaching makeup during this design sprint.

(By the way, if you didn’t already know, the biggest makeup trend in 2019 is gender fluidity.)

This idea of makeup as a form of expression is by no means new. But culturally and technologically, the beauty industry is embracing (and must continue to embrace) the “no rules” approach as social boundaries are broken, social media influences the path-to-purchase, and buyers across industries begin to trust and expect VR offerings.

Our Discovery Process:

Armed with what we feel is a grounding contextual view of the modernizing makeup industry, Touchlab’s design team kicked off a more hands-on discovery process by conducting a contextual inquiry and investigation through many of Manhattan’s makeup stores.

Why? To get off our laptops and challenge any preconceived assumptions about makeup shoppers IRL. 🕵🏻‍♀️ 🕵🏻

Left: Sephora’s virtual try-on installation. Right: An example of MAC’s in-store product organization.
As we made our way around the different retailers, four interactions stuck out:
  1. Quiet 20-somethings experimented with every bright-blue lipstick in the store. The makeup artists were kind and attentive, but let everyone play around with bold blues and shiny reds unbothered. Everyone was rubbing makeup on, wiping it off, checking themselves out, asking questions. Allowing and encouraging customers to play creates a positive retail environment and a magnetic vibe.
  2. A seasoned makeup artist explained the goals for their popular YouTube channel: to target people who are “going somewhere” and do memorable street makeovers to match expectations around that experience.
  3. An older woman’s experience at an in-store virtual installation was somewhat marred by a technical gaffe; her virtual lipstick was visibly offset from her actual lips, seemingly because of her wrinkles. Despite this error, she expressed how fun it was and stayed for over 15 minutes.
  4. Two girls hesitated at the door of a store that seemed a bit, well, “stuffy.” They left almost immediately.

Amongst the perfumed chaos of the wildly different retail environments, we did detect an overarching trend: all kinds of people want to play and experiment with fun makeup, no matter their intention to buy.

Trying on different shades on your hands is commonplace in stores. Virtual try-on certainly solves this “pain point.” (Is it a pain point?)

User Testing and Product Audits:

We moved into user testing and product audits of the more popular virtual makeup apps. The goal: to test not only the usability of these virtual experiences but also people’s reactions to e-commerce-oriented flows versus more playful user flows.

Examples of our testing sessions, during which we asked users to complete three tasks. Users consistently had trouble completing retail transactions and finding the virtual try-on feature.

We tested the most popular apps in both the App Store and Google Play. The reactions were emotionally loaded. Most users reacted negatively to commerce-oriented flows that sent them back and forth between traditional, scrollable product grids and the virtual try-on camera. Some had trouble even finding the virtual try-on feature in many apps.

Memorable quotes include:

“I don’t even know what this picture means. How do I even get to the try-on? I feel like it’s trying to just sell me *things*.”

“Where are my lipstick colors even going after I try them?”

“Wait! I wanted to see the whole look!”

However, we observed that users had the most fun when playing with colors as a form of self-expression:

“Dang, I look good.” 👩‍🎤

“I’m going to go with… this color — “Dangerous” — because that’s how I feel.”

 

Danger is a pretty cool middle name.

Defining our Scope:

Surfacing from our deep immersion into virtual try-on makeup apps proved rather difficult for us designers; we are both now addicted to makeup apps and were getting pretty good at selfies. 🤳🤳🤳👩‍🎨 👨‍🎨

Touchlab Designers conducting product audits WHILE looking really, really good. If we do say so ourselves.

It was time to start taking our data and synthesizing it into actionable points of departure. Harnessing the passion of the makeup-wearers we interviewed and observed, we constructed a persona spectrum.

“Instead of defining one character, persona spectrums focus our attention on a range of customer motivations, contexts, abilities, and circumstances.” — Margaret P. for Microsoft Design (More here)

We know from our process of empathetic understanding that customer motivations for trying on makeup virtually vary greatly, but an overarching motivation might best be called “play”, with an underlying asterisk: “play” without the blight of glaring technical misunderstandings, for which a modern user has little tolerance.

Left: virtual try-on of lipstick. Right: IRL photo of that same lipstick. I would (and did) buy this in-store, but not online. Not all virtual makeup was this inaccurate.

We are moving forward with the assumption that users are less likely to buy a makeup product (in-app or in-store) if they do not trust the accuracy of the application of the virtual makeup. We define “accuracy” here as the belief that the product will look, in real life, just as it does on camera. Accuracy, then, depends on proper recognition of the user’s facial features, skin tone, race, age, etc.


Brainstorming at the office. #nomakeup #nofilter

From user testing sessions, we gathered that users of virtual try-on experiences want:

  • A balance of “Play” and “Pay” (allowing product exploration and experimentation to be the natural point of sale, similar to the in-store experience)
  • A means for creative expression with makeup
  • A variety of products to try
  • Accuracy in the virtual try-on experience. (Where product accuracy cannot be achieved, at least a positive, fun experience can funnel users in-store.)

Users of virtual try-on experiences need:

  • Easy control of the camera view with one hand/thumb
  • Proper architecture and timing of e-commerce product areas during the play-to-pay funnel
  • A well-lit camera view, flexible enough for a myriad of environments
  • A natural multi-platform user experience
  • Facial recognition for all genders, ages, and races*

*Feasibility-wise, this is an idealistic thought summarily strangled by the inequity in facial recognition we’ve witnessed thus far in our research. The feasibility of speedy and comprehensive machine learning for all faces will come from developers out there; as designers, we feel it is important for us to engage in the conversation and point out the inequity. Idealistic? Yes. Nonetheless, an underlying need.

With our user-centric motivations in mind, as well as recognition of the strengths and shortcomings of existing products, we arrived at our goal:

Create a modern, camera-focused e-commerce experience that feels seamless and authentic.

Ideation meets its frenemy, feasibility:

With our guiding principles in mind, we asked ourselves: How do we harness existing powerful APIs and a cultural affinity towards dramatic and playful makeup to create a delightful and inclusive e-commerce experience?

~Low Fidelity Ideation Techniques including sketches and thumb maps~ AKA Paper!

For the ideation process, we considered both constraints and context; constraints restrict features down the road, while context can inform and justify them. Constraints become especially important here when dealing with a camera/virtual experience:

For instance, design-wise, we must consider the one-handed “thumb zone,” along with commonplace user experiences and expectations around popular camera-oriented apps.

More about Scott Hurff’s thumb zone here. Note that these photos reflect right-handed thumb zones.

Feasibility-wise, we must remember:

  1. Our sprint time constraint 👮🏼‍♀️
  2. The importance of an e-commerce funnel
  3. The availability and robustness of cosmetic APIs
  4. The machine learning curve of accurate makeup application for the whole spectrum of race, gender, and age.

Some of the contextual circumstances that we considered include:

  • Physical limitations, temporary or permanent. (We are, after all, asking the user to take a selfie and therefore should absolutely keep the one-handed experience in mind.)
  • Users with the inability to deeply focus on the task at hand- be it temporarily due to various forms of distraction, or permanently, perhaps due to disability.
  • A range of environmental lighting and how that might influence virtual quality and accuracy

Scoping in, “must have” feasible features include:

  • Ability to save looks/products for later
  • A predominantly camera view
  • Cart / “now wearing” list
  • Guided process
  • Color picker
  • Control over makeup intensity
  • Adding to bag
  • Seeing makeup details
  • Easily changing/removing/adding products

Features that we should or could have:

  • Automatic camera light correction
  • The ability to report or fix incorrect virtual makeup application
  • Saved “Looks” that apply all at once
  • The ability to share

Exactly how those retail elements are presented was guided by user reaction and by the available screen space, considering that we don’t want to cover the user’s face with text during their selfie session.

Design Progression:

During low-fidelity prototyping, we noted and observed users’ expectations.

Because of the e-commerce element of this project, we needed to test, sooner rather than later, when, where, and how to show the user product information. We started off by letting users play first, then showing them their choices in a cart-like setting after the fact.

Even though the process to get to the cart did not take long, users rejected being in the dark; they wanted to know exactly what they were wearing while they were wearing it.

Mid-fidelity prototypes did not show the user the e-commerce details until they saw their cart. Both the copy and the timing were problematic.

Mid-fidelity prototyping identified weak spots in both visual hierarchy and the timing of e-commerce introduction as we strove to balance UI and branding with the selfie-camera view.

During our iterative process, we determined what we think is the most natural way to introduce e-commerce elements into a playful virtual experience using thoughtful choice architecture:

For lipstick application, for instance, you first narrow the product area by color (which is easily changed), then choose the preferred finish (and thereby the exact product line and color), and finally adjust the intensity of the makeup application.

For brows, however, you first choose your preferred application method, then choose among the few colors offered (and thereby the exact product line and color), and then select the “style” (which is where things get fun again for the user). A minimal sketch of these funnels follows.
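To make those funnels concrete, here is a minimal Kotlin sketch of the choice architecture; the category names, decision steps, and ordering are illustrative placeholders, not a production model:

```kotlin
// Hypothetical sketch of the per-category decision funnels described above.
// Each category presents one decision at a time, in a deliberate order.
enum class Decision { COLOR, FINISH, INTENSITY, APPLICATION_METHOD, STYLE }

fun funnelFor(category: String): List<Decision> = when (category) {
    "lipstick" -> listOf(Decision.COLOR, Decision.FINISH, Decision.INTENSITY)
    "brows"    -> listOf(Decision.APPLICATION_METHOD, Decision.COLOR, Decision.STYLE)
    else       -> listOf(Decision.COLOR, Decision.INTENSITY) // sensible default
}

fun main() {
    println(funnelFor("brows")) // [APPLICATION_METHOD, COLOR, STYLE]
}
```

Modeling the funnel as data, rather than hard-coding screens, is what lets the same camera view host different product categories without rearchitecting the UI.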

Hi-fi prototyping with re-architecture of e-commerce details

In our “final” prototype, we wanted to reflect a selfie-focused experience that encourages the user to play with products without feeling forced to buy them.

In order to do so, we introduced the product description as a minimalistic, clickable overlay at what users deemed an appropriate time, and added the ability to change the products the user is wearing from a “now wearing” list.

This person has opened a virtual makeup app, they want to try on makeup, they want to have fun, they want to see the price of the makeup, and ultimately, they just might buy something that they like.

The takeaway from designing this sprint for multiplatform is the importance of maintaining a human-centric approach, so that when we hand off to developers, they understand our thinking on tackling feasibility in this space.

“Final” prototype for both iPhone 8 and Pixel 3

A note on prototyping:

Because of our one-week time limit, we decided against Framer and instead chose to work with still images to imitate a live camera view. The natural next-best option would have been Sketch, but keeping in mind that early concepts need fast iteration, we decided to use Adobe XD.

XD allowed us to change and test things across the team quickly and efficiently. Additionally, the lack of an app for previewing Sketch files on Android was a deal breaker.

XD’s auto-animate unlocked effortless micro-interactions on the final prototype that gave it an extra layer of realism; this was absolutely key in our virtual + selfie experience.

Ending Thoughts:

If you have read anything about Touchlab recently (see this post about our recent announcement with Square!) you have probably picked up that we are proponents of multiplatform design and development.

In this sprint, we forgo assumptions about the differences between iOS and Android users and instead design for our spectrum.

Does it matter if iOS users “spend more money on makeup products” or if Android users “take fewer selfies”? Defining a person (or persona) by the phone that they buy seems silly and too blanketed.

As the industry of mobile development moves to a future where iOS and Android Engineers are becoming Mobile Engineers, our design team becomes Mobile Designers.

We focus on expressing our client’s brands across platforms the right way, instead of focusing on items on a list.

Frances Biedenharn & Nelmer De La Cruz

Our philosophy is to step back from “I’m an iOS designer. I’m an Android designer” and instead advocate for experience designers that understand the possibilities and constraints of each platform.

Designing ResearchStack’s open source UX framework

In the fall of 2015, touchlab partnered with Cornell Tech and Open mHealth to bring precise medical research apps to Android with an open-source SDK and UX framework called ResearchStack.

Executing this project would be unlike most projects – rather than designing one experience and interface for a single product, we would be designing and building an extensible framework, with designs covering many potential bits and pieces researchers and developers might need to carry out medical research on Android.

Beyond that, we would be designing and building an Android version of Mole Mapper, an existing ResearchKit™ app for iOS that helps users document and keep track of moles across their body, allowing them to share that information with their doctor and promote earlier detection of skin cancer.

So, how would we approach the UX framework? One of the primary goals (besides IRB approval) was to create an experience that was as easy and straightforward as possible but which also ensured that the user is – and remains – completely aware of how their data is managed and what information the apps collect.

The other major piece was creating something users would want to use to keep track of their health and participate in studies. Anecdotally, we learned that some researchers creating apps for ResearchKit had challenges with user retention. The real value of mobile research studies is in accessibility – if users can join a study and track their information easily, studies could operate with unprecedented sample sizes. That’s why we decided to tackle this problem head-on.

So with both of those goals in mind, the UI/UX framework behind ResearchStack was broken into three main pieces – onboarding, dashboard and tasks, and data management and visualization. These three pieces would get users smoothly into the app/study, make completing research and documentation tasks easy (and maybe even enjoyable), and make data clear for them and their care providers.

Onboarding

Pre-roll onboarding (or onboarding that has to happen before the user can open the app) is something that’s easy to get wrong, creating an experience that annoys the user or makes it feel like the barrier to entry is too high.

That said, a medical research app requires the user to be informed before joining, so the challenge here was to build out a process that felt light even though there could be sixteen or more actions between the user and officially being part of the study. If the user doesn’t want to be part of the study, there is an affordance for simply using the app to keep track of their own information.

The onboarding is itself broken into three pieces.

First, eligibility. Is the user actually eligible to join the study? This step uses general qualifying questions to determine if a user is eligible to participate in the study. Like the rest of the onboarding process, the interface is minimal by design. Question layouts and list controls are predictable and uniform, with a high contrast color palette of just white, gray, and blue to visibly highlight questions, selections, and progressive actions.
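For illustration, here is roughly how such a qualifying step could be assembled with ResearchStack’s backbone classes (in Kotlin, which interoperates with the Java SDK); the identifiers and question copy are invented, and the exact constructors should be checked against the SDK version in use:

```kotlin
import org.researchstack.backbone.answerformat.BooleanAnswerFormat
import org.researchstack.backbone.step.QuestionStep
import org.researchstack.backbone.task.OrderedTask

// A single yes/no qualifying question; "eligibility_age" is a made-up identifier.
fun eligibilityTask(): OrderedTask {
    val step = QuestionStep(
        "eligibility_age",
        "Are you 18 years of age or older?",
        BooleanAnswerFormat("Yes", "No")
    )
    step.isOptional = false // the participant must answer before proceeding
    return OrderedTask("eligibility", step)
}
```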

Next is consent. The user needs to have a clear picture of what the study entails, the purpose of the app, and the fact that they can leave the study at any time. Besides providing this information to the user with illustrations, videos, or links to further reading, the consent process can ensure the user is truly informed by requiring a brief quiz covering the preceding information.

The quiz steps maintain the predictable question layouts from before, but are more dynamic. After a user submits their answer, a message will tell the user whether they are correct or not, highlighting both the user answer and the correct answer, and providing an explanation when needed.

The bottom action bar in this case is also dynamic – if the user needs to read an explanation about the answer that extends off-canvas, the “NEXT” button transforms to “MORE,” and – when touched – scrolls the text progressively until the end, when the user can finally touch “NEXT.”
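A hedged sketch of that MORE/NEXT mechanic, assuming a plain Button below a ScrollView of explanation text (the framework’s actual widgets may differ):

```kotlin
import android.widget.Button
import android.widget.ScrollView

// While the explanation can still scroll, the action button scrolls it one
// "page" at a time; once the end is reached, the button advances the step.
fun bindActionButton(button: Button, scroll: ScrollView, onNext: () -> Unit) {
    fun refreshLabel() {
        button.text = if (scroll.canScrollVertically(1)) "MORE" else "NEXT"
    }
    refreshLabel()
    scroll.viewTreeObserver.addOnScrollChangedListener { refreshLabel() }
    button.setOnClickListener {
        if (scroll.canScrollVertically(1)) {
            scroll.smoothScrollBy(0, scroll.height) // progressive scroll
        } else {
            onNext()
        }
    }
}
```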

The same mechanic is used at the end of the quiz when the user finds out whether they passed the quiz. Answers are explained one more time, and the user can go back to give it another try.

Once the user is fully informed and ready to join the study, they need to register and create an account. We designed a flow that allows the user to sign up, enter personal info, and create a secure passcode to encrypt their data, even if they exit the app during signup and then come back.

Dashboard & tasks

Inside a research app, we wanted to create some UX definitions for at least two primary goals – study participation through “tasks,” and the visualization of data for the user and their care providers.

Most basic tasks come in the form of surveys or measurements. These kinds of tasks rely on the same predictable question layouts mentioned above, but tasks can contain various types of questions and inputs, either on the same screen or on multiple sequential screens. So we came up with designs that allow for this kind of modular assembly of tasks.
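As a sketch of that modular assembly, again against the backbone API (step identifiers and copy are invented; ViewTaskActivity is the SDK’s stock task runner as shown in its samples):

```kotlin
import org.researchstack.backbone.answerformat.TextAnswerFormat
import org.researchstack.backbone.step.InstructionStep
import org.researchstack.backbone.step.QuestionStep
import org.researchstack.backbone.task.OrderedTask

// A tiny survey built from interchangeable steps; each renders with the
// same predictable question layout described above.
fun dailyCheckIn(): OrderedTask {
    val intro = InstructionStep("intro", "Daily check-in", "One quick question.")
    val mood = QuestionStep("mood", "How do you feel today?", TextAnswerFormat())
    return OrderedTask("daily_checkin", intro, mood)
}

// From an Activity, the assembled task is launched with the stock runner:
// startActivityForResult(ViewTaskActivity.newIntent(this, dailyCheckIn()), REQUEST_SURVEY)
```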

Apps like Mole Mapper make use of these patterns too, even while simultaneously including tasks requiring more unique interactions like using the camera and visually measuring objects.

 

Tasks, in the framework design, are managed on one half of the dashboard interface. They appear as a list of items that can show visual categorization with colored indicators. In most cases, completed tasks will slide to the bottom of the list for a given day, and previous days’ responses can be found in a chronological timeline below that.

But this dynamic can change, even within the visual definitions we provided. For example, Mole Mapper’s tasks are simply divided into two subheads – to-do and done. Since Mole Mapper works on a monthly cycle and previous mole measurements are stored in an interface that visually connects them to the moles, there’s no reason to keep duplicate entries in a chronological timeline on the main interface.

As explained before, keeping users engaged and participating is tough. Part of this is ensuring that participants complete their tasks to create full data sets. To try and encourage participation, we made creative use of visual cues to get users back into the task list until all the tasks are done.

In the tab bar, the task icon or text label will be badged with a small circle to indicate some unfinished business. If the user is still ignoring that and looks into their data dashboard, visual cues will prompt them to “FINISH” tasks, with the button simply leading back again to the task list.
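The framework drew its own small-circle indicator; as a stand-in, here is how the same cue could be expressed today with Material Components’ badge API (the menu item id is passed in rather than assumed):

```kotlin
import com.google.android.material.bottomnavigation.BottomNavigationView

// Badge the tasks tab while unfinished business remains; clears itself at zero.
fun updateTasksBadge(nav: BottomNavigationView, tasksMenuItemId: Int, unfinished: Int) {
    val badge = nav.getOrCreateBadge(tasksMenuItemId)
    badge.isVisible = unfinished > 0
    if (unfinished > 0) badge.number = unfinished else badge.clearNumber()
}
```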

Data management & visualization

Designing charts and graphs was a pleasure. Data visualization is a rewarding exercise, offering the chance to design something that looks deceptively smooth given how rigid the underlying data are.

For the initial release of the open source framework, we designed several kinds of visualizations, from a basic pie chart to interactive line graphs and multivariate bar charts.

The same reduced palette as before allowed us a lot of freedom to clearly communicate information and add new colors to the palette for multiple variables as needed. Keeping things minimal meant we had plenty of room to add new touches of information.

And as always, one of the goals of the design was to create something predictable for the user that would easily and quickly build a mental model of how data visualizations worked and what they meant. Consistent “expand” icons allow users to see larger views of their information, cards can be arranged in any order and still appear familiar, and clear headers and labeling explain everything.

On the management side of data, we wanted to make sure the user always had options to manage their data and participation status. Persistent action icons in the app’s toolbar provide access to information about the study and settings around data sharing, consent, security, and study participation. If the user wants to learn more, stop sharing, or even leave the study, it’s only a tap away.

Conclusion

Extensible design is a recurring theme in our work, and nowhere does it shine better than in an open-source UX framework.

The design of ResearchStack’s framework is one that allows for easy customization and organization without losing the hallmarks that make it easy for users to get started and stay engaged.

Simple palettes, clear and predictable layouts, and a visual hierarchy that doesn’t get in the way of unique and diverse interactions come together to lay a foundation for designing precise medical research apps for Android.

Designing ClassPass for Android

In the late summer of 2015, touchlab began working with ClassPass, a service that gives users access to a huge network of fitness facilities for a monthly subscription.

Already popular on iOS, ClassPass wanted to bring its experience to Android. So, in collaboration with ClassPass’ developers, touchlab took on its first design-only project and brought ClassPass to the material world on Android.

When we started out on the project, the ClassPass app could basically be split into two main functions: finding and signing up for classes, and managing classes you planned to attend or took interest in.

Both of these functions were already solved in the iOS app, so in bringing the app to Android, the goal was to see how we could learn from the iOS experience to craft one that would work beautifully on Android: one that would feel comfortable for Android users, offer the same functionality in a streamlined package, and innovate on both material design and the ClassPass experience.

As with any iOS-to-Android transition, we would keep ClassPass’ strongly developed branding – its color palette, typography, imagery, and voice would be faithfully maintained in the Android app, keeping it on-brand even as it made the jump to material.

The first challenge we tackled was discovery – in other words, finding a class. On iOS, ClassPass offers a map and results view, with a toggle to switch between them. Search or filter results, combined with the user’s mapped position, are reflected in the list.

On Android, we recommended using the material metaphor to unite these two views into one seamless interface that gave users access to ClassPass’ deep architecture of class and studio information in a light, welcoming interface.

The “main” map screen has a search bar with a filter action, and the bottom sheet houses relevant class information, starting with the user’s location and updating with filters, search terms, or specific selections.

The user can also pull the sheet up to reveal a full class schedule relevant to their results. As the sheet nears the top, it snaps up to the top of the screen by extending a toolbar. With a similar gesture, users can swipe the sheet down or simply touch the back button. And since the sheet covers the search bar, a FAB (floating action button) for filtering your results appears.

Over several motion sketches and prototypes, we landed on a great experience for this interaction: the user is in control of the sheet’s position as long as they are touching it, but when the finger leaves the material, the sheet – based on its location and velocity – decides whether to snap upward or downward.
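A minimal sketch of that release logic; the threshold values are invented, and a production version would tune them and animate the settle:

```kotlin
// On finger-up, position and fling velocity decide where the sheet settles.
enum class SnapTarget { EXPANDED, COLLAPSED }

fun snapOnRelease(
    offsetFraction: Float,     // 0.0 = collapsed, 1.0 = fully expanded
    velocityPxPerSec: Float,   // negative = finger was moving up
    flingThreshold: Float = 1200f
): SnapTarget = when {
    velocityPxPerSec < -flingThreshold -> SnapTarget.EXPANDED  // fast fling up
    velocityPxPerSec > flingThreshold  -> SnapTarget.COLLAPSED // fast fling down
    offsetFraction >= 0.5f             -> SnapTarget.EXPANDED  // otherwise, nearest edge
    else                               -> SnapTarget.COLLAPSED
}
```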

The experience keeps the user immersed in the context of the map, avoiding any sudden context shifts, and allows for fast, free exploration.

The modular nature of the class results list also meant the list was flexible – ClassPass could “promote” tiles with background imagery or re-colored typography without substantively changing the experience.

The typography of the tile was obviously important too. In several rounds of revisions with the ClassPass team, we hashed out exactly the right fonts, weights, colors, and placements to keep the information clear, readable, and hierarchically sensible.

So, what happens when the user touches that tile?

The class detail screen looks simple, but it actually houses a lot of information. It includes:

  • Class name
  • Venue name and location
  • Date and time
  • “Favorite” action (that appears with a quick heartbeat animation)
  • Amenities (things like lockers, parking, showers)
  • Activities (what you’ll be doing in the class – like Yoga above)
  • Class description
  • What to bring
  • Location
  • Map snippet
  • Link to venue profile
  • Venue description

We had an opportunity here to condense the class detail screen and make it feel simple and light, without losing its bold, graphic nature. ClassPass has a library of venue-specific imagery at its disposal, so we wanted to keep that highlighted. Basic metadata that the user would want to see at a glance was overlaid on the imagery at the top of the page, and the “reserve” or “cancel” action was highlighted using a bold full-width button.

Rather than leaving amenities and activities as separate vertical sections, we condensed them into one scrolling row (inspired by Google’s Play Store) with a “chip” paradigm showing – with color and custom iconography – the supporting information relevant to the class. This area too is extensible – whether the class has 1 or 10 activities, the layout can accommodate without becoming too long.

A previous revision of the class detail chips

Initially, when considering the “chips” idea, we designed the chips with different colors for each activity/category and one color for amenities. But keeping to the forward-looking, extensible idea, it made more sense to use one color for activities and one for amenities, keeping things clearly organized for the user. If, in the future, a class had 10 chips, the screen would be ready to accommodate them.
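A hypothetical sketch of that one-color-per-type rule using today’s Material chips; the colors and labels are placeholders:

```kotlin
import android.content.res.ColorStateList
import android.graphics.Color
import com.google.android.material.chip.Chip
import com.google.android.material.chip.ChipGroup

// Every activity chip shares one tint and every amenity chip another,
// so the row stays organized no matter how many chips a class has.
fun addDetailChips(group: ChipGroup, activities: List<String>, amenities: List<String>) {
    val activityTint = ColorStateList.valueOf(Color.parseColor("#1E4FD8")) // placeholder
    val amenityTint = ColorStateList.valueOf(Color.parseColor("#757575"))  // placeholder
    (activities.map { it to activityTint } + amenities.map { it to amenityTint })
        .forEach { (label, tint) ->
            group.addView(Chip(group.context).apply {
                text = label
                chipBackgroundColor = tint
            })
        }
}
```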

Next came a simple text layout for the description and “what to bring,” with the map, studio link, and description following at the bottom of the page.

Of course, under every action there are states to account for. The main action on the class detail screen is the class reservation button. A few things can happen when you press it: your reservation is confirmed, your cancellation is confirmed, your reservation fails due to a system error, or your reservation fails because the class has already filled up.

Besides coloring the button to match reserved/unreserved states, we used Android’s lightweight “snack bar” element to communicate changes between states, coloring the bar as necessary to communicate success or failure.
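A minimal sketch of that pattern with the Material Snackbar; the colors and copy are placeholders:

```kotlin
import android.graphics.Color
import android.view.View
import com.google.android.material.snackbar.Snackbar

// One snack bar for every reservation outcome, tinted green for success
// and red for failure so the state change is unmistakable.
fun showReservationResult(root: View, success: Boolean, message: String) {
    val bar = Snackbar.make(root, message, Snackbar.LENGTH_SHORT)
    bar.view.setBackgroundColor(
        if (success) Color.parseColor("#2E7D32") else Color.parseColor("#C62828")
    )
    bar.show()
}
```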

After booking (and/or attending) some classes, the user can manage them through My ClassPass. This part of the app gives basic schedules for upcoming and past classes, and a more graphic “favorites” list that exposes favorite studios and that familiar bottom sheet for classes from those studios.

Beyond the bottom sheet, the modular tiles make another appearance, along with the full-width action button. By reusing paradigms like this across the app, we ensured that users would feel comfortable and confident throughout the experience, quickly and easily developing a mental model of how everything behaved.

Users can access My ClassPass (or the main screen) from the navigation drawer, which has space for future destinations in the app as well.

Making use of the paradigms from Google’s material design nav drawers, we designed the header as a dynamic area, serving information about the user’s class schedule. If there’s an upcoming class, the area is updated with relevant imagery and information, presented with the familiar typography found elsewhere.

There’s a lot that can be said for settings menus, but maybe not so much that should be said inside them.

The settings screen lets users manage an app – updating information, changing options, or customizing their experience. Settings screens might seem like a very technical place; in reality they are, but the user doesn’t have to experience them that way.

Material design gives great guidance on settings, in an effort to keep the overall experience consistent between apps and with the system as a whole. We used these paradigms to build the simplest, most organized settings experience possible, while staying true to the voice and feel of the broader app.

The full-width button makes another appearance as the “log out” button, and ClassPass’ signature typography and colors are applied to emphasize actions and hierarchies.


The takeaway from designing ClassPass for Android is the importance of extensible and reusable design elements.

Keeping things extensible and flexible is crucial for an app like ClassPass, which offers its services in multiple locations around the world.

Class schedules, availability, details, relevant filters, and other information change from location to location and from day to day, so making something that looks and feels great no matter the case was one of our main goals – and we pulled it off with material design 😉