A Multiplatform Case Study for an Inclusive Virtual Try-on Experience

Getting Started:

At Touchlab, it is typical for us to take on leads and projects that are new to us. We’ve worked on hardware, watch faces, all kinds of things, with no previous experience; we embrace the challenge and rise to it. It is atypical, however, for client research to blossom into a philosophical, platform-agnostic, week-long design sprint.

But, we are nerds.

It all started as a simple conversation with people in the retail cosmetics industry. First, we noticed how fun it was to take virtual selfies with funky eyeshadows. Then, we noticed that some virtual try-on apps were waaay better than others. Finally, we got feisty (and immensely curious) when two makeup apps failed to recognize Jeff’s face correctly.

We saw value in spending time developing a deeper understanding of the virtual try-on world, really to foster our own awareness of what it takes to accomplish inclusive and feasible design in this space.

Our goal? To design a user-centric e-commerce experience for a single makeup brand in one week.

Cultivating Empathy and Acknowledging Context:

When it comes to people who wear makeup, you cannot generalize them into one or two flattened personas; anyone can wear makeup, and the reasons behind applying even a simple black eyeliner are fluid and highly subjective.

Matching eyebrows to lids: the things we notice now.

As Nielsen puts it in the 2018 Future of Beauty Report, “There is no one beauty shopper.” It is also important to note that wearing makeup — be it skin tone or neon green — is less related to confidence, vanity, conformity or belonging than some might think.

For most, it is a form of self-care and, for many, a creative and even artistic expression of Self. This is how we approached makeup during this design sprint.

(By the way, if you didn’t already know, the biggest makeup trend in 2019 is gender fluidity.)

This idea of makeup as a form of expression is by no means new. But culturally and technologically, the beauty industry is embracing the “no rules” approach as social boundaries are broken, social media influences the path-to-purchase, and buyers across industries begin to trust and expect virtual try-on offerings.

Our Discovery Process:

Armed with what we feel is a grounding contextual view of the modernizing makeup industry, Touchlab’s design team kicked off a more hands-on discovery process, conducting a contextual inquiry through many of Manhattan’s makeup stores.

Why? To get off our laptops and challenge any preconceived assumptions about makeup shoppers IRL. 🕵🏻‍♀️ 🕵🏻

Left: Sephora’s virtual try-on installation. Right: An example of MAC’s in-store product organization.
As we made our way around the different retailers, four interactions stuck out:
  1. Quiet 20-somethings experiment with every bright-blue lipstick in the store. The makeup artists are kind and attentive, but let everyone play around with bold blues and shiny reds unbothered. Everyone is rubbing makeup on, wiping it off, checking themselves out, asking questions. Allowing and encouraging customers to play creates a positive retail environment and a magnetic vibe.
  2. A seasoned makeup artist explains the goals for their popular YouTube channel: to target people who are “going somewhere” and do memorable street makeovers to match expectations around that experience.
  3. An older woman’s experience at an in-store virtual installation is somewhat marred by a technical gaffe; her virtual lipstick is overtly offset from her actual lips, seemingly because of her wrinkles. Despite this error, she expresses how fun it is and stays for over 15 minutes.
  4. Two girls hesitate at the door of the store that seems a bit, well, “stuffy.” They leave almost immediately.

Amongst the perfumed chaos of the wildly different retail environments, we do detect an overarching trend: all kinds of people want to play and experiment with fun makeup, no matter their intention to buy.

Trying on different shades on your hands is commonplace in stores. Virtual try-on certainly solves this “pain point.” (Is it a pain point?)

User Testing and Product Audits:

We moved into user testing and product audits of the more popular virtual makeup apps. The goal: to test not only the usability of these virtual experiences but also people’s reactions to e-commerce-oriented flows versus more playful user flows.

Examples of our testing sessions, during which we asked users to complete 3 tasks. Users consistently had trouble completing retail transactions and finding the virtual try-on feature.

We tested the most popular apps in both the App Store and Google Play. The reactions were emotionally loaded. Most users reacted negatively to commerce-oriented flows that sent them back and forth between traditional, scrollable product grids and the virtual try-on camera. Some had trouble even finding the virtual try-on feature in many of the apps.

Memorable quotes include:

“I don’t even know what this picture means. How do I even get to the try-on? I feel like it’s trying to just sell me *things*.”

“Where are my lipstick colors even going after I try them?”

“Wait! I wanted to see the whole look!”

However, we observed that users had the most fun when playing with colors as a form of self-expression:

“Dang, I look good.” 👩‍🎤

“I’m going to go with… this color — “Dangerous” — because that’s how I feel.”

Danger is a pretty cool middle name.

Defining our Scope:

Surfacing from our deep immersion into virtual try-on makeup apps proved rather difficult for us designers; we are both now addicted to makeup apps and were getting pretty good at selfies. 🤳🤳🤳👩‍🎨 👨‍🎨

Touchlab Designers conducting product audits WHILE looking really, really good. If we do say so ourselves.

It was time to start taking our data and synthesizing it into actionable points of departure. Harnessing the passion of the makeup-wearers we interviewed and observed, we constructed a persona spectrum.

“Instead of defining one character, persona spectrums focus our attention on a range of customer motivations, contexts, abilities, and circumstances.” — Margaret P. for Microsoft Design (More here)

We know from our process of empathetic understanding that customer motivations for trying on makeup virtually vary greatly, but an overarching motivation might best be called “play”, with an underlying asterisk: “play” without the blight of glaring technical virtual misunderstandings, for which a modern user has little tolerance.

Left: virtual try-on of lipstick. Right: IRL photo of that same lipstick. I would (and did) buy this in-store, but not online. Not all virtual makeup was as inaccurate.

We are moving forward with the assumption that users are less likely to buy a makeup product (in-app or in-store) if they do not trust the accuracy of the virtual makeup application. We define “accuracy” here as the belief that the product will look, in real life, just as it does on camera. Accuracy, then, depends on proper recognition of the user’s facial features, skin tone, race, age, etc.


Brainstorming at the office. #nomakeup #nofilter

From user testing sessions, we gathered that users of virtual try-on experiences want:

  • A balance of “Play” and “Pay” (allowing product exploration and experimentation to be the natural point of sale, similar to the in-store experience)
  • A means for creative expression with makeup
  • A variety of products to try
  • Accuracy in the virtual try-on experience. (Where product accuracy cannot be achieved, at least a positive, fun experience can funnel users in-store.)

Users of virtual try-on experiences need:

  • Easy control of the camera view with one hand/thumb
  • Proper architecture and timing of e-commerce product areas during the play-to-pay funnel
  • A well-lit camera view, flexible enough for a myriad of environments
  • A natural multi-platform user experience
  • Facial recognition for all genders, ages, and races*

*Feasibility-wise, this is an idealistic thought, summarily strangled by the inequity in facial recognition we’ve witnessed thus far in our research. Speedy and comprehensive machine learning for all faces will have to come from the developers out there; as designers, we feel it is important to engage in the conversation and point out the inequity. Idealistic? Yes. Nonetheless, an underlying need.

With our user-centric motivations in mind, as well as recognition of the strengths and shortcomings of existing products, we arrived at our goal:

Create a modern, camera-focused e-commerce experience that feels seamless and authentic.

Ideation meets its frenemy, Feasibility:

With our guiding principles in mind, we asked ourselves: how do we harness existing powerful APIs and a cultural affinity for dramatic and playful makeup to create a delightful and inclusive e-commerce experience?

~Low-fidelity ideation techniques including sketches and thumb maps~ AKA paper!

For the ideation process, we considered both constraints and context, which respectively restrict and inform potential features down the road. Constraints become especially important here when dealing with a camera/virtual experience:

For instance, design-wise, we must consider the one-handed “thumb zone,” along with commonplace user experiences and expectations around popular camera-oriented apps.

More about Scott Hurff’s thumb zone here. Note that these photos reflect right-handed thumb zones.

Feasibility-wise, we must remember:

  1. Our sprint time constraint 👮🏼‍♀️
  2. The importance of an e-commerce funnel
  3. The availability and robustness of cosmetic APIs
  4. The machine learning curve of accurate makeup application for the whole spectrum of race, gender, and age.

Some of the contextual circumstances that we considered include:

  • Physical limitations, temporary or permanent. (We are, after all, asking the user to take a selfie and therefore should absolutely keep the one-handed experience in mind.)
  • Users who cannot deeply focus on the task at hand, whether temporarily due to various forms of distraction or permanently, perhaps due to disability
  • A range of environmental lighting and how that might influence virtual quality and accuracy

Scoping in, “must have” feasible features include:

  • Ability to save looks/products for later
  • A predominantly camera-based view
  • Cart / Wearing list
  • Guided process
  • Color picker
  • Control over makeup intensity
  • Adding to bag
  • Seeing makeup details
  • Easily changing/removing/adding products

Features that we should or could have:

  • Automatic camera light correction
  • The ability to report or fix incorrect virtual makeup application
  • Saved “Looks” that apply all at once
  • The ability to share

How exactly those retail elements are presented was guided by user reactions and by the available screen space, considering that we don’t want to cover the user’s face with text during their selfie session.

Design Progression:

During low-fidelity prototyping, we noted and observed users’ expectations.

Because of the e-commerce element of this project, we needed to test, sooner rather than later, when, where, and how to show the user product information. We started off by letting them play first, then showing them their choices in a cart-like setting after the fact.

Even though the process of getting to the cart did not take long, users rejected being in the dark; they wanted to know exactly what they were wearing while they were wearing it.

Mid-fidelity prototypes did not show the user the e-commerce details until they saw their cart. Both the copy and the timing were problematic.

Mid-fidelity prototyping identified weak spots in both visual hierarchy and the timing of e-commerce introduction as we strove to balance UI and branding with the selfie-camera view.

During our iterative process, we determined what we think is the most natural way to introduce e-commerce elements into a playful virtual experience using thoughtful choice architecture:

For lipstick application, for instance, you first narrow the product area by color (which is easily changed), then choose the preferred finish (and thereby the exact product line and color), and then finish by adjusting the intensity of the makeup application.

For brows, however, you first choose what type of application method you prefer, then choose among the few colors that are offered (thereby the exact product line and color), and then select the “style” (which is where things get fun again for the user).
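To make the two flows concrete, here is a minimal sketch of how this step-by-step choice architecture might be modeled. All of the type names and steps below are illustrative assumptions drawn from the flows described above, not code from our actual prototype (which was design-only).

```kotlin
// Hypothetical model of the choice architecture; names are illustrative only.
data class Shade(val name: String, val hex: String)

enum class Finish { MATTE, SATIN, GLOSS }

// Lipstick flow: color -> finish -> intensity.
data class LipstickSelection(
    val color: Shade,      // step 1: narrow by color (easily changed)
    val finish: Finish,    // step 2: finish pins down the exact product line
    val intensity: Float   // step 3: 0.0..1.0 strength of the virtual application
)

// Brow flow: application method -> color -> style.
enum class BrowMethod { PENCIL, POWDER, GEL }
enum class BrowStyle { NATURAL, DEFINED, BOLD }

data class BrowSelection(
    val method: BrowMethod, // step 1: how the product is applied
    val color: Shade,       // step 2: one of the few colors offered
    val style: BrowStyle    // step 3: the fun part
)
```

Ordering the steps this way keeps the playful choice (intensity, style) last, so each flow ends on the fun part rather than on a purchase decision.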

Hi-fi prototyping with re-architecture of e-commerce details

In our “final” prototype, we wanted to reflect a selfie-focused experience that encourages the user to play with products without feeling forced to buy them.

To do so, we introduced the product description as a minimalistic, clickable overlay at what users deemed an appropriate time, and added the ability to change the products the user is wearing from a “now wearing” list.

This person has opened a virtual makeup app, they want to try on makeup, they want to have fun, they want to see the price of the makeup, and ultimately, they just might buy something that they like.

The takeaway from designing this sprint for multiplatform is the importance of maintaining a human-centric approach, so that when we hand off to developers, they understand our thinking on tackling feasibility in this space.

“Final” prototype for both iPhone 8 and Pixel 3

A note on prototyping:

Because of our one-week time limit, we decided against Framer and instead chose to work with still images to imitate a live camera view. The natural next-best option would have been Sketch, but keeping in mind that early concepts need fast iteration, we decided to use Adobe XD.

XD allowed us to change and test things across the team quickly and efficiently. Additionally, the lack of an app for previewing Sketch files on Android was a deal breaker.

XD’s auto-animate unlocked effortless micro-interactions on the final prototype that gave it an extra layer of realism; this was absolutely key in our virtual + selfie experience.

Ending Thoughts:

If you have read anything about Touchlab recently (see this post about our recent announcement with Square!) you have probably picked up that we are proponents of multiplatform design and development.

In this sprint, we forgo assumptions about the differences between iOS and Android users and instead design for our spectrum.

Does it matter if iOS users “spend more money on makeup products” or if Android users “take fewer selfies”? Defining a person (or persona) by the phone that they buy seems silly and too blanketed.

As the industry of mobile development moves to a future where iOS and Android Engineers are becoming Mobile Engineers, our design team becomes Mobile Designers.

We focus on expressing our clients’ brands across platforms the right way, instead of focusing on items on a list.

Frances Biedenharn & Nelmer De La Cruz

Our philosophy is to step back from “I’m an iOS designer. I’m an Android designer” and instead advocate for experience designers that understand the possibilities and constraints of each platform.

Designing ResearchStack’s open source UX framework

In the fall of 2015, touchlab partnered with Cornell Tech and Open mHealth to bring precise medical research apps to Android with an open source SDK and UX framework called ResearchStack.

This project would be unlike most: rather than designing one experience and interface for a single product, we would be designing and building an extensible framework, with designs covering the many potential bits and pieces researchers and developers might need to carry out medical research on Android.

Beyond that, we would be designing and building an Android version of Mole Mapper, an existing ResearchKit™ app for iOS that helps users document and keep track of moles across their body, allowing them to share that information with their doctor and promoting earlier detection of skin cancer.

So, how would we approach the UX framework? One of the primary goals (besides IRB approval) was to create an experience that was as easy and straightforward as possible, while also ensuring that the user is, and remains, completely aware of how their data is managed and what information the apps collect.

The other major piece was creating something users would want to use to keep track of their health and participate in studies. Anecdotally, we learned that some researchers creating apps for ResearchKit had challenges with user retention. The real value of mobile research studies is in accessibility – if users can join a study and track their information easily, studies could operate with unprecedented sample sizes. That’s why we decided to tackle this problem head-on.

So with both of those goals in mind, the UI/UX framework behind ResearchStack was broken into three main pieces: onboarding, dashboard and tasks, and data management and visualization. These three pieces would get users smoothly into the app and study, make completing research and documentation tasks easy (and maybe even enjoyable), and make data clear for them and their care providers.

Onboarding

Pre-roll onboarding (onboarding that has to happen before the user can actually use the app) is something that’s easy to get wrong, creating an experience that annoys the user or makes the barrier to entry feel too high.

That said, a medical research app requires the user to be informed before joining, so the challenge here was to build out a process that felt light even though there could be sixteen or more actions between the user and officially being part of the study. If the user doesn’t want to be part of the study, there is an affordance for simply using the app to keep track of their own information.

The onboarding itself is broken into three pieces.

First, eligibility. Is the user actually eligible to join the study? This step uses general qualifying questions to determine if a user is eligible to participate in the study. Like the rest of the onboarding process, the interface is minimal by design. Question layouts and list controls are predictable and uniform, with a high contrast color palette of just white, gray, and blue to visibly highlight questions, selections, and progressive actions.

Next is consent. The user needs to have a clear picture of what the study entails, the purpose of the app, and the fact that they can leave the study at any time. Besides providing this information to the user with illustrations, videos, or links to further reading, the consent process can ensure the user is truly informed by requiring a brief quiz covering the preceding information.

The quiz steps maintain the predictable question layouts from before, but are more dynamic. After a user submits their answer, a message tells the user whether they are correct or not, highlighting both the user’s answer and the correct answer, and providing an explanation when needed.

The bottom action bar in this case is also dynamic: if the user needs to read an explanation about the answer that extends off-canvas, the “NEXT” button transforms to “MORE,” which, when touched, scrolls the text progressively until the end, when the user can finally touch “NEXT.”

The same mechanic is used at the end of the quiz when the user finds out whether they passed the quiz. Answers are explained one more time, and the user can go back to give it another try.
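For the curious, here is a minimal sketch of how that MORE/NEXT mechanic might be wired up on Android. The ScrollView/Button pairing and the proceedToNextStep callback are assumptions for illustration, not the framework’s actual API.

```kotlin
import android.widget.Button
import android.widget.ScrollView

// Hypothetical wiring of the MORE/NEXT action bar; names are illustrative.
fun bindBottomActionBar(
    scroll: ScrollView,           // holds the explanation text
    action: Button,               // the bottom action bar button
    proceedToNextStep: () -> Unit // assumed navigation hook
) {
    fun refreshLabel() {
        // canScrollVertically(1) is true while more text remains off-canvas.
        action.text = if (scroll.canScrollVertically(1)) "MORE" else "NEXT"
    }

    // Re-evaluate the label whenever the text scrolls.
    scroll.viewTreeObserver.addOnScrollChangedListener { refreshLabel() }
    refreshLabel()

    action.setOnClickListener {
        if (scroll.canScrollVertically(1)) {
            // Advance roughly one viewport of text per touch.
            scroll.smoothScrollBy(0, scroll.height)
        } else {
            proceedToNextStep()
        }
    }
}
```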

Once the user is fully informed and ready to join the study, they need to register and create an account. We designed a flow that allows the user to sign up, enter personal info, and create a secure passcode to encrypt their data, even if they exit the app during signup and then come back.

Dashboard & tasks

Inside a research app, we wanted to create UX definitions for at least two primary goals: study participation through “tasks,” and the visualization of data for the user and their care providers.

Most basic tasks come in the form of surveys or measurements. These kinds of tasks rely on the same predictable question layouts mentioned above, but tasks can contain various types of questions and inputs, either on the same screen or on multiple sequential screens. So we came up with designs that allow for this kind of modular assembly of tasks.
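As a rough illustration of what this modular assembly implies, here is a hedged Kotlin sketch. The Step and OrderedTask names echo the ResearchKit-style concepts described in this post, but the exact types and signatures are assumptions, not ResearchStack’s actual API.

```kotlin
// Hypothetical sketch of modular task assembly; not ResearchStack's real types.
sealed interface Step {
    val id: String
}

data class InstructionStep(override val id: String, val text: String) : Step

data class QuestionStep(
    override val id: String,
    val prompt: String,
    val input: InputType
) : Step

enum class InputType { SINGLE_CHOICE, MULTI_CHOICE, TEXT, NUMBER, DATE }

// A task is just an ordered list of steps; surveys and measurements are
// assembled from the same predictable building blocks.
data class OrderedTask(val id: String, val steps: List<Step>)

val dailySurvey = OrderedTask(
    id = "daily_survey",
    steps = listOf(
        InstructionStep("intro", "A few quick questions about your day."),
        QuestionStep("mood", "How is your mood today?", InputType.SINGLE_CHOICE),
        QuestionStep("sleep", "How many hours did you sleep?", InputType.NUMBER)
    )
)
```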

Apps like Mole Mapper make use of these patterns too, even while simultaneously including tasks requiring more unique interactions like using the camera and visually measuring objects.

Tasks, in the framework design, are managed on one half of the dashboard interface. They appear as a list of items that can show visual categorization with colored indicators. In most cases, completed tasks will slide to the bottom of the list for a given day, and previous days’ responses can be found in a chronological timeline below that.

But this dynamic can change, even within the visual definitions we provided. For example, Mole Mapper’s tasks are simply divided into two subheads: to-do and done. Since Mole Mapper works on a monthly cycle and previous mole measurements are stored in an interface that visually connects them to the moles, there’s no reason to keep duplicate entries in a chronological timeline on the main interface.

As explained before, keeping users engaged and participating is tough. Part of this is ensuring that participants complete their tasks to create full data sets. To encourage participation, we made creative use of visual cues to pull users back into the task list until all the tasks are done.

In the tab bar, the task icon or text label will be badged with a small circle to indicate some unfinished business. If the user is still ignoring that and looks into their data dashboard, visual cues will prompt them to “FINISH” tasks, with the button simply leading back again to the task list.

Data management & visualization

Designing charts and graphs was a pleasure. Data visualization is a rewarding exercise, offering the chance to design something that looks deceptively smooth given how rigid the underlying data are.

For the initial release of the open source framework, we designed several kinds of visualizations, from a basic pie chart to interactive line graphs and multivariate bar charts.

The same reduced palette as before allowed us a lot of freedom to clearly communicate information and add new colors to the palette for multiple variables as needed. Keeping things minimal meant we had plenty of room to add new touches of information.

And as always, one of the goals of the design was to create something predictable that lets the user easily and quickly build a mental model of how the data visualizations work and what they mean. Consistent “expand” icons allow users to see larger views of their information, cards can be arranged in any order and still appear familiar, and clear headers and labeling explain everything.

On the management side of data, we wanted to make sure the user always had options to manage their data and participation status. Persistent action icons in the app’s toolbar provide access to information about the study and settings around data sharing, consent, security, and study participation. If the user wants to learn more, stop sharing, or even leave the study, it’s only a tap away.

Conclusion

Extensible design is a recurring theme in our work, and nowhere does it shine better than in an open-source UX framework.

The design of ResearchStack’s framework is one that allows for easy customization and organization without losing the hallmarks that make it easy for users to get started and stay engaged.

Simple palettes, clear and predictable layouts, and a visual hierarchy that doesn’t get in the way of unique and diverse interactions come together to lay a foundation for designing precise medical research apps for Android.

May Meetup: Android on the Desktop (yes, he went there) + Mutative Design Updates

We wouldn’t be a very good Android meetup if we didn’t invite Mark Murphy of CommonsWare to speak. Mark is known for answering tons of questions and helping the Android community on Stack Overflow. He’s also an incredible public speaker, cracking jokes and interjecting sarcastic remarks into his presentations. We were lucky to have him at our May meetup, hosted and sponsored by American Express.

After warning the group of profanity ahead, Mark spoke about Android on the desktop, offering his pre-I/O predictions for Chrome OS and how it may potentially merge with Android. He showed previous predictions from news reports (mostly denied or ignored by Google) and explained why a merge is very possible now with Android N. He concluded by urging us to pick up appropriate hardware, and to “plan for delivering a first-class desktop experience.” Check out his slides here.

Our lead designer Liam Spradlin also gave a talk, presenting his updates on mutative design. He walked us through using a mutative interface, showing how buttons, contrast, and brightness can gradually adjust after learning the behaviors of different users. Afterward, there was a barrage of questions for Liam from curious developers. If you’d like to learn more about Project Phoebe, you can read about it here and find the open source repo here.

Liam on mutative design.

Overall, it was a great night with stimulating talks that left our group wondering about the next big thing at I/O in a few days. Many thanks to Mark for trekking in, to Mark and Liam for speaking, and to David and Anat over at American Express for coordinating with us each month!

See you in June at Spotify.

The view from AMEX’s 26th floor auditorium.

Bringing Mutative Design to NYC Apps at IDEO

IDEO welcomes NYC Apps with beer and pizza! Photos courtesy of NYC Apps.

It’s a small world, but did you know that the NYC tech world is even smaller? I recently met up with an old colleague, Leah Taylor, who told me about an upcoming NYC Apps meetup focused on design. As the go-to shop for Android design & development, I thought it’d be great if touchlab got involved somehow. Lo and behold, Kevin happened to know the meetup’s organizer, Serko Artinian, from the good ol’ coworking days. A few weeks later, our lead designer, Liam Spradlin, presented his latest project at the meetup, hosted by IDEO.

While everyone was chowing down on Saluggi’s pizza, Serko shared a fun Lyft ad to jumpstart the evening. He also welcomed the group with recent tech news before giving the floor to the speakers.

Liam gave the group an intro to Project Phoebe, his latest project focused on “mutative design” — a design methodology for interfaces that can adapt automatically to users. He gave examples of how a contacts app could enhance touch target sizes and highlight important actions for a child, and how the same app could enhance contrast for a user with low vision. There was a flurry of questions after the talk from fascinated designers. One person wondered how long a mutative interface would take to adapt: if he had an argument with his girlfriend and was angrily sending text messages with errors, would the interface mutate that day but switch back after the fight was over? (The answer: mutations probably wouldn’t immediately react to sudden, temporary behavior changes like these.) If you’d like to learn more about Project Phoebe, you can read about it here and find the open source repo here.

Liam Spradlin introducing Project Phoebe. Photos courtesy of NYC Apps.

Simon Kirk, the director of business development at InVision, walked the group through the latest upcoming features that’ll improve collaboration between designers and developers. IDEO’s team talked about conversational interfaces, showing how SMS enables conversational UI and how it can be used for quick testing of conversational interfaces.

The IDEO team’s presentation on conversational interfaces. Photos courtesy of NYC Apps.

A big congrats to Serko for putting on another amazing meetup. Thanks so much for having the touchlab team!
Also, big props to IDEO for hosting and ordering Saluggi’s!

A group shot of the evening’s hosts and speakers with their teams. Photos courtesy of NYC Apps.

Meet Selene: an open-source mutative app and the next phase for Project Phoebe

Back in December, I introduced Project Phoebe, a first (open source) step toward mutative design.

Mutative design, if you haven’t had a chance to read the original post, is a theoretical design methodology for creating interfaces and experiences that are born, live, and evolve with the user’s reality. Essentially, digital interfaces that mutate.

If a user has low vision, isn’t familiar with technology, lives with changing lighting conditions or low data, or has low physical acuity, the design would automatically adapt to that, going beyond responsive design to something that makes UI accessible and engaging for every single user.

Today, to coincide with my session at Droidcon San Francisco, I’m excited to reveal Selene, a basic sample app to demonstrate some early concepts of mutative design.

Selene’s contrast mutation

The app, made in collaboration with developer Francisco Franco, demonstrates two mutations so far: a contrast mutation that responds automatically to changing light conditions, and a “helper” mutation that guides users toward creating a new note if they need help.
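To give a sense of how a mutation like the contrast one might be wired up, here is a minimal Android sketch that listens to the ambient light sensor and flips a high-contrast flag. The ContrastMutation class, the lux threshold, and the onMutate callback are illustrative assumptions, not Selene’s actual implementation.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Hypothetical contrast mutation driven by the ambient light sensor;
// names and thresholds are illustrative, not Selene's real code.
class ContrastMutation(
    context: Context,
    private val onMutate: (highContrast: Boolean) -> Unit
) : SensorEventListener {

    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val lightSensor: Sensor? =
        sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT)

    fun start() {
        lightSensor?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        val lux = event.values[0]
        // In dim environments, mutate toward a higher-contrast palette.
        onMutate(lux < DIM_LUX_THRESHOLD)
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit

    private companion object {
        const val DIM_LUX_THRESHOLD = 50f // illustrative cutoff
    }
}
```

Started from an Activity’s onResume and stopped in onPause, the callback could swap the UI to a higher-contrast theme whenever the lights go down.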

It’s still early days, but mutative design is a quickly evolving idea, and hopefully simple examples like Selene will help developers and designers get on board.

You can find the open-source code and design assets at source.phoebe.xyz, or read the new post coinciding with my DCSF session at hello.phoebe.xyz.

To try Selene for yourself, join the beta program at selene.phoebe.xyz.