Empa

Creating an emotionally aware Theory of Mind study

Case study
1. The Problem

Emotional awareness metrics are extremely limited.

Looking at a person and asking, "are they happy?" is a luxury most of us take for granted. And even though researchers and engineers alike have advanced computer vision to the point where we can accurately track facial expressions...

...correlating these facial metrics with actual behavioral characteristics remains an incomplete practice.

Other methods and apps have tried to fill in these gaps, but many fall short.

Duke University's app, Autism and Beyond, for example, mostly just looks at observational data of children in response to various stimuli. While this is certainly a window into the intricacies of ASD or any emotional processing condition, it falls short at examining the distinct differences between primed and non-primed emotional behavior.

Countless other apps floating around the App Store also do a rudimentary job of simply assessing behavioral metrics without consideration for changes in facial expression.

This is where Empa seeks to fill in the gaps.

2. The Process

Ideation & Rapid Prototyping

"Let's make this into a game"

Original sketches

Empa's prototyping process started with an extremely basic game, essentially just taking an emoji and having the user replicate that emoji with their own facial expressions to gain a point.

Some original sketches to figure out basic gameplay features.

After some tinkering, I created a prototype of this basic game as an iOS app written in Swift and Objective-C. It uses the streaming Affectiva API to process my facial reactions in real time.

Initial working prototype
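A heavily simplified sketch of what that game loop looked like, with the Affectiva-specific streaming callback abstracted behind a hypothetical EmotionFrame type; the names and threshold here are illustrative, not the production code:

```swift
import Foundation

// Hypothetical per-frame output from the streaming emotion detector
// (in the real app, values like these arrive via Affectiva's callbacks).
struct EmotionFrame {
    let timestamp: TimeInterval
    let joy: Double        // 0...100
    let sadness: Double    // 0...100
}

// The target emoji the player has to imitate.
enum TargetEmoji {
    case happy   // 😊
    case sad     // 😞
}

final class ImitationGame {
    private(set) var score = 0
    private let threshold: Double = 60   // how strongly the expression must register

    /// Called for every frame the detector produces; awards a point
    /// when the player's expression matches the current target.
    func process(frame: EmotionFrame, target: TargetEmoji) -> Bool {
        let matched: Bool
        switch target {
        case .happy: matched = frame.joy >= threshold
        case .sad:   matched = frame.sadness >= threshold
        }
        if matched { score += 1 }
        return matched
    }
}
```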
 

Then, after presenting my work (and by presenting I mean tapping on Dr. Gregory Abowd's shoulder and showing him the app), he pointed me to Dr. Rosa Arriaga, who is currently advising my work and has guided me through this process.

Experimental & Interaction Design

"What is the relationship between the viewer and the viewed?"

Here, the cycle of consuming content and producing a reaction forms the basis of how we explore the relationship between media and social interactions.

We realized that the key to discovering new insight into this issue was to take a closer look at this relationship over time. What questions could we ask then? With time-series analysis, we no longer have just a single overall response to analyze, e.g., "User A viewed Video B. They had reaction C."

The calculus of emotions

Being the math geek that I am, I always try to find underlying mathematical relationships in human behavior. Within the context of ASD, we can very easily track relative changes in emotion over time by simply measuring the slope of whatever facial expression is being measured at any given moment in time. In the graph below, we can see happiness being tracked over time for some user.
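As a rough sketch of that idea (the notation here is my own shorthand, not a formal model): if E(t) is the detected intensity of an expression, say happiness, at time t, the relative change we care about is just its slope, approximated from successive frames:

```latex
% E(t): detected happiness intensity at time t, sampled once per frame
\frac{dE}{dt} \;\approx\; \frac{E(t + \Delta t) - E(t)}{\Delta t},
\qquad \Delta t = \text{time between frames}
```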

"How can oberve changes in emotion over time?"

What if we add another variable?

The problem with simply measuring emotions over time is that it only pays attention to one part of media consumption: involuntary reaction to content. If we add another variable, say judgment behavior, we can gather far more insight and see how emotional reactions relate to, in Empa's case, judging images.

In calculus, we end up taking partial derivatives, where we observe changes in judgment with respect to both time and facial expressions.

Each point (J, E, T) on the graph thus doesn't represent just a user's emotional expression at a given time, but rather a 3-dimensional emotional state: their judgment, their expression, and the moment in time.
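Written out (again, my own shorthand rather than a formal model), judgment J is treated as a function of both facial expression E and time t, so the quantities of interest are the partial rates of change, and every recorded sample is one point in that three-dimensional space:

```latex
% J: judgment value, E: facial expression intensity, t: time
J = J(E, t), \qquad
\frac{\partial J}{\partial t}, \quad \frac{\partial J}{\partial E}

% each recorded sample is a 3-dimensional emotional state
(J_i, E_i, T_i), \quad i = 1, \dots, n
```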

How can we design interactions in layers?

Science and discovery ultimately come back to being able to ask good questions, which is why the three "layers" of interaction in Empa manifest themselves as three variables and questions we want to analyze:

The what, the how, and the why.

On a basic level, we have touch and sight. Recording these interactions comes down to simple questions like:

"Which button did they press?"

These kinds of questions offer relatively little insight into behavior, but allow us to answer the what of user actions.

These questions ultimately lead us to dig deeper into user actions, allowing us to analyze our users in two dimensions and measure the how of change:

"Which buttons did they press at 1s, 1.1s, and 1.2s, and how did their choice patterns change?"

Finally, we can make our data 3D. We now have three variables/interactions to look at and gather insight from, and begin to answer the why of change.

"What are the emotional reactions as they touch the buttons at 1s, 1.2s, and 1.3s, and why did their choice patterns change?"

Making UX modular

All of Empa's UX revolves around two central themes: modularity and layers.

Here are some sketches for basic treatment flows and the technical organization of the interface.

Users will be presented with an image that they have to judge by rating it on a slider from 😞 to 😊.
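A minimal SwiftUI sketch of that judgment interaction, assuming a normalized slider from 😞 (0.0) to 😊 (1.0); the real interface and the recording behind it are more involved:

```swift
import SwiftUI

// Minimal judgment screen: the user rates the image on a 😞...😊 slider.
struct JudgmentView: View {
    let image: Image
    @State private var rating = 0.5               // 0.0 = 😞, 1.0 = 😊
    var onSubmit: (Double) -> Void = { _ in }      // called with the final rating

    var body: some View {
        VStack(spacing: 24) {
            image
                .resizable()
                .scaledToFit()
            HStack {
                Text("😞")
                Slider(value: $rating, in: 0...1)
                Text("😊")
            }
            Button("Next") { onSubmit(rating) }
        }
        .padding()
    }
}
```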

Priming users

In order to observe differences in bias between users, we took our initial prototyping game interface (which had users imitate a given emoji on the screen), and combined it with our judgment interface.

Experimental Group 1: Users have to imitate a 😊 in order to continue
Experimental Group 2: Users have to imitate a 😞 in order to continue
Control Group: Users continue directly to judge images without priming.
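A small sketch of how those three conditions could be represented and assigned, assuming simple random assignment (the actual study follows whatever assignment scheme the protocol specifies):

```swift
import Foundation

// The three study conditions from the priming design.
enum StudyGroup: CaseIterable {
    case primedHappy   // must imitate 😊 before judging
    case primedSad     // must imitate 😞 before judging
    case control       // judges images with no priming step
}

// Naive uniform random assignment; a real study might counterbalance instead.
func assignGroup() -> StudyGroup {
    StudyGroup.allCases.randomElement()!
}
```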

A new model for studies

From these sketches and thoughts about "layering" interaction, we designed a study with three primary goals in mind:

1. Decentralize data collection and make a mobile-first, distributed study.

One of the primary problems with data collection in science is not only how unregulated and unstandardized it is, but that collection is often limited to having test subjects come to a lab, where various pieces of data are collected with complicated, immobile equipment.

Empa lives inside of an app, and can be taken anywhere at any time. This places Empa into a category of scientific tools that makes it especially suited for field data collection.

2. Identify differences between primed and non-primed emotional judgment behavior.
3. Correlate changes in facial affect with emotional judgment tasks, rather than strictly emotional observation tasks.

The key here is that, underneath every emotional judgment task, we use artificial neural networks to analyze the user's facial expressions in real time. So, instead of simply observing these changes while the user watches a video, we get to see how behavior changes over the course of each judgment task and automatically reference the facial data at the corresponding times.
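A hedged sketch of that cross-referencing step: for every judgment event we can look up the facial-affect frame closest in time, so each rating comes paired with the expression data recorded at roughly the same moment. The names and tolerance value are illustrative only:

```swift
import Foundation

struct AffectFrame {
    let timestamp: TimeInterval
    let joy: Double
}

struct JudgmentEvent {
    let timestamp: TimeInterval
    let rating: Double            // 0.0 (😞) ... 1.0 (😊)
}

/// Pairs each judgment with the facial frame closest in time,
/// dropping judgments with no frame within `tolerance` seconds.
func align(judgments: [JudgmentEvent],
           frames: [AffectFrame],
           tolerance: TimeInterval = 0.2) -> [(JudgmentEvent, AffectFrame)] {
    judgments.compactMap { judgment -> (JudgmentEvent, AffectFrame)? in
        guard let nearest = frames.min(by: {
            abs($0.timestamp - judgment.timestamp) < abs($1.timestamp - judgment.timestamp)
        }) else { return nil }
        guard abs(nearest.timestamp - judgment.timestamp) <= tolerance else { return nil }
        return (judgment, nearest)
    }
}
```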

The funny thing about science is that data collection apps and methods are often remarkably terrible to use and unstandardized. It was one of my goals to make the data collection process not only smooth for the test subject, but also for the researcher in question (me).

I thus designed the task flows of the test subject and the researcher as pieces that fit into one another.

3. The Solution

Current beta release:

 

Currently, Empa is in the process of getting IRB approval from Emory to gather official data from both neurotypical and neuro-atypical individuals. Plans for the app are to improve its flexibility, gather test groups with other emotional processing conditions (namely PTSD and anxiety), and perhaps even see what effects different drugs have on the results of different experimental groups.
