Measure mental health conditions with context-specific 'Lifemarker' assessment

Behaviour isn’t random. You likely already know that, but have you ever tried a counterfactual test?

I once tried to act completely randomly at my team’s annual away day. As I walked to the podium, I tried to make my facial muscles twitch, utter random words, snort, laugh, and flap my arms in every direction. The result, as you can imagine, was ridiculous.

And it wasn’t truly random at all. I was still walking to the stage, too self-conscious to let go completely, and trying not to offend or scare anyone. My behaviour was clearly out of the ordinary, and perhaps more importantly, people knew that the expressive behaviour I was displaying just wasn’t me.

It’s fascinating how good we are at recognising people’s typical patterns of behaviour. In our mind’s eye, we can easily visualise the unique expressive behaviours of someone we know. We can readily recall the idiosyncratic ways a friend, colleague, or family member smiles or gestures with their hands. When we’re on the phone with someone, our minds automatically generate a picture of how they might be moving, even though we can’t see them. The specific image may be slightly off, but it’s one of a fairly small number of ways they could be behaving.

This doesn’t just hold for those nearest and dearest to us. It’s interesting how little information we need to build this internal model of someone else’s behaviour. A single meeting is often enough for us to create a realistic mental model. While the accuracy of these models is largely untested, the seminal work on ‘thin slices of expressive behaviour’ by Ambady and Rosenthal showed that people form impressions of others from observations of expressive behaviour as short as 30 seconds, with no substantial gains in accuracy from longer exposures! I’ve personally found that after meeting someone in person at least once, all subsequent Zoom meetings become much easier.

It is clear our expressive behaviour isn’t random - we all have typical ways of behaving in different situations. So if it isn’t random, what determines it? A number of important factors influence our behaviour, including:


  • Social context - how you act when you are alone, or with friends, colleagues, or family.

  • Metabolic energy levels - you probably know the term ‘hangry’.

  • Emotion - your momentary emotional state obviously affects how you behave.

  • Fatigue levels - you can see when someone is falling asleep, and even lower levels of physical or mental fatigue influence your behaviour.

  • Medical conditions - many medical conditions directly or indirectly affect your behaviour.


If, like BLUESKEYE AI, you’re interested in using expressive behaviour as a digital biomarker for mental health or other medical conditions that affect expressive behaviour, it is crucial to accurately attribute observed behaviours to factors such as fatigue levels and the medical condition itself. Other influences, such as social and emotional context, can act as distractors - although knowing someone’s apparent emotion may help contextualise the data further.

The key challenge in obtaining objective and repeatable measures of these conditions is therefore to capture expressive behaviour while minimising the effect of variations in social, metabolic, and emotional context. At the same time, it is beneficial to create scenarios in which people maximally express the signals relevant to the medical condition of interest. For example, to measure facial palsy you would design a task that ensures all facial muscles are used - the facial equivalent of ‘The quick brown fox jumps over the lazy dog’.
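To make that ‘pangram for your face’ idea a little more concrete, here is a minimal sketch of how a task designer might check that a set of prompts covers a target set of facial action units (AUs). The prompts, the AU sets, and the function are illustrative assumptions for this post, not our actual task design.

```python
# Illustrative sketch only: check that a set of task prompts is expected to
# elicit every facial action unit (AU) in a target set, much like a pangram
# covers every letter. The AU sets below are hypothetical examples.

TARGET_AUS = {"AU01", "AU02", "AU04", "AU06", "AU09", "AU12", "AU15", "AU25", "AU26"}

# Hypothetical mapping from task prompts to the AUs they are expected to elicit.
TASK_PROMPTS = {
    "raise your eyebrows": {"AU01", "AU02"},
    "frown": {"AU04"},
    "wrinkle your nose": {"AU09"},
    "smile broadly": {"AU06", "AU12"},
    "show a sad face": {"AU15"},
    "open your mouth wide": {"AU25", "AU26"},
}

def missing_aus(prompts: dict, target: set) -> set:
    """Return the target AUs that none of the prompts are expected to elicit."""
    covered = set().union(*prompts.values()) if prompts else set()
    return target - covered

if __name__ == "__main__":
    gap = missing_aus(TASK_PROMPTS, TARGET_AUS)
    print("All target AUs covered" if not gap else f"Missing AUs: {sorted(gap)}")
```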

To address this challenge, BLUESKEYE AI developed a proprietary context-aware assessment technique, which includes a set of standardised tasks delivered to patients or trial participants through the B-Healthy Platform. These tasks were specifically designed to help assess conditions such as depression, anxiety, fatigue, and facial palsy.

Tasks available today include, among others:


  • Mood diary - gratitude video journalling with an avatar that prompts you to dig just a little bit deeper.

  • Read aloud - read a piece of text from your chosen book. When you return, you can continue where you left off, so you don’t have to repeat the same text over and over.

  • Ball tracking - follow a number of balls on your screen with your gaze.

  • Picture description - say out loud what you see in a randomly generated picture, and how it makes you feel.

  • Facial mimicry - copy the facial expression shown by an actor or avatar.


To obtain an objective measure of a mental health condition, a person’s facial and vocal behaviour is recorded while they complete specific, active tasks, usually on a mobile phone, and always after proper informed consent. Digital behavioural biomarkers are then extracted from the temporal activation patterns of facial muscles, gaze direction, and tone of voice, to name just a few of the available signals.
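As a toy illustration of what such an extraction step could look like, the sketch below turns per-frame intensities of a single facial action unit into a handful of summary statistics. The feature choices and names are deliberate simplifications for this post, not our production biomarkers.

```python
import numpy as np

def toy_face_biomarkers(au12_left: np.ndarray, au12_right: np.ndarray) -> dict:
    """Toy behavioural biomarkers computed from per-frame intensities of
    AU12 (lip corner puller) on the left and right side of the face.

    A deliberately simplified sketch: real pipelines use many more signals
    (gaze, head pose, voice prosody) and far more careful temporal modelling.
    """
    mean_au12 = (au12_left + au12_right) / 2.0
    return {
        # Overall expressivity of the smile-related muscle.
        "au12_mean": float(np.mean(mean_au12)),
        # Temporal variability: how much the activation fluctuates over time.
        "au12_std": float(np.std(mean_au12)),
        # Left/right asymmetry, e.g. of potential interest for facial palsy.
        "au12_asymmetry": float(np.mean(np.abs(au12_left - au12_right))),
        # Fraction of frames where the muscle is clearly activated.
        "au12_activation_rate": float(np.mean(mean_au12 > 0.5)),
    }

# Example with synthetic data: 10 seconds at 30 frames per second.
rng = np.random.default_rng(0)
left = rng.uniform(0.0, 1.0, size=300)
right = left + rng.normal(0.0, 0.05, size=300)  # nearly symmetric face
print(toy_face_biomarkers(left, right))
```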

The same behavioural biomarker can of course be applied to data coming from different tasks. By now I don’t have to tell you that the task context matters. Evidence of a correlation between a medical condition and a biomarker alone is not conclusive - we need to know what the person was doing when their behaviour was recorded, and the best way to control that is to give them a specific task to complete.

This concept of context-specific behaviour assessment is illustrated in the diagram below. Note that this illustration is simplified, showing only a subset of our core technologies and an example of what a very simple behaviour descriptor could be.

Diagram 1: How is a Lifemarker output generated?
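For readers who prefer pseudocode to diagrams, here is a minimal sketch of the same idea: each recording carries its task context alongside the extracted biomarkers, and a placeholder model combines them into a single Lifemarker output. The data structures, the linear weighting, and all the names are illustrative assumptions, not our actual model.

```python
from dataclasses import dataclass

@dataclass
class TaskRecording:
    """A face/voice recording made while completing one specific task."""
    task_name: str      # e.g. "read_aloud", "facial_mimicry"
    biomarkers: dict    # biomarker name -> value, from a feature extractor

def lifemarker_output(recordings: list, weights: dict) -> float:
    """Combine task-specific biomarkers into a single Lifemarker output.

    `weights` maps (task_name, biomarker_name) pairs to coefficients, so the
    same biomarker can contribute differently depending on the task it was
    recorded in. A real model would be learned and clinically validated;
    this linear combination is only a placeholder to show the structure.
    """
    score = 0.0
    for rec in recordings:
        for name, value in rec.biomarkers.items():
            score += weights.get((rec.task_name, name), 0.0) * value
    return score

# Hypothetical usage: the same biomarker is weighted differently per task.
recordings = [
    TaskRecording("read_aloud", {"au12_mean": 0.21, "speech_rate": 3.4}),
    TaskRecording("facial_mimicry", {"au12_mean": 0.65, "au12_asymmetry": 0.08}),
]
weights = {
    ("read_aloud", "speech_rate"): -0.2,
    ("read_aloud", "au12_mean"): -0.5,
    ("facial_mimicry", "au12_mean"): -1.0,
    ("facial_mimicry", "au12_asymmetry"): 2.0,
}
print(lifemarker_output(recordings, weights))
```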

Below is a list of Lifemarkers that we already have substantial evidence for:

Diagram 2: Example Lifemarkers: combinations of tasks and biomarkers suitable for objective measurement of certain medical conditions.

Collecting clinical evidence of how well our Lifemarkers correlate with established endpoints and patient outcomes, for each and every mental health condition, takes time and effort, but it’s what the world needs if we want to enable early detection and better treatment of mental health conditions.

We are always on the lookout for new ideas for interactive tasks that have a strong signal and are engaging for people to complete repeatedly. If you have an idea for a new one, or if you want to try our existing Lifemarkers, let’s have a chat!
