Super-charge your in-cabin UI/UX research with face and voice behaviour AI

Launching a new vehicle is an expensive business, and the pre-production stage is lengthy, with a number of points of no return. Designing the cabin, or cockpit, with all its controls and gauges is an incredibly important step because the design is locked in so early. The interior cabin design is also growing in importance now that so much is changing in the cabin: internet connectivity, digital screens, new sensors, and self-driving capabilities are all transforming what a passenger can and wants to do in the cabin. This means that making mistakes at the user interface and user experience (UI/UX) stage is becoming increasingly costly.

Traditional UI/UX methodology relies on observing test drivers as they try out a static buck (a mocked-up cabin) or a full-scale prototype, either in a simulated virtual reality cave environment, on a test track, or on the real road. The participants' experience is captured through subjective interviews and post-hoc self-report questionnaires. While this is how UI/UX research has been done for what seems like forever, these methods have serious disadvantages. Feedback is usually provided after the test drive, by which time memory has already interfered with the participants' true experience, and it is impossible to pinpoint the exact moments and actions that produced a sense of delight, frustration, or confusion. Given the importance of in-cabin design, is it really good enough to rely on this kind of qualitative feedback from a limited set of users during the design stage of a vehicle?

What if you could automate your user research with fine-grained, objective insights delivered in real time and at high temporal resolution? What if you could measure not only a person's attitude, but also what they were looking at when they expressed that attitude, and what action they were performing at the time? Surely this would give you better data to support key design decisions. It would make your designs better than those of your competitors!

Recent advances in AI, in particular the sensing of expressive behaviour using machine learning, mean that it is now entirely possible to measure the facial expressions, expressed emotions, attitudes, and reactions of a passenger during your user tests, using any camera and microphone. The same technology can do gaze tracking, even from a single camera. And with Euro NCAP's driver monitoring requirements meaning that every new car in Europe will come with built-in cameras, your hardware is already in place! Some companies can detect a small number of discrete emotions, while others, like BLUESKEYE AI, let you measure fine-grained differences in a 2-dimensional emotion space. Now you can do your UI/UX research with a level of precision like never before!
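To make the idea concrete, here is a minimal sketch of what such a per-frame measurement loop might look like in Python. The `EmotionModel` and `GazeModel` classes are hypothetical stand-ins for whatever face-analysis models your vendor supplies (BLUESKEYE's actual SDK and API will differ); only the OpenCV capture calls are real.

```python
# A minimal sketch of a per-frame measurement loop, NOT a real vendor SDK.
# `EmotionModel` and `GazeModel` are hypothetical placeholders.
import csv
import time

import cv2  # OpenCV, for camera capture

from behaviour_models import EmotionModel, GazeModel  # hypothetical module

emotion_model = EmotionModel()  # assumed to return (valence, arousal) in [-1, 1]
gaze_model = GazeModel()        # assumed to return gaze (yaw, pitch) in degrees

cap = cv2.VideoCapture(0)       # the in-cabin camera
with open("session_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "valence", "arousal", "gaze_yaw", "gaze_pitch"])
    for _ in range(30 * 60):    # e.g. one minute of a test drive at ~30 fps
        ok, frame = cap.read()
        if not ok:
            break
        valence, arousal = emotion_model.predict(frame)
        yaw, pitch = gaze_model.predict(frame)
        # One timestamped row per frame, so reactions can later be aligned
        # with logged UI events (a menu opening, a chime, a new screen).
        writer.writerow([time.time(), valence, arousal, yaw, pitch])
cap.release()
```

Because every row carries a timestamp, the emotion and gaze traces can be joined with the UI event log afterwards, which is exactly what lets you pinpoint the moment a participant became confused and what they were looking at when it happened.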

And that’s not all. With many in-cabin user interfaces now being digital, it is possible to install over-the-air updates post-production. But do they have the desired effect? Using the exact same technology, you can now also collect objective, granular, and real-time feedback from actual passengers as they roam the world in the car you designed!

It’s a design dream 🙂

Want to learn more? Go to https://www.blueskeye.com/intelligentcockpits
