Humanising Autonomy, or: How Do Humans Communicate With Self-Driving Cars?

One of the unexpected effects of self-driving cars and artificial intelligence is that professions such as architecture, psychology, and behavioral science are suddenly in the spotlight. We are surrounded by more and more machines and systems that we interact with: not only smartphones and computers, but increasingly robots, and self-driving cars will soon be among the largest robots around us. With that, professions that have stayed far too long in their niche are becoming hot commodities in other top technology fields.

Maya Pindeus, in fact, holds a degree in architecture from the University of Applied Arts in Vienna. For her final project at Imperial College London she began analyzing the gestures pedestrians use to interact with car drivers, and the topic stuck with her. Together with two of her colleagues, Leslie Nooteboom and Raunaq Bose, she co-founded Humanising Autonomy. The startup is focused on building a database and deep learning platform for human body language that helps machines understand gestures and behaviors and make better decisions.

Example

Here is an example: a pedestrian approaches a street crossing and signals to the driver, with a small hand gesture, that she wants to cross the street.

[Animation: HA_01.gif]

What is easy for humans to understand becomes a challenge for the machine. Once the machine has recognized that the object is in fact a human, it has to recognize in a next step that the human is making a gesture. But is this gesture meant for the car or not? And what is the context of the gesture? Is the human giving the signal a pedestrian who wants to cross the street, or a police officer asking the car to stop for an inspection? Or is the person making the gesture someone with nefarious intent? A person kneeling down may just be fixing her shoelaces, or may be a runner getting ready for a quick dash.
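To make this staged interpretation concrete, here is a minimal sketch in Python. The class names, gesture labels, and decision rules are illustrative assumptions of mine, not Humanising Autonomy's actual model; the sketch only shows the three questions the machine has to answer in order: is it a human, is there a gesture, and what does the gesture mean in context.

```python
# Hypothetical sketch of the staged interpretation described above.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Actor(Enum):
    PEDESTRIAN = auto()
    POLICE_OFFICER = auto()
    RUNNER = auto()


class Intent(Enum):
    WANTS_TO_CROSS = auto()
    STOP_FOR_INSPECTION = auto()
    NOT_ADDRESSING_CAR = auto()
    UNKNOWN = auto()


@dataclass
class Detection:
    is_human: bool
    gesture: Optional[str]   # e.g. "hand_wave", "kneeling", or None
    actor: Optional[Actor]   # context: who is making the gesture


def interpret(det: Detection) -> Intent:
    """Stage 1: is it a human? Stage 2: is there a gesture? Stage 3: meaning in context."""
    if not det.is_human or det.gesture is None:
        return Intent.NOT_ADDRESSING_CAR
    if det.gesture == "hand_wave":
        if det.actor is Actor.POLICE_OFFICER:
            return Intent.STOP_FOR_INSPECTION
        if det.actor is Actor.PEDESTRIAN:
            return Intent.WANTS_TO_CROSS
    if det.gesture == "kneeling":
        # Could be tying shoelaces or a runner about to dash; stay cautious.
        return Intent.UNKNOWN
    return Intent.UNKNOWN


print(interpret(Detection(is_human=True, gesture="hand_wave", actor=Actor.PEDESTRIAN)))
```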

A machine first has to recognize the visual input, then the sequence of micro gestures, and then the context in which they occur. Humans in different cities also behave differently, and even within the same city the differences between parts of town can be significant, much like local dialects, and they also depend on the time of day. People going to work at 8 am behave differently than pub crawlers at 11 pm. The better an autonomous car understands the intent of people, the better it can react.
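The point about neighborhoods and times of day amounts to conditioning intent estimates on context. The toy function below, with districts, hours, and probabilities I invented purely for illustration, shows the idea of a behavioral prior that shifts with where and when an observation is made.

```python
# Toy example only: a prior over "wants to cross" that shifts with district and
# hour of day. Districts, thresholds, and numbers are invented for this sketch.
def crossing_prior(district: str, hour: int) -> float:
    """Rough probability that a person standing at the curb intends to cross."""
    base = 0.5
    if district == "business" and 7 <= hour <= 9:
        base += 0.2   # morning commuters tend to cross purposefully
    if district == "nightlife" and hour >= 22:
        base += 0.2   # late-night crowds step into the road more readily
    return min(base, 0.9)


print(crossing_prior("business", 8), crossing_prior("nightlife", 23))
```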

The visual input feeds into a behavior model that Maya Pindeus and her co-founders want to create. The database is fed with camera data from industry and academia; body language is then classified, and explicit and implicit gestures are tagged. This year the startup wants to build a first prototype for explicit gestures, followed by micro gestures. The company is not working on a software stack but on a database of human intent. The algorithms can later also be applied to scenarios from other industries and use cases.
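What a single entry in such a human-intent database might look like can be sketched as follows. The field names and example values are my assumptions for illustration; the article only says that body language is classified and that explicit and implicit gestures are tagged.

```python
# Hypothetical annotation record for a human-intent database.
from dataclasses import dataclass, field
from typing import List


@dataclass
class GestureRecord:
    clip_id: str                 # source video clip from an industry/academic dataset
    gesture_label: str           # e.g. "hand_wave", "head_turn", "lean_forward"
    explicit: bool               # True for deliberate signals, False for implicit body language
    actor_category: str          # e.g. "pedestrian", "cyclist", "police_officer"
    inferred_intent: str         # e.g. "wants_to_cross"
    conditions: List[str] = field(default_factory=list)  # e.g. ["rain", "night"]


record = GestureRecord(
    clip_id="clip_0001",
    gesture_label="hand_wave",
    explicit=True,
    actor_category="pedestrian",
    inferred_intent="wants_to_cross",
    conditions=["daylight", "dry"],
)
print(record)
```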

The first focus will be on the gestures and intents of humans in city traffic. The founders already have partnerships with Daimler and the University of Leeds, with the aim of creating a first Minimum Viable Product (MVP). It will cover typical urban traffic situations, starting with pedestrians crossing a street. A list of categories will be run through, such as a child, an adult, or an elderly person with a cane, all of that under different conditions such as rain or sun, in the evening or during daylight, and so on.
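Running categories against conditions is essentially building a test matrix. A toy enumeration might look like this; the category and condition lists are illustrative, not the actual scope agreed with Daimler or the University of Leeds.

```python
# Toy test matrix: pedestrian categories crossed with weather and lighting conditions.
from itertools import product

categories = ["child", "adult", "elderly_with_cane"]
weather = ["sun", "rain"]
lighting = ["daylight", "evening"]

scenarios = [
    {"pedestrian": c, "weather": w, "lighting": l}
    for c, w, l in product(categories, weather, lighting)
]
print(len(scenarios), "scenarios, e.g.", scenarios[0])
```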

A technical precondition is cameras with a resolution high enough to recognize such gestures: which fingers are raised, and where are the person's eyes and face looking?
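A rough back-of-envelope calculation shows why resolution matters. The sketch below uses a simple pinhole-camera approximation with sensor and lens values I assumed for illustration; it is not a specification from the startup.

```python
# How many pixels a ~9 cm wide hand covers at a given distance (pinhole approximation).
import math


def pixels_on_target(object_width_m, distance_m, horizontal_res_px, hfov_deg):
    """Approximate horizontal pixel count covered by an object at a distance."""
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return object_width_m / scene_width_m * horizontal_res_px


# Assumed example: 1920 px wide sensor with a 60 degree horizontal field of view.
for d in (10, 20, 40):
    print(f"{d:>3} m: ~{pixels_on_target(0.09, d, 1920, 60):.0f} px across a hand")
```

At these assumed values a hand shrinks to only a handful of pixels beyond about 20 meters, which is why finger-level or gaze-level cues demand high-resolution cameras.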

Pilot projects and partners

The first pilot projects are being carried out this year, with more to follow with additional development partners. 2019 should then see the first version ready for integration into existing autonomous systems via APIs and Software-as-a-Service licensing.

Currently Maya and her co-founders are busy closing their first investment round (closing in February) and searching for additional partners (OEMs, and especially those with a lot of data). And yes, the startup is hiring: they are looking for deep learning experts, developers, and cognitive science majors.

Humanising Autonomy will hold a panel discussion at SXSW on March 10th. And here is the link to the Humanising Autonomy blog.

This article has also been published in German.