Intelligent butler on wheels

"Communicating Devices" Series, Part 2: Wolfgang Wahlster, Head of the German Research Center for Artificial Intelligence, on vehicles that can think.

Professor Wahlster, how will we communicate with vehicles in the future?

Wolfgang Wahlster: I think that we will interact with the highly technical vehicle environment using all of our senses – eyesight, micro-gestures, touch, and speech. Our car will assist us like an intelligent private chauffeur. Upon request, it will inform us about the environment through which we are currently moving. Abroad, it will teach us about the local language and will book our hotel. What's more, it will also make useful suggestions, such as: "On your notepad, it says that you need to go to the dry cleaner's, which is on the way. Shall we make a stop there?"

Our car will assist us like an intelligent private chauffeur.

Wolfgang Wahlster

How precisely can current voice control systems register the complexity of human language and understand its nuances?

Wahlster: Voice recognition is currently experiencing a quantum leap. New techniques for machine learning are allowing us to come ever closer to a natural manner of conversation. The processing of linguistic idiosyncrasies has improved drastically, but we have to be honest about it: a level of understanding like that of a human being has yet to be achieved. In German, for example, there are many metaphors and spontaneous word constructions that are not found in dictionaries. A human can take vague information and build a context; a computer, by contrast, lacks the background knowledge needed to do the same.

Voice recognition is currently experiencing a quantum leap.
Prof. Dr. Dr. h.c. Wolfgang Wahlster, Head of the German Research Center for Artificial Intelligence.

You are an IT specialist and computer linguist researching, among other things, multi-modal voice dialogue systems. How can multi-modality be used with vehicles?

Wahlster: People combine language with gestures, facial expressions, their eyes, and body language. In this way, we are able to minimise ambiguities when expressing ourselves. Imagine that you're in your car, driving past a Baroque castle, and ask out loud: "When was it built?" A purely linguistically-based dialogue system cannot answer this question because it doesn't know what you are looking or pointing at. However, a multi-modal system combines gaze tracking or gesture analysis with voice recognition to process various channels of information: it can speak and simultaneously open a web page or research the opening times of the castle. All of this is possible through artificial intelligence.
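To illustrate the idea, here is a minimal sketch of how such a multi-modal fusion step might resolve the pronoun in "When was it built?" by combining the spoken question with a gaze target. The class, function, and threshold are illustrative assumptions, not the actual DFKI system.

```python
import re
from dataclasses import dataclass

@dataclass
class GazeTarget:
    """Hypothetical output of gaze/gesture analysis: what the driver is looking at."""
    label: str          # e.g. the landmark the driver's eyes rest on
    confidence: float   # 0..1

def resolve_query(utterance: str, gaze: GazeTarget) -> str:
    """Fuse the spoken question with the visual context to resolve the pronoun 'it'."""
    if gaze.confidence < 0.6:
        return utterance  # unsure what the driver means; better to ask back
    return re.sub(r"\bit\b", gaze.label, utterance, flags=re.IGNORECASE)

# Driver looks at the castle and asks out loud:
print(resolve_query("When was it built?", GazeTarget("Ludwigsburg Palace", 0.9)))
# -> "When was Ludwigsburg Palace built?"  (now answerable by a knowledge or web lookup)
```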

Without artificial intelligence, there would be no self-driving vehicles. For many, the thought of giving up control is uncomfortable. Do we perhaps need more feedback from our mobile digital "partner"?

Wahlster: That's an important question. An AI system that is equipped with an explanatory component can make more plausible suggestions. We are working on precisely these capabilities as part of a new project called Tractat – Transfer of Control between Autonomous Systems. We intend the vehicle to act as a dialogue partner and explain future situations before they happen. For example, using car-to-car communication, my car may receive a message that there has been a collision ten kilometres ahead and warn me: "There has been a severe accident, we will now slowly reduce the speed. Please get ready to take back control." At the same time, the steering wheel moves back into position. This way, I have time to adapt to the new situation and take action, before later handing control back to the vehicle and carrying on with my book. This is much safer than autonomous systems that don't warn the driver until the last minute. This system should be perfected in the next two to three years.
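A simplified sketch of the handover sequence described above might look as follows. The message fields, speed target, and actuator names are assumptions made for illustration only, not the Tractat protocol itself.

```python
from dataclasses import dataclass

@dataclass
class CarToCarMessage:
    """Illustrative car-to-car incident report; fields are assumptions for this sketch."""
    event: str
    distance_km: float

def handle_incident(msg: CarToCarMessage, announce, reduce_speed, raise_steering_wheel):
    """Explain the upcoming situation before it happens, then prepare the handover."""
    if msg.event == "severe_accident" and msg.distance_km <= 10:
        announce(f"There has been a severe accident {msg.distance_km:.0f} km ahead. "
                 "We will now slowly reduce the speed. Please get ready to take back control.")
        reduce_speed(target_kmh=60)        # gradual, not last-minute, deceleration
        raise_steering_wheel()             # wheel moves back into driving position
        return "awaiting_driver_takeover"  # driver confirms before control is transferred
    return "autonomous"

# Example wiring with stand-in actuators
state = handle_incident(
    CarToCarMessage("severe_accident", 10.0),
    announce=print,
    reduce_speed=lambda target_kmh: print(f"Reducing speed to {target_kmh} km/h"),
    raise_steering_wheel=lambda: print("Steering wheel moving back into position"),
)
```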

As long as we are still driving ourselves: which artificial intelligence technologies can help the driver?

Wahlster: One of our main areas of research is driver attentiveness and the cognitive strain placed on the driver. If the driver has to focus on merging lanes on a motorway, for example, it is not helpful to simultaneously display warning messages concerning the further course of the journey. The computer system can determine the cognitive strain on the driver and issue such warnings at a later stage. To this end, we are working with biosensors. In contrast to previous methods, we can now create a much more differentiated picture of the driver's preferences and behaviour.
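A minimal sketch of this idea is deferring non-critical warnings while the estimated cognitive load is high. The scheduler below is an illustration under that assumption; the load value stands in for whatever estimate the biosensors would provide, and the threshold is arbitrary.

```python
from collections import deque

class WarningScheduler:
    """Queue non-critical warnings while the driver's estimated cognitive load is high."""

    def __init__(self, load_threshold: float = 0.7):
        self.load_threshold = load_threshold
        self.pending = deque()

    def submit(self, message: str, critical: bool, cognitive_load: float):
        """cognitive_load: 0..1 estimate, e.g. derived from biosensor features."""
        if critical or cognitive_load < self.load_threshold:
            self._display(message)
        else:
            self.pending.append(message)  # hold back until the driver has capacity

    def on_load_update(self, cognitive_load: float):
        while self.pending and cognitive_load < self.load_threshold:
            self._display(self.pending.popleft())

    def _display(self, message: str):
        print(f"[HUD] {message}")

# Example: merging onto the motorway (high load), then relaxed cruising (low load)
sched = WarningScheduler()
sched.submit("Roadworks in 40 km", critical=False, cognitive_load=0.9)  # deferred
sched.submit("Obstacle ahead!", critical=True, cognitive_load=0.9)      # shown immediately
sched.on_load_update(0.3)                                               # deferred warning now shown
```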

Will drivers be able to transfer their profile to other vehicles?

Wahlster: In the field of car sharing especially, self-explanatory interfaces will be particularly important. In the future, when you press your car2go membership card against the vehicle's reader, the vehicle could, for example, automatically apply your preferred cockpit settings, your preferred entertainment, your language profile and even your dialect.
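Such a profile transfer could look roughly like the following sketch: a stored profile is fetched when the card is read and applied to the cockpit. All field names and the backend are hypothetical, chosen only to mirror the preferences mentioned above.

```python
from dataclasses import dataclass, field

@dataclass
class DriverProfile:
    """Illustrative profile a shared vehicle might load when the membership card is read."""
    member_id: str
    language: str = "de-DE"
    dialect: str = "standard"
    seat_position: dict = field(default_factory=lambda: {"height": 5, "distance": 7})
    entertainment: list = field(default_factory=lambda: ["news", "jazz"])

def on_card_read(member_id: str, fetch_profile, apply_settings):
    """Fetch the driver's stored profile and adapt the cockpit to it."""
    profile = fetch_profile(member_id)
    apply_settings(profile)

# Example with a stand-in backend and actuator
on_card_read(
    "card-1234",
    fetch_profile=lambda mid: DriverProfile(member_id=mid, dialect="Saarland"),
    apply_settings=lambda p: print(f"Cockpit set for {p.member_id}: "
                                   f"{p.language}/{p.dialect}, seat {p.seat_position}"),
)
```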

Is the vehicle of the future empathetic – can it mirror emotions?

Wahlster: Emotion recognition is already good enough today for systems to detect whether someone is nervous, aggressive, or irritable. Attempts to reproduce emotions on the system side using photo-realistic faces have, however, failed to date. Humans are extremely sensitive in this respect and notice straight away if a synthesised facial expression is unnatural. Although a vehicle should have an emotional ambience at all design levels, emotional behaviour is better suited to the entertainment field.

Does your research on human-machine communication have a particular motivation or goal?

Wahlster: My main goal is for the technology to assist people in useful ways. Instead of using a mouse, mini keyboard or touchpad, we should be able to control our systems using familiar means of communication – primarily in the form of speech.

It seems as if man and machine are drawing ever closer together.

Wahlster: I think that with a more natural form of dialogue, we will have more trust in our digital systems and learn to accept them as a sort of digital butler. In the case of self-learning robots, which help people with their daily work, we no longer refer to them as “robots” but rather as “cobots” – collaborative robots. As part of a team, a cobot can, for example, pass us heavy objects, which we humans can further process or inspect.

Are you optimistic that the dreaded super intelligence will never come about?

Wahlster: We research and develop in accordance with strict guidelines. Observing ethical standards is part of our scientific work. A super intelligence would run counter to our scientific goals. Who wants to buy a car that does as it pleases? We want intelligent assistants, not an electronic boss. There is no law of nature that proves a super intelligence is impossible to create. However, after 30 years of research, I can say that human intelligence will not be outdone in all areas, even 30 years from now.

Consoling a child, understanding a joke – none of those can be achieved with a machine.

What have you learned to appreciate about human intelligence?

Wahlster: I have gained great respect for human intelligence. A computer can beat a chess master, but it falters in the face of seemingly ordinary tasks. Changing a SIM card in a mobile phone, consoling a child, understanding a joke – none of those can be achieved with a machine. A computer is no match for a craftsman restoring historical stucco work.

Yet humans have already been beaten in the field of cognitive intelligence. In which fields are we better?

Wahlster: Humans have high sensorimotor, emotional, and social intelligence. What's more, the human brain also uses less energy than comparable supercomputers. It can handle vagueness, uncertainty, and incomplete information. And another important thing is that humans can learn from very few examples. They see a tennis serve, copy it twice, and can then do it – perhaps not very well, but they can do it. No present-day AI system can learn so quickly and playfully; instead, it requires extremely complex training data.

What's more, the human brain also uses less energy than comparable supercomputers.

When do you see self-driving cars being commonplace?

Wahlster: Ten, or even five, years ago, many people had doubts about the success of autonomous driving. Today, every manufacturer is on board. What's pleasing to see is that the German vehicle industry and its suppliers are leading the way in terms of patents for autonomous systems. Personally, I'm looking forward to vehicles that can drive into the city autonomously to pick me up and later – after they have dropped me off – find their own way to a parking area outside the city, all with minimal emissions. That's a great vision.

 

Prof. Wolfgang Wahlster is a professor of computer science at Saarland University and the technical-scientific director and chairman of the management board of the German Research Center for Artificial Intelligence (DFKI), founded in 1988. His current research areas include multi-modal voice dialogue systems and user-adaptive assistance systems for Industry 4.0 and autonomous systems. His research has been recognised with the German Future Prize, presented by the German President, as well as honorary doctorates from the universities of Darmstadt, Linköping and Maastricht. He is a member of the Nobel Prize Academy in Stockholm as well as the German National Academy of Sciences Leopoldina.

The DFKI, with its locations in Kaiserslautern, Saarbrücken and Bremen (with a branch in Osnabrück) and a project office in Berlin, is the leading research establishment in Germany for innovative software technologies based on artificial intelligence methods. In the international scientific community, the DFKI is considered one of the most important "Centers of Excellence" and is currently the world's largest research centre in the field of artificial intelligence and its applications, measured by number of employees and third-party funding. DFKI projects cover the entire spectrum, from application-oriented basic research to market- and customer-oriented development of product functions. The DFKI currently employs more than 800 people from approximately 60 nations.
