Mobility you can wear

Slip into it and travel even faster: The researcher Tamim Asfour from Karlsruhe is developing a “robot suit” that will be a personal mobility system.

Professor Asfour, you are researching humanoid robots — and robots you can wear. Can you explain these concepts to us?

On the basis of humanoid technology, we’re working on the development of robot suits, which are also known as exoskeletons. You can think of such a suit as a hollow humanoid robot that you can slip into — in other words, that you can wear. Its purpose is to expand and support human capabilities, such as carrying heavy loads. In addition, we want the robot suit to play a role in the state-of-the-art rehabilitation of patients whose movement has been impaired as a result of injury.

What would it be like to move around when you’re wearing such a suit?

An exoskeleton is a personal mobility system of the future. You slip into the suit, and it takes you everywhere. The robot suit will be able to see and navigate. With the suit you’ll be able to run faster and carry heavy shopping bags, for example. It will be something like a personal taxi for moving around in a city.

Which traffic routes could we travel in our wearable robots?

In a wearable robot you could travel on the street, in the air, and underwater. However, there are still some huge challenges. On the one hand, the challenges lie in the mechatronic development of the suits, which have to be lightweight, energy-efficient, and personalized. On the other, there are challenges in the interface with the human body. We don’t want to attach anything to the human body or put anything into it, but we nonetheless want to create an intelligent control system with which you can manage the suit intuitively. We also want to equip the robot suit with autonomy-enhancing functions. We’re working on artificial intelligence systems so that the suit can register its surroundings and predict human behavior. On that basis it will identify the tasks it needs to perform and then plan and execute its movements.

If I were moving within a robot suit, would my own arm still have anything to do?

You would always have control over your natural arm and your artificial one. The suit would recognize your intention, for example on the basis of where you are directing your gaze or what movements you are just starting to make. It would use these perceptions to deduce what you want to do, such as stretching out your arm to open a door. The system could also correct movements — for example, movements that an assembly line worker repeatedly makes incorrectly.
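
To make this concrete, here is a deliberately tiny sketch of the idea, assuming (purely for illustration) that gaze direction and early hand motion are summarized as a feature vector and matched against prototypes of known intentions. The feature names and the nearest-prototype rule are assumptions of this sketch, not the suit’s actual control system.

```python
# Illustrative sketch only: a toy intent recognizer in the spirit described
# above. The features and prototypes are invented for this example.
import numpy as np

# Hypothetical prototypes: (gaze azimuth, gaze elevation, initial hand
# velocity toward the gazed-at target) for two previously observed intents.
PROTOTYPES = {
    "open_door": np.array([0.9, 0.0, 0.4]),   # gaze on handle, hand reaching out
    "grasp_cup": np.array([0.1, -0.5, 0.2]),  # gaze down at table, slow reach
}

def predict_intent(observation: np.ndarray) -> str:
    """Return the intent whose prototype is closest to the observation."""
    return min(PROTOTYPES, key=lambda k: np.linalg.norm(observation - PROTOTYPES[k]))

print(predict_intent(np.array([0.8, 0.1, 0.5])))  # -> "open_door"
```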

A new take on package delivery: Could your humanoid robots open garden gates and carry packages from a van to a recipient?

Not at present, because surroundings of this kind change very fast. We’ve already developed many functionalities, such as grasping things and setting them down. But the robot’s ability to adapt to new situations and surroundings is not mature enough so far.

Could humanoid robots also be used in automobile production?

Of course! Their use in automobile production would enable us to make production processes more flexible. Exoskeletons could support workers doing manual tasks or tasks that require them to stretch their arms up above their heads. For the factories of the future, we can imagine robots that can quickly reconfigure themselves. When the assigned tasks change, the factory would adapt itself at every level, ranging from the arrangement of the machines within a facility to the individual robots.

Why are you building machines that are modeled on human beings?

We want robots to cooperate with human beings constructively in the spaces where people move around and handle tools and other objects. In the course of evolution, these spaces have been optimized for human beings. So when we look for systems that can orient themselves within these environments, we obviously use the shape of the human body as a model.

Do human beings and humanoid robots form good work teams?

Because humanoid robots are modeled on human beings, people can predict their behavior more accurately. These cobots, or collaborative robots, are flexible mobile systems. You don’t have to be afraid of them, because they yield in the same way that people do.

A British online supermarket is already using a robot you developed to support its service technicians.

The ARMAR-6 robot is definitely a milestone. It already obeys spoken commands. If you ask it to hand you a wrench, it answers by asking which wrench you mean. The robot can already predict what an individual will do, and then react proactively.

What does all this mean for the future of work?

The past has shown that the higher the degree of automation, the better the quality of people’s daily life and work. To plan work along these lines, the best thing would be for people to cooperate more closely with scientists today. In most cases, people don’t react until a new technology already exists.

You teach at the Institute of Anthropomatics and Robotics at the Karlsruhe Institute of Technology. What is anthropomatics?

Anthropomatics is the science of the symbiosis between human beings and machines. We are doing research to find out how we can develop technical systems that meet the needs of human beings. In order to do that, we need to understand the motor functions, perceptions, and information processing of human beings. The special thing about our work in Karlsruhe is that we are developing not only methods and individual components but entire systems that combine mechatronics, computer science, and artificial intelligence. Our goal is the comprehensive engineering of cooperative supportive robots that carry out a variety of tasks in interaction with human beings. We also want these robots to be able to learn from people and from their own experience.

How are you teaching robots to perform actions in the same way that human beings do?

Human movements are extremely complex. They are always determined by the object in question. A cup stays firmly in my hand because I have contact points with it and because I exert pressure on these contact points. A balance arises between the various forces. There’s a similar situation when we climb stairs. A human being in motion is an object in the hand of his or her surroundings, so to speak. On the basis of this idea, we came up with 46 different basic positions for the entire body. We derived all of this data from our observations of human beings. We’ve got the world’s biggest database of human movements right here.
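
One way to picture such a taxonomy, as a rough sketch: index each whole-body pose by the set of contacts the body makes with its surroundings. The pose names and contact sets below are invented for illustration; the institute’s actual 46-pose taxonomy is not reproduced here.

```python
# Minimal sketch: a whole-body "support pose" defined by its environment
# contacts, in the spirit of "the body as an object in the hand of its
# surroundings". All names here are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class SupportPose:
    name: str
    contacts: frozenset  # body parts currently in contact with the environment

STAND      = SupportPose("stand",      frozenset({"left_foot", "right_foot"}))
STAND_HOLD = SupportPose("stand_hold", frozenset({"left_foot", "right_foot", "right_hand"}))

def transition_adds_contact(a: SupportPose, b: SupportPose) -> bool:
    """True if pose b keeps all of a's contacts and adds at least one more."""
    return a.contacts < b.contacts

print(transition_adds_contact(STAND, STAND_HOLD))  # True
```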

In other words, you’ve developed something like an alphabet of movement.

We were inspired by natural speech, which can be expressed through a limited number of letters. With these letters, human beings can form words, and with the help of grammar we can form sentences. We assume that, in similar fashion, there is a vocabulary of basic movements in the human body and brain, and that the “letters” of this vocabulary can be put together to form complex actions. If we transfer this to robotics, grasping something is a kind of letter — in other words, a basic movement. Possible “sentences” could consist of gripping and cutting, gripping and wiping, or gripping and setting down. In the next step, we can use this taxonomy to generate complex activities, such as setting a table.
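
As a toy illustration of this letters-to-sentences idea: treat each basic movement as a symbol in a small vocabulary and an action as a validated sequence of symbols. The primitive names and sequences below are invented; only the compositional principle comes from the interview.

```python
# Sketch of a "grammar of movement": primitives as letters, short
# sequences as sentences. Vocabulary and sequences are hypothetical.
PRIMITIVES = {"grasp", "cut", "wipe", "place", "pour"}

def make_sentence(*letters: str) -> list[str]:
    """Validate and return a sequence of known primitives."""
    unknown = [l for l in letters if l not in PRIMITIVES]
    if unknown:
        raise ValueError(f"unknown primitives: {unknown}")
    return list(letters)

# "Sentences" named in the interview: gripping and cutting, gripping and
# wiping, gripping and setting down; longer sequences build complex tasks.
cut_bread = make_sentence("grasp", "cut")
set_table = make_sentence("grasp", "place", "grasp", "place", "grasp", "pour")
print(cut_bread, set_table)
```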

How does a robot learn from a human being?

For example, let’s assume we want a robot to run for six meters along a railing and lean with its hand on the railing for support. The best algorithms we know need hours to find a solution for this problem. With the help of our taxonomy of basic body positions, we need only seconds to generate this movement. We’re working to enable robots to learn by watching human beings. Repeatedly performing and saving this experience would enable the robot to improve its own capabilities.
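
A rough sketch of why such a taxonomy makes planning fast: instead of searching a high-dimensional joint space, the planner can search the small graph of basic support poses. The graph below is a made-up fragment, not the institute’s actual data.

```python
# Breadth-first search over a tiny hypothetical pose graph for the
# "walk along a railing while leaning on it" example.
from collections import deque

GRAPH = {
    "stand":              ["stand_hand_on_rail"],
    "stand_hand_on_rail": ["step_hand_on_rail", "stand"],
    "step_hand_on_rail":  ["stand_hand_on_rail", "goal_stand"],
    "goal_stand":         [],
}

def plan(start: str, goal: str) -> list[str]:
    """Return a sequence of support poses from start to goal, if any."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(plan("stand", "goal_stand"))
```

Because the graph of basic poses is tiny compared with the robot’s full configuration space, a search like this finds a pose sequence in a fraction of a second; the detailed joint motion is then generated only along that sequence.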

Is your method of representing human movements unique? Where do you stand in an international comparison?

At any rate, we’re among the global leaders in this field. Last December we published part of our work in the journal Science Robotics. In our article we show how robots learn their movements by observing human beings and perform them in real time.

Is a machine more likely to be accepted by people if it looks similar to a human being?

Interestingly enough, this is the case only to a certain extent. There is a phenomenon known as the “uncanny valley”: The more similarities there are between a human being and a machine, the greater people’s acceptance of the machine; however, acceptance suddenly decreases if the similarity is too great, and it only starts to increase again when the machine becomes impossible to distinguish from a human being.

What can people learn from robots?

A person wearing a robot suit could learn how to play the piano or dance a ballet as soon as he or she accepts the suit as an extension of his or her own body. The way this happens is similar to the feeling you have when you hold an electric screwdriver and briefly regard it as an extension of your hand. This three-dimensional representation of your own body in your brain is very adaptive. And of course, if you’re wearing a complete suit this representation changes a lot more. The overall weight, the way movements are carried out — all of that has to be assimilated into the representation of the body. If that happens, both systems will function as one.

Should we be worried about the possibility that robots might go beyond our instructions?

We are very far from that possibility. We’ll have to understand a great deal more about human beings before we can accurately reproduce their abilities through mechatronics alone. And I’m not even talking about intelligence. The best control system of all time is the human brain. We want to learn from human beings as far as possible. Human beings are the source of our inspiration. Although the robot suit will have certain capabilities, its purpose is only to augment human ones. It’s not an autonomous system — it’s a body-hugging assistance technology.

But in certain cognitive areas, robots are already far ahead of us, aren’t they?

These are generally isolated individual capabilities that have been developed after a long period of research. Of course, we’re aware of these fears held by some members of the general public. That’s why it’s important to think about the ethical, legal, and social implications of this technology at a very early stage. The first thing we teach our students in the introductory lecture about robotics is Isaac Asimov’s Three Laws of Robotics. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Where will robotics be ten years from now?

Robotics is a key technology of our century, and it will make a crucial contribution to improving people’s quality of life. I hope we will have genuine cooperative robots that can be used without hesitation in daily interactions with human beings.

Father of a humanoid robot family

Tamim Asfour is one of the most renowned scientists in the field of humanoid robotics. He is a professor at the Institute of Anthropomatics and Robotics (IAR) in the Department of Computer Science at the Karlsruhe Institute of Technology, where he has held the Chair of High-performing Humanoid Technologies as a professor of humanoid robotic systems since 2012. His main research interest is high-performing 24/7 humanoid robotics, with a focus on humanoid robot systems that combine the mechano-informatics of complete systems with the abilities to predict, act, and learn from human demonstration and sensorimotor experience. Tamim Asfour is the developer and head of the humanoid ARMAR robot family.

The concept of anthropomatics was defined by professors in Karlsruhe as the science of the symbiosis between human beings and machines. It refers to a field of research that focuses on human-centered issues. The aim is to use computer science to research and develop technical systems that meet the needs of human beings. The prerequisites for that are a basic understanding and modeling of human beings, especially their anatomy, motor functions, perceptions, information processing, and behavior.

The cobot ARMAR-6 was developed as part of the SecondHands project, which is funded by the European Union with €7 million. Its aim is to develop a robot system that can assist maintenance workers. The project is being coordinated by the British online supermarket Ocado, in whose warehouses the robot and its control technology will undergo initial testing. The system is meant to be used by maintenance technicians who work in the automated warehouse. The complete hardware, software architecture, and methods of intelligent grasping and natural-language dialogue come from the Institute of Anthropomatics and Robotics (Prof. Tamim Asfour) at the Karlsruhe Institute of Technology.

Isaac Asimov introduced his “Three Laws of Robotics” in his 1942 short story “Runaround.” They are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Many actions of a robot are programmed, and it is difficult to adapt them to new situations. To avoid this difficulty, the actions must be learned under the guidance of a human being. The concept of “movement primitives” is based on the assumption that the human brain contains a vocabulary of primitive basic movements that can be combined to represent complex actions. One research focus of the institute in Karlsruhe is therefore “dynamic movement primitives.” This way of representing movement makes it possible for robots to learn a demonstrated movement in a generalized and adaptive form. The movement can then be adapted to new situations simply by specifying new start and end points. An alternative is to use movement primitives based on Hidden Markov Models (HMMs). To learn a movement, these extract shared characteristic key points from several demonstrations. Adapting these key points and executing them in sequence makes it possible to reproduce the learned movement. All the movement primitives are supplemented by a description of the actions and their context and then saved in a movement library.
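
As a minimal sketch of the first idea, here is a one-dimensional dynamic movement primitive. The gains and step size are illustrative defaults, and the regression step that fits the forcing term to a demonstrated movement is omitted; this is not the institute’s actual implementation.

```python
# Minimal 1-D dynamic movement primitive (DMP) sketch. With a zero forcing
# term the system still converges smoothly from start to goal; in practice
# the forcing-term weights would be fit to a demonstration by regression.
import numpy as np

def dmp_rollout(y0, g, tau=1.0, dt=0.001, alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
    """Integrate a discrete DMP from start y0 to goal g."""
    y, v, x = y0, 0.0, 1.0
    trajectory = []
    for _ in range(int(tau / dt)):
        f = 0.0  # a learned forcing term f(x) would shape the path here
        v += dt / tau * (alpha_z * (beta_z * (g - y) - v) + f)
        y += dt / tau * v
        x += dt / tau * (-alpha_x * x)  # canonical phase variable decays to 0
        trajectory.append(y)
    return np.array(trajectory)

# Adapting to a new situation is just a matter of new start and end points:
print(dmp_rollout(y0=0.0, g=1.0)[-1])   # ~1.0
print(dmp_rollout(y0=0.5, g=-0.2)[-1])  # ~-0.2
```

Because the goal g enters the dynamics directly, re-running the rollout with new start and end points adapts the learned movement, which is exactly the property described above.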

The Chair of High-performing Humanoid Technologies (H²T) at the Institute of Anthropomatics and Robotics at the KIT researches and develops humanoid robot technologies and systems that perform many different kinds of tasks in the real world in interaction with human beings. The research focus is on the mechano-informatics of humanoid robots, grasping and mobile manipulation supported by vision and touch, learning from the observation of human beings, the modeling and analysis of human movements, active vision and tactile exploration, software and hardware architectures, and system integration.

The term “uncanny valley” refers to an acceptance gap: a sudden drop in a viewer’s acceptance of artificial figures that at first seems paradoxical. Masahiro Mori, a Japanese robotics scientist, first described this phenomenon in 1970. Today it refers to the fact that a viewer’s acceptance of a robot’s technically simulated nonverbal behavior depends on the “reality content” of the carrier (robots, avatars, etc.). This acceptance does not increase in a steady linear fashion with the figure’s similarity to a human being; instead, it decreases sharply within a certain range. Human beings find highly abstract, fully artificial figures more appealing and acceptable than figures whose design is almost, but not quite, human. According to this theory, acceptance drops suddenly once a certain level of anthropomorphism is reached and rises again only at a very high degree of anthropomorphism. Acceptance would be highest if the imitation could no longer be distinguished at all from a real human being.
