
Researchers have designed a system that allows robots to detect when a human needs help.


More and more robotic systems can assist humans with a variety of tasks. However, it is not always easy to determine precisely when a robot should step into a human activity without hindering it. Jason R. Wilson, Phyo Thuta Aung, and Isabelle Boucher took a closer look at the problem and developed a system that lets robots automatically detect whether a person needs help with a task.

This team of researchers from Franklin & Marshall College in Lancaster, Pennsylvania, observed that it is still difficult to involve a robot in human tasks in a relevant way. Very often, research focuses on determining how the robot should help the user, while neglecting when intervention is required. For the scientists, however, this criterion is key to the user experience with a robot.

Building on this observation, they developed a system that allows robots to automatically detect whether a person needs help to carry out a task. Applied to various types of robots, this system would introduce a dimension of autonomy: it would let them "decide for themselves" whether to intervene, depending on the situation in real time, whether the task is assembling a piece of IKEA furniture or baking cookies.

The idea is simple, but the challenge was great. The robot must not intervene too little, which tends to reduce the trust placed in it, but not too much either: in the latter case, it could interrupt the person too often or give them the feeling of being stripped of their autonomy. Another important nuance: the moment a person makes mistakes is not necessarily the moment they want help! The objective here was therefore to focus on interpreting emotions, to capture the moment when the user disengages from a task out of frustration, because they feel they are not going to manage it or are not sure enough of themselves.

To meet this challenge, the researchers decided to rely on two criteria: language and gaze. The robot's physical, face-to-face presence makes non-verbal interactions easier, and these are key to understanding how a human expresses a need.

Detect and categorize gaze changes

To analyze the different gazes of a person performing a task, the team chose the Kurylo and Wilson model, which classifies gaze behaviors into several categories. From this framework, they retained two types of gaze that form the cornerstone of their analysis:

  • Mutual gaze: used to express the desire for the interlocutor to attend to the same thing as oneself.
  • Confirming gaze: quite similarly, this is the gaze that checks whether the interlocutor is actually observing the same thing as oneself.

Concretely, these types of gaze are detected through physical changes: for example, eyes going back and forth between an object and a person who could help, as a way of silently requesting validation … Some studies have shown that this approach can go as far as determining which ingredients a person was about to put in their sandwich! But of course this does not always work: a person may very well ask a question without looking away. The researchers therefore concluded that an approach combining several signals was needed to recognize social cues.
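To make the idea concrete, here is a minimal Python sketch of this kind of gaze-event classification. Everything in it is illustrative rather than taken from the paper: the gaze-target labels, the time window, and the switch threshold are all assumptions, and a real system would sit on top of an actual eye tracker.

```python
from typing import List, Tuple

# Hypothetical input: a time-stamped sequence of gaze targets produced by an
# upstream eye tracker, e.g. "task_object", "helper_face", or "elsewhere".
GazeSample = Tuple[float, str]  # (timestamp in seconds, gaze target label)

def classify_gaze(samples: List[GazeSample], window: float = 3.0) -> str:
    """Label the most recent gaze behavior within a sliding time window.

    "confirming": repeated back-and-forth between the object and the helper's
    face, read here as a silent request for validation; "mutual": a sustained
    look at the helper's face, inviting shared attention; "none": no cue.
    """
    if not samples:
        return "none"
    recent = [target for ts, target in samples if samples[-1][0] - ts <= window]
    # Count alternations between the object and the helper's face.
    switches = sum(
        1 for a, b in zip(recent, recent[1:])
        if {a, b} == {"task_object", "helper_face"}
    )
    if switches >= 2:  # several alternations -> confirmation seeking
        return "confirming"
    if all(target == "helper_face" for target in recent):
        return "mutual"  # sustained eye contact with the potential helper
    return "none"

# Example: the user glances object -> face -> object -> face in quick succession.
trace = [(0.0, "task_object"), (0.8, "helper_face"),
         (1.5, "task_object"), (2.2, "helper_face")]
print(classify_gaze(trace))  # -> "confirming"
```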

Language to express one's needs

The architecture involves two channels, one for audio and one for video processing, which are merged to reach the final conclusion about the need for help. © Jason R. Wilson et al.

The scientists combined gaze patterns with elements of language that are socially associated with the need for help. This can be an explicit expression: "I need help". More subtly, they also took into account hesitation markers, such as "I'm not sure". Interrogative words were integrated into the model as well, along with negations, which usually appear at critical moments in the expression of a need.
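As an illustration of how such lexical cues might be scored, here is a hedged Python sketch. The phrase lists, the weights, and the `language_need_score` function are invented for the example; the authors' actual model presumably detects and weights these cues differently.

```python
import re

# Illustrative cue categories and weights (our assumptions, not the paper's):
# explicit requests, hesitation markers, interrogatives, and negations.
CUES = {
    "explicit":   (0.9, [r"\bi need help\b", r"\bcan you help\b"]),
    "hesitation": (0.4, [r"\bi'?m not sure\b", r"\bum+\b", r"\buh+\b"]),
    "question":   (0.3, [r"\b(what|which|where|how)\b.*\?"]),
    "negation":   (0.3, [r"\b(no|not|doesn'?t|can'?t|won'?t)\b"]),
}

def language_need_score(utterance: str) -> float:
    """Return a 0..1 'need for help' value from a transcribed utterance."""
    text = utterance.lower()
    score = 0.0
    for weight, patterns in CUES.values():
        if any(re.search(p, text) for p in patterns):
            score += weight
    return min(score, 1.0)

print(language_need_score("Um, I'm not sure this piece fits"))  # 0.7: hesitation + negation
```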

In their setup, all these elements, linguistic and visual, are captured by a camera and a microphone: each of these information streams is analyzed to determine a need value. The two values are then merged to let the robot decide whether or not to intervene. But this is not a simple addition: the researchers also made sure to account for the timing of these signals. For example, a quick confirmation glance followed by "OK" may mean that the user is wondering whether they are performing the task correctly.
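The fusion step could look something like the toy sketch below. The combination rule and its synchrony bonus are our assumptions for illustration, not the authors' published formula; the point is only that cues arriving close together reinforce each other.

```python
def fuse_need(speech: float, gaze: float,
              dt: float, sync_window: float = 2.0) -> float:
    """Merge speech and gaze need values (each in 0..1) into one decision score.

    dt is the time in seconds between the two detected cues; cues that arrive
    close together get a bonus, since e.g. a confirming glance followed
    quickly by "OK?" is a stronger signal than either cue alone.
    """
    base = max(speech, gaze)  # either channel alone can raise the score
    bonus = 0.2 * min(speech, gaze) if dt <= sync_window else 0.0
    return min(base + bonus, 1.0)

# A confirming glance (0.5) followed 0.8 s later by a hesitant "OK?" (0.6):
print(fuse_need(speech=0.6, gaze=0.5, dt=0.8))  # -> 0.7, compared against an
# intervention threshold of the robot designer's choosing.
```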

Other models have already made it possible to know when a robot should intervene in a given task. However, they were often based on the robot's knowledge of the task to be performed: it mechanically "knew" whether the person was making a mistake and what stage they were at, and could interact accordingly. The strength of this new method lies in the fact that the data used is not task-specific. It is therefore easily adaptable to all types of robots, and it also lets a person decline help if they do not feel the need, even if their performance is not perfect.

A Lego pterodactyl to test the model

To validate their model, the researchers tested it on 21 people, who had to assemble a Lego pterodactyl from a simple black-and-white photo. One of the pieces had been hidden to increase the difficulty. Three types of robot response were possible depending on the subject's speech and gaze: a verbal prompt, indirect feedback, and direct feedback. This Lego-based experiment proved a success: 90% of participants found the robot's interventions useful, and 86% also felt it was "trustworthy".

To verify the model's adaptability, the scientists also tested the robot with a person baking cookies, and the transfer was quite successful. In the future, they would like to improve the model by also integrating the analysis of facial expressions, hand movements, and posture. They would also like the robot to be able to determine the degree of involvement required, so as not to help a person more than they would like. In short, to let humans do things "like adults"!

The video of the cookie-baking test:

Source: arXiv

