Humans Will Still Follow Robots During An Emergency Even If They Are Wrong

By Ana Verayo | March 01, 2016

Georgia Tech Research Institute (GTRI) Research Engineer Paul Robinette adjusts the arms of the “Rescue Robot,” which was built to study trust between humans and robots in emergencies.


Scientists reveal that humans are apparently too trusting of robots during emergency situations.

In a new study, researchers from Georgia Tech suggest that humans are immediately trusting of any outside source of help during desperate emergencies, a tendency that a malfunctioning or even malicious robot could exploit.


In a fire simulation, participants followed the directions of an "Emergency Guide Robot" even after being told repeatedly that the machine was unreliable, and even when some of them had already seen it malfunction.

The research set out to determine whether humans would trust a robot designed for emergency response, such as one built to help people evacuate from a fire in a high-rise building or a similar emergency.

Researchers were surprised to discover that test subjects continued to follow the robot's instructions even when they already knew it was unreliable.

According to Alan Wagner, a senior research engineer at the Georgia Tech Research Institute, his team suggests that humans may give robots too much credit and trust, assuming the machines possess more intelligence than they actually do. People seem to readily believe that robotic systems know more about the world than they themselves do, he says, and that machines are incapable of making mistakes.

Wagner, who is also the lead author of the research, says that humans defy logic during emergencies: disoriented participants followed the robot's directions to the point that, had this been a real emergency, they could have been led into even greater danger.

A hidden researcher controlled the robot during the experiment, leading the volunteers to the wrong room and traveling in circles several times before bringing them to a conference room. Seven participants were then told by a researcher that the robot was broken. After this, the hallway was filled with artificial smoke, yet participants still blindly followed the robot, ignoring the illuminated exit signs as it led them in the opposite direction.

According to Paul Robinette, a research engineer at Georgia Tech, the team was not expecting this at all: they had assumed that once the robot had proven itself untrustworthy by circling before reaching the conference room, people would choose not to follow it. Yet all the volunteers followed the robot's instructions regardless of its earlier faulty performance.

The researchers concluded that humans are more likely to regard a robot as an authority figure during emergency scenarios. Wagner noted that the study set out to ask whether people would trust rescue robots in a time of need; after seeing the results, he said, the more crucial question is how to prevent humans from trusting robots too much.

This study will be presented at the 2016 ACM/IEEE International Conference on Human-Robot Interaction in New Zealand.

©2024 Telegiz All rights reserved. Do not reproduce without permission