Imagine this: you are not feeling well, perhaps because the changing weather has taken a toll on your health. You took aspirin, but it is not helping, so you decide to visit a clinic for medical advice. And what do you find there? A robot doctor, a digital doctor, at your service. What would you do? Would you trust a robot to treat you? If you are a tech-savvy person, the answer may well be yes.
According to researchers at Penn State University, a person who has high confidence in machine performance, as well as in their own technological capabilities, is more likely to accept and use digital healthcare services and providers.
Automated systems are gaining interest in the medical field, where check-in kiosks are increasingly preferred over human receptionists. S. Shyam Sundar, James P. Jimirro Professor of Media Effects, said, “We investigated user acceptance of these ‘robot receptionists,’ along with automated nurses and doctors. In addition, we tested whether the form that these roles took — human-like, avatar or robot — made a difference in user acceptance.”
Doctors are limited by their human bandwidth: by their experience, their knowledge, and even their state of mind from minute to minute. Machines, by contrast, can be programmed to ‘think’ of all the possible conditions that a patient’s symptoms could point to, according to Sundar. And unlike human doctors, machines never get tired. He therefore believes that the healthcare industry can benefit from increased reliance on automated systems.
To better understand the user psychology behind the acceptance of automation in clinics, the researchers recruited participants from Amazon Mechanical Turk, an online workforce. First, the team gauged the participants’ preconceived beliefs about and attitudes toward machines, known as the “machine heuristic.”
“A machine heuristic involves stereotypes people have about machines, including their beliefs in machines’ infallibility, objectivity, and efficiency,” said Sundar.
The team then asked the participants to indicate their level of agreement with statements such as, “When machines, rather than humans, complete a task, the results are more accurate.” These responses measured the participants’ belief in the machine heuristic. The team also asked a separate set of questions to rate the participants’ “power usage,” or their level of expertise and comfort in using machines.
“We found that the higher people’s beliefs were in the machine heuristic, the more positive their attitude was toward the agent and the greater their intention was to use the service in the future,” said Sundar. “We also found that power usage predicted acceptance of digital healthcare providers. A power user (a person with advanced computer skills) is more likely to accept a robot doctor, for example, than a non-power user.”
The team also noticed a double dose effect of machine heuristic and power usage. “We found that if you’re high on machine heuristic and you’re high on power usage, you have the most positive attitude toward automated healthcare providers,” said Sundar. “This combination seems to make people more accepting of these technologies.”
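The “double dose” effect described above is what statisticians call an interaction: the combined effect of two predictors exceeds the sum of their separate effects. As an illustrative sketch only, using entirely synthetic data (not the study’s dataset, scales, or results), an interaction between two survey measures can be captured by adding their product as a predictor in a linear model:

```python
import numpy as np

# Illustrative sketch only: synthetic data, NOT the study's dataset.
# Two hypothetical 7-point survey scores -- belief in the "machine
# heuristic" (mh) and "power usage" (pu) -- plus their product term,
# which models the "double dose" interaction.
rng = np.random.default_rng(0)
n = 500
mh = rng.uniform(1, 7, n)
pu = rng.uniform(1, 7, n)

# Simulate attitude scores with a positive interaction weight (0.2),
# so high-mh, high-pu respondents get an extra boost.
attitude = 0.4 * mh + 0.3 * pu + 0.2 * mh * pu + rng.normal(0, 0.5, n)

# Fit attitude ~ 1 + mh + pu + mh*pu by ordinary least squares.
X = np.column_stack([np.ones(n), mh, pu, mh * pu])
coefs, *_ = np.linalg.lstsq(X, attitude, rcond=None)
intercept, b_mh, b_pu, b_interaction = coefs
print(f"interaction coefficient ~ {b_interaction:.2f}")  # positive
```

A positive coefficient on the product term is the signature of the pattern Sundar describes: attitude is most favorable when both machine heuristic and power usage are high.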
Overall, the researchers found that people with strong beliefs in machines who were also power users showed a positive attitude toward all forms of digital healthcare provider.
“Our results suggest that the key to implementing automation in healthcare facilities may be to design the interface so that it appeals to expert users who have a high belief in machine abilities,” said Sundar. “Designers can direct resources toward improving features such as chat functionality instead of anthropomorphizing healthcare robots. In addition, increasing the number of power users and the general belief that machines are trustworthy may increase the adoption of automated services.”
The researchers presented their results at the ACM Conference on Human Factors in Computing Systems in Glasgow, Scotland. The research was supported by the National Science Foundation. The team also included Andrew Gambino and Jinyoung Kim, doctoral researchers at Penn State’s Media Effects Research Laboratory.