New technique guides humans when to trust an AI

A method to help workers collaborate with artificial intelligence systems.


AI systems are augmenting the capabilities of human decision-makers across many sectors. A key question is: when should a person trust the AI?

To help people better understand when to trust an AI’s predictions, MIT scientists have created an onboarding technique that guides humans toward an accurate understanding of the situations in which the machine’s predictions can be trusted.

The technique shows how the AI complements people’s capabilities, helping them make better decisions, or reach conclusions faster, when working with the AI.

Hussein Mozannar, a graduate student in the Clinical Machine Learning Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Institute for Medical Engineering and Science, said, “We propose a teaching phase where we gradually introduce the human to this AI model so they can, for themselves, see its weaknesses and strengths. We do this by mimicking the way the human will interact with the AI in practice, but we intervene to give them feedback to help them understand each interaction they are making with the AI.”

Humans make decisions on complex tasks based on past interactions and experiences. That is why the scientists designed an onboarding process that provides representative examples of the human and the AI working together. These examples serve as reference points the human can draw on later.

The scientists first created an algorithm that identifies the examples that will best teach the human about the AI.

Mozannar says, “We first learn a human expert’s biases and strengths, using observations of their past decisions unguided by AI. We combine our knowledge about humans with what we know about AI to see where it will be helpful for humans to rely on AI. Then we obtain cases where we know the human should rely on the AI and similar cases where the human should not rely on the AI.”
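In code, the selection step Mozannar describes might look roughly like the sketch below (Python). This is not the paper’s implementation; the `Example` fields and the ranking by an estimated accuracy gap are assumptions used only to illustrate how one could pick matched cases where the human should, and should not, rely on the AI.

```python
# Sketch of exemplar selection for teaching when to defer to an AI.
# NOT the paper's implementation; `human_correct` and `ai_correct`
# are illustrative estimates (e.g., from past unaided decisions and
# a validation set, respectively).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Example:
    prompt: str            # e.g., a passage plus a question
    human_correct: float   # estimated P(human answers correctly) without AI help
    ai_correct: float      # estimated P(AI answers correctly)

def select_exemplars(pool: List[Example], k: int = 5) -> Tuple[List[Example], List[Example]]:
    """Return k examples where relying on the AI helps most,
    and k examples where relying on the AI hurts most."""
    # Benefit of deferral: how much better the AI is expected to be than the human.
    ranked = sorted(pool, key=lambda ex: ex.ai_correct - ex.human_correct)
    should_not_rely = ranked[:k]   # AI estimated to be much worse than the human
    should_rely = ranked[-k:]      # AI estimated to be much better than the human
    return should_rely, should_not_rely
```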

The team tested their technique on a passage-based question-answering task: the user receives a written passage and a question whose answer is contained in the passage. The user can either answer the question themselves or click a button to ‘let the AI answer.’

The AI’s answer is not visible in advance, so the user must rely on their mental model of the AI when deciding whether to defer.
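A rough sketch of that interaction, assuming a simple command-line stand-in for the study interface (the function and argument names are hypothetical), might look like this: the AI’s answer is held back until the user has committed to answering or deferring.

```python
# Sketch of a single trial: the user either answers the question or
# defers to the AI WITHOUT seeing the AI's answer first.
# `run_trial` and its arguments are illustrative placeholders.

def run_trial(passage: str, question: str, ai_answer: str, true_answer: str) -> bool:
    """Return True if the final (human-or-AI) answer is correct."""
    print(passage)
    print(question)
    choice = input("Type your answer, or press Enter to let the AI answer: ")
    if choice.strip() == "":
        final_answer = ai_answer   # user deferred based only on their mental model of the AI
    else:
        final_answer = choice      # user answered themselves
    return final_answer.strip().lower() == true_answer.strip().lower()
```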

The onboarding process they developed begins by showing these examples to the user, who works through each one, deciding whether to answer or to rely on the AI. The human may be right or wrong, and the AI may be right or wrong, but in either case, after solving the example, the user sees the correct answer and an explanation of why the AI made its prediction.

Mozannar said, “To help the user retain what they have learned, the user then writes down the rule they inferred from the teaching example. The user can later refer to these rules while working with the agent in practice. These rules also constitute a formalization of the user’s mental model of the AI.”
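The onboarding loop he describes (attempt, then feedback, then a written rule) could be sketched as follows; the dictionary fields and prompts are illustrative assumptions, not the authors’ code.

```python
# Sketch of the onboarding loop: the user attempts each teaching example,
# then sees the correct answer and an explanation of the AI's prediction,
# and finally writes down the rule they infer about when to trust the AI.
from typing import Dict, List

def onboarding(teaching_examples: List[Dict]) -> List[str]:
    """Each example dict is assumed to hold 'passage', 'question',
    'ai_answer', 'true_answer', and 'explanation'. Returns the user's
    written rules, an explicit record of their mental model of the AI."""
    rules = []
    for ex in teaching_examples:
        print(ex["passage"])
        print(ex["question"])
        input("Your answer (or press Enter to defer to the AI): ")
        # Feedback phase: reveal the ground truth and why the AI answered as it did.
        print("Correct answer:", ex["true_answer"])
        print("AI answered:", ex["ai_answer"], "because:", ex["explanation"])
        # The user records the lesson they take away from this example.
        rules.append(input("What rule do you infer about when to trust the AI? "))
    return rules
```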

When testing the technique on three groups of participants, the scientists found that:

  • About 50 percent of participants who received training wrote accurate lessons about the AI’s abilities.
  • Participants who wrote accurate lessons were right on 63 percent of the examples.
  • Those whose lessons were inaccurate were right on 54 percent.
  • Participants who received no teaching, but could see the AI’s answers, were right on 57 percent of the questions.

Mozannar said, “When teaching is successful, it has a significant impact. That is the takeaway here. When we can teach participants effectively, they can do better than if you gave them the answer.”

Journal Reference:

  1. Hussein Mozannar et al. Teaching Humans When To Defer to a Classifier via Exemplars. arXiv:2111.11297v2
