Using Explainable Artificial Intelligence for Human-Computer Interaction, Part 1

Arnav Kartikeya
Oct 10, 2021 · 2 min read

When pilots are in spatially disorienting situations, times when they can’t tell up from down or left from right, they tend not to listen to their onboard computer system telling them to turn in a certain direction. In life and death situations, or even in situations of uncertainty, humans tend not to trust computers.

Artificial intelligence is not free of the same problem. It is still a computer system, and one that can offer completely incorrect decisions, which often leads humans to trust artificial intelligence even less than a conventional computer algorithm. While perfecting machine learning and artificial intelligence models to near-100% accuracy is a daunting and seemingly impossible task, there is another solution. Prior research in the field of human-computer interaction suggests that increasing the transparency of a computer system increases trust. For example, displaying on screen the confidence the model assigns to each possible decision, not just the one it chose, is understood to increase trust.
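As a minimal sketch of that idea (using a scikit-learn-style classifier and made-up data purely for illustration), a transparent interface would surface the model's confidence in every option rather than only its recommendation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary decision: "turn left" (class 0) vs. "turn right" (class 1),
# trained on made-up features just to produce confidence values.
X = np.array([[0.1, 0.9], [0.8, 0.2], [0.4, 0.6], [0.9, 0.3]])
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

new_situation = np.array([[0.3, 0.7]])
probs = model.predict_proba(new_situation)[0]

# Show the confidence behind both possible decisions, not just the winner.
for label, p in zip(["turn left", "turn right"], probs):
    print(f"{label}: {p:.0%} confidence")
print("recommendation:", "turn right" if probs[1] > probs[0] else "turn left")
```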

There exists the issue, however, that what is understood to increase trust does not necessarily increase trust. The only way to determine whether trust actually increases is to have a measurement of trust. The literature typically uses Likert scales, in which survey takers fill in a number between 1 and 7 based on how much they agree with a statement. A very simple example is:

Rate the following statement from 1 to 7, where 7 means “I completely agree with the statement”: I trust this computer to make decisions for me.

Obviously no proper research paper would ask such a simple question and use it as a measurement of trust, but they still use the same methodology: asking subjects to report how much trust they feel toward a system. A common survey for this task is the Trust in Automation Questionnaire.
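A hedged sketch of how such Likert responses might be aggregated into a single trust score is below; the item wording and the reverse-scoring of the negatively worded item are illustrative assumptions, not the exact wording or scoring of the Trust in Automation Questionnaire.

```python
# Illustrative 1-7 Likert responses from one participant.
responses = {
    "I trust the system's decisions.": 6,       # positively worded
    "The system is dependable.": 5,             # positively worded
    "I am wary of the system's output.": 3,     # negatively worded
}
reverse_scored = {"I am wary of the system's output."}

def trust_score(responses, reverse):
    scores = []
    for item, rating in responses.items():
        # On a 1-7 scale, reversing maps 1<->7, 2<->6, and so on.
        scores.append(8 - rating if item in reverse else rating)
    return sum(scores) / len(scores)

print(f"self-reported trust score: {trust_score(responses, reverse_scored):.2f} / 7")
```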

There are a few flaws with this approach, chief among them that it relies on users deciding how much they trust a system rather than observing how much they actually trust it. This is a problem I saw in previous literature and wanted to address, with the power of artificial intelligence.

Part 2, coming out soon, will explain how I chose to combat this issue in my own research paper. If you would like to read the paper, here is the link: https://arxiv.org/abs/2108.04770
