Even though many studies show that models beat experts at almost everything, from diagnosing cancers to predicting the longevity of the fat, green-toed sloth, those same studies show that we suffer from algorithm aversion.
This is very relevant to quants (traders who use algorithms to trade). Following and maintaining confidence in the model is the foundation of quantitative trading.
I think most traders, from a cognitive point of view, say: of course I have confidence in the backtesting and the research, and I will follow the rules no matter what happens. But when the rubber hits the road and we are in a prolonged drawdown, most would also admit they experience angst and have a very difficult time staying the course – sticking to the rules.
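To see why staying the course matters, here is a minimal sketch, in Python, of a positive-expectancy system. The win rate and payoffs are made-up illustrative numbers, not any real strategy: each trade wins 2 units 40% of the time and loses 1 unit otherwise, for an expectancy of +0.2 units per trade. A trader who abandons the system after a short losing streak forfeits that edge.

```python
import random

random.seed(42)

def run(trades, follow_rules, stop_after_losses=3):
    """Simulate a toy positive-expectancy strategy.

    Each trade wins +2 units with 40% probability, else loses 1 unit.
    If follow_rules is False, the trader abandons the system after
    `stop_after_losses` consecutive losses and sits out the rest.
    """
    equity, losing_streak = 0.0, 0
    for _ in range(trades):
        if not follow_rules and losing_streak >= stop_after_losses:
            break  # trader loses confidence and walks away
        if random.random() < 0.40:
            equity += 2.0       # winning trade
            losing_streak = 0
        else:
            equity -= 1.0       # losing trade
            losing_streak += 1
    return equity

print(run(500, follow_rules=True))   # rides out every drawdown
print(run(500, follow_rules=False))  # quits after 3 straight losses
```

With a 60% loss rate, three consecutive losses arrive early in almost every run, so the rule-abandoning trader typically walks away near break-even while the rule-follower compounds the edge over the full 500 trades.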
Today, I would like to shed some light on this cognitive dissonance: why we give so much more leeway to our human counterparts, and to our own thought processes, than we extend to our algorithms.
A recent paper by Dietvorst et al. (2014), titled “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err” examines this phenomenon.
Now I won’t ask you to read the whole paper, so let me do the heavy lifting and summarize it for you.
First of all, the abstract from the paper: “Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster.
“This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. The paper shows that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster.
“This is because people are quicker to lose confidence in algorithms than human forecasters after seeing them make the same mistake.
“In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.”
“Thus, across the vast majority of forecasting tasks, algorithmic forecasts are more accurate than human forecasts.”
Yet we continue to observe that experts and laypeople alike are steadfastly resistant to algorithmic outcomes, and frequently elect to use forecasts made by inferior humans (themselves) rather than forecasts made by a superior algorithm.
Why are we averse to algorithms?
Some of the reasons cited for this oddity are:
1. The desire for perfect forecasts;
2. The inability of algorithms to learn;
3. The presumed ability of human forecasters to improve through experience;
4. The notion that algorithms are dehumanizing;
5. The notion that algorithms cannot properly consider individual targets;
6. Concerns about the ethicality of relying on algorithms to make important decisions; and
7. The presumed inability of algorithms to incorporate qualitative data.
On the one hand, these writings offer thoughtful and potentially viable hypotheses about why algorithm aversion occurs. On the other hand, the absence of empirical evidence means that we lack real insight into which of these (or other) reasons actually drive algorithm aversion, or into when people are most likely to exhibit it.
By identifying an important driver of algorithm aversion, this research begins to provide that insight.
Here’s an example. Imagine that you are driving to work via your normal route. You run into traffic and predict that a different route will be faster. You get to work 20 minutes later than usual, and you learn from a coworker that your decision to abandon your route was costly; the traffic was not as bad as it seemed.
Many of us have made mistakes like this one, and most would shrug it off. Very few people would decide to never again trust their own judgment in such situations. Now imagine the same scenario, but instead of you having wrongly decided to abandon your route, your traffic-sensitive GPS made the error. Upon learning that the GPS made a mistake, many of us would lose confidence in the machine. We would be reluctant to use it again in a similar situation.
It seems that the errors that we tolerate in humans become less tolerable when machines make them.
The paper reports five separate studies, and as expected the models outperformed the participants in all five. The results were actually quite astounding: participants’ confidence ratings revealed that they “learned” more from the model’s mistakes than from the human’s.
Seeing the model make relatively small mistakes consistently decreased confidence in the model, whereas seeing a human make relatively large mistakes did not consistently decrease confidence in the human.
It is interesting to note that even when confidence in the model “equaled” confidence in the human, participants chose to tie their forecasts to the human. They only chose the statistical model over the human when they were more confident in the model than in the human.
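That asymmetry can be caricatured in a few lines of code. This is purely illustrative, not the paper’s actual model, and the penalty rates are made-up numbers: confidence in the machine is debited heavily per unit of error, confidence in the human only lightly, so the machine ends up less trusted even though its mistakes are smaller.

```python
def update_confidence(conf, error, penalty):
    """Toy rule: knock confidence down in proportion to observed error."""
    return max(0.0, conf - penalty * error)

# Hypothetical forecast errors: the model's mistakes are smaller.
errors_model = [0.5, 0.4, 0.6]
errors_human = [1.5, 1.2, 1.8]

conf_model, conf_human = 0.8, 0.8   # start with equal confidence
for e_m, e_h in zip(errors_model, errors_human):
    conf_model = update_confidence(conf_model, e_m, penalty=0.30)  # harsh on machines
    conf_human = update_confidence(conf_human, e_h, penalty=0.05)  # lenient on humans

print(conf_model, conf_human)  # model ends up less trusted despite smaller errors
```

Because errors are penalized at different rates, the better forecaster loses the confidence contest – a compact way to picture the pattern the study documents.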
This is a very interesting and relevant study, but we are running out of time and space here, so I’ll finish this off in the next blog post.
ALWAYS FOLLOW THE RULES
Check out the Performance Matrix for our algorithms and sign up today.