In the previous post we started talking about Algorithm Aversion and here we continue that discussion.
Interestingly, last week I had my own experience with Algorithm Aversion. I was headed to a location on a highway with no street address, just a house number. Since I didn’t know where I was going, I plugged it into Google Maps’ GPS. When I came to an intersection, Google Maps told me to turn left, so I did.
After 8 miles, I reached the spot where the GPS said I had arrived, but lo and behold, there was nothing but wide-open fields and a little old lady riding a bike.
According to Google Maps, I was where I was supposed to be, but the house I was trying to find obviously was not.
I tried to call the person who gave me the address but had no cell reception. I pulled a 180 and headed back, a few choice words emanating from my mouth. Once I got cell reception, I found out I was supposed to turn right at the intersection, not left. I immediately started thinking, I’ll never rely on Google Maps ever again.
Then I realized, hmmm, I was doing the exact same thing my article, which had been delivered to your inbox just hours before, was talking about. It seems I’m just as susceptible to algorithm aversion as anyone.
The aversion to algorithms is costly, not only for the participants in Dietvorst’s studies noted last week but for society at large. Many decisions require a forecast, and algorithms are almost always better forecasters than humans. The ubiquity of computers and the growth of the “Big Data” movement have accelerated the spread of algorithms, but many people still resist using them.
Dietvorst’s study “Algorithm Aversion” showed that this resistance arises, at least in part, from a greater intolerance for error from algorithms than from humans.
People are more likely to abandon an algorithm than a human judge for making the same mistake. This is enormously problematic, as it is a barrier to adopting superior approaches to a wide range of important tasks. It means, for example, that people will more readily forgive a stock picker for picking the wrong stock than a stock-trading algorithm for making an error, even when the algorithm makes far fewer such errors.
If you are trying to beat the market through equity selection, you have three options: use a model to pick stocks, use a human called “Steve Top Picker” to pick your stocks for you, or combine the two. In all cases, the model alone will prevail.
Inevitably, the model will underperform at some point, since no strategy wins all the time. If a strategy never failed, everyone would invest in it and the edge would cease to exist.
When a model underperforms for a certain period, that does not mean the model is inherently broken. A model can fail over some stretch of time while its long-term statistical “strength” remains intact.
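To make that concrete, here’s a minimal sketch in Python. The numbers are hypothetical, not any actual strategy’s statistics: it simulates a model with a genuine positive edge (a 55% win rate on even one-unit bets) and counts how many quarters still end in the red purely by chance.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# Hypothetical edge: each trade wins 55% of the time, gaining or
# losing 1 unit, so the expected value per trade is +0.10 units.
WIN_RATE = 0.55
TRADES_PER_QUARTER = 60
QUARTERS = 40  # ten years of quarterly results

losing_quarters = 0
total_pnl = 0
for _ in range(QUARTERS):
    # Quarterly P&L: sum of 60 independent +1/-1 trades.
    pnl = sum(1 if random.random() < WIN_RATE else -1
              for _ in range(TRADES_PER_QUARTER))
    total_pnl += pnl
    if pnl < 0:
        losing_quarters += 1

print(f"Total P&L over {QUARTERS} quarters: {total_pnl:+d} units")
print(f"Losing quarters: {losing_quarters} of {QUARTERS}")
```

With these assumed numbers, roughly a fifth of simulated quarters can come out negative even though the long-run expectation is clearly positive — which is exactly why judging a model on one bad quarter is premature.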
Steve, the human stock picker, will also underperform at some point. However, Steve can probably tell a better story over a beer as to why he missed the mark on last quarter’s earnings: stories about that pesky SEC investigation, and so on. And since drinking a beer with stock-picker Steve is a lot more fun than drinking a beer with an HP desktop, we will probably give Steve the benefit of the doubt.
We’re the Plan in “Plan your Trade and Trade your Plan” – TraderJanie
Take our 30-day free trial and see why we can say, “If awesome were inches, we’d be the Eiffel Tower.”