Why Investors Should Be Wary of Automated Advice

Many of us routinely—and even blindly—rely on the advice of algorithms in all aspects of our lives, from choosing the fastest route to the airport to deciding how to invest our retirement savings.

But should we trust them as much as we do? Research suggests maybe we shouldn't, especially when it comes to high-stakes financial decisions.

I recently conducted an empirical study focused on automation bias, or our preference for using algorithmic decisions over those of human experts, and its impact in the area of consumer finance. I provided 800 U.S. survey takers with a series of hypothetical investment situations. Some were told the advice being provided came from a human adviser, while others were told the recommendations came from an automated online algorithm.


The survey takers who thought they were getting advice from an algorithm consistently reported having more confidence in the recommendations than those who thought they were being advised by a human expert.

As a follow-up, all of the study participants were told that the financial advice they received had led to disappointing investment performance. Yet when asked again to rate their level of confidence in their advisers, and whether they would be likely to use them again, the survey takers continued to favor the robo advisers over the human experts.

No second opinion

Why is this the case?

In real life, there are several reasons why some investors might prefer robo advisers over human ones. Robos are available 24/7, can be accessed from the comfort of a customer's bedroom and don't require setting up a meeting or phone call in advance. They also are cheaper.

But none of that explains the results of my study, since I hadn't told survey takers that the algorithms were cheaper or more accessible. Instead, the results likely reflect the fact that many people perceive algorithms to be a superior authority. They view an algorithm like a math equation—an objective process that always spits out the correct answer after doing its calculation.

This bias seems especially strong in the area of consumer finance, where investors are constantly told to look at data objectively and not let emotions drive their decision-making. Algorithms don't have emotions, so they must always be objective and rational—or so the thinking goes.

That perception, however, is misguided. People often overlook the fact that algorithms are designed by humans who choose what data to use and how to use it—and those humans are just as fallible as human advisers. Coders can consciously or unconsciously embed biases into algorithms.
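
To see how a coder's choices shape an algorithm's "objective" answer, consider a minimal, purely hypothetical sketch in Python. Neither function below reflects any real robo adviser; every rule, weight and number is invented for illustration, to show that two designers encoding different assumptions about risk will hand the same investor different recommendations.

```python
# Purely hypothetical: two "robo advisers" that see the same investor
# but embed different designer-chosen assumptions. All rules and numbers
# here are invented for illustration only.

def robo_a(age: int, savings: float) -> str:
    # Designer A tapers stock exposure with age but imposes a 20% floor.
    equity_share = max(0.20, (100 - age) / 100 - 0.10)
    return f"Put {equity_share:.0%} of ${savings:,.0f} in stocks"

def robo_b(age: int, savings: float) -> str:
    # Designer B uses the old "100 minus age" rule of thumb with no floor,
    # a different human judgment baked into the code.
    equity_share = (100 - age) / 100
    return f"Put {equity_share:.0%} of ${savings:,.0f} in stocks"

# The same 70-year-old investor gets two different "objective" answers.
print(robo_a(70, 500_000))  # Put 20% of $500,000 in stocks
print(robo_b(70, 500_000))  # Put 30% of $500,000 in stocks
```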

So although algorithmic advisers certainly have contributed to the field of personal finance, consumers' increasing deference to algorithmic results also raises several concerns.

One is that as people outsource more of their decision making to algorithms, their desire to seek second opinions weakens. Because they believe they already have an objective algorithmic opinion, they may think that seeking another one is pointless. But just as no two human advisers are likely to provide a customer with the exact same investment results, investment algorithms can vary greatly in performance. So getting a second opinion via a different algorithm is both rational and important. That's especially true for high-stakes decisions—say, a risky, yet potentially rewarding business opportunity—or when an investor doesn't like the first recommendation but is unqualified to evaluate how sound it really is.

Another concern is that our increasing dependence on, and trust in, algorithms could lead to less risk-taking, creativity and critical thinking in finance and in society overall.

Indeed, if people start accepting algorithmic results as fundamental truths, they may be less willing to take long-shot bets on promising startups or go against the odds in pursuit of innovative solutions to problems. Such risky bets can result in failures. But they also can lead to tremendous success.

Fighting back

Fortunately, there are ways to reduce the risks associated with automation bias.

For starters, lawmakers, educators and the media should nudge people to seek second opinions—human or not—on the recommendations they get from algorithms, especially those whose inner workings and biases are difficult to assess.

Regulators also should consider requiring robo advisers and other automated consumer-finance products to disclose that there can be biases in the algorithms they use, similar to how drug firms must disclose the medical risks associated with their products. Simply instilling a minimal dose of informed skepticism could help reduce some of the effects of automation bias.

Taking it a step further, regulators may want to require the disclosure of the assumptions used by human coders who develop algorithms, as well as the data sets the algorithms use and don't use, to make people more aware of any biases. Alternatively, alerting people to the existence of competing algorithms may increase awareness that algorithms differ based on human choices.

In the interim, people need to remember that there is no absolutely correct algorithm. Instead, investors should treat robo advisers just as they treat human ones—by seeking out reviews, recommendations and evaluations. Algorithms are fallible, and it is good practice to seek a second, different opinion, even if it is the opinion of another algorithm.

Dr. Packin is an assistant professor of law at Baruch College's Zicklin School of Business at the City University of New York. Email her at reports@wsj.com.