Podcast: On Taking Advice from Algorithms

I opened an account with Wealthfront nearly 7 years ago and haven’t looked back. I was drawn in by the simplicity of the process and the benefits of their robo-advisor (an automated investing algorithm). Within a few days, I had opened and funded a new Roth IRA account I felt confident in.

Yet, when I mentioned this to some close friends and family in the weeks and months that followed, I received mixed responses. “You trust that?” asked one family member. Or, “That’s definitely the way to invest these days,” chimed in a friend who had just set up an account with a different robo-advisor.

Underlying these responses were judgements about the kind of person I am, the values I hold, and my goals as an investor. Choosing to trust an algorithm over a human financial advisor conveyed something about me. Was I trying to save money, avoid seemingly unnecessary conversations, or contribute to the shrinking of the financial advisory field? Or was I merely prioritizing the speed of opening an account and trying to maximize my returns?

Very little research has been done to date on the social implications of taking advice from algorithms vs. humans. That sparked the interest of several academics in Europe last year. In May 2023, they published a study in the International Journal of Human-Computer Interaction called “Taking Algorithmic (Vs. Human) Advice Reveals Different Goals to Others.”

I spoke with them in a recent episode of Mediated World, a podcast I’m hosting about technology and communication. In the episode, they discuss their findings and the areas they’re curious to explore in future research. You can listen on Apple or Spotify.

Insights

People make judgements of others. It’s an inescapable fact. So, the researchers wanted to know what judgements people make about the goals of those who choose to take an algorithm’s advice over a human’s. After conducting five studies, they found that:

“Observers attribute the primary goal that an algorithm is designed to pursue in a situation to advice seekers. As a result, when explaining advice seekers’ subsequent behaviors and decisions, primarily this goal is taken into account, leaving less room for other possible motives that could account for people’s actions.”

The authors build this framing on attribution theory, which essentially states that “people often go beyond the information given when making sense of the goals that others pursue.” Further, they argue that the reasons an algorithm favors an option or provides a certain recommendation differ from the reasons a human advisor favors the same option. So, a person’s goals when taking an algorithm’s advice are likely judged differently than those of a person who takes the same advice from a human.

Additionally, human advice is held to a different standard. While algorithms are expected to be driven by one primary motivation, human advisors are expected to consider secondary information. We expect a robo-advisor to pursue maximum returns or profit. However, we expect a human advisor to do this, plus consider other factors, like the sustainability of portfolio companies or the locations where those companies are based.

Consequently, the goals of people who take human advice are perceived as more multifaceted. This is not always a good thing. The authors discuss a few counterintuitive examples, which I suggest reading if this piques your interest.

As the authors note, an important area for future research is to explore whether “observers’ assessments, in fact, correspond with the actual motives that advice seekers pursue when taking algorithmic vs human advice.” Additionally, future research should look at new domains where algorithms are being piloted.

In short, we judge each other based on where we get our advice, and we attribute the (inferred or implied) goals of the algorithm (or human) to the advice seeker. How you receive advice or recommendations, where they come from, and whether you take them has social implications.

This matters because, as social beings, most of us care about the way we’re perceived by others. Whether you’re a clinician taking advice from an AI on a patient’s treatment course or an individual investor relying on a robo-advisor, AI is increasingly playing a role in our lives. We should, at a minimum, recognize the social value and consequences of the advice we take.
