With explicit feedback, AI needs less data than you think

We’ve all come to appreciate that AI and machine learning are the magic sauce powering large-scale consumer internet properties. Facebook, Amazon and Instacart boast enormous datasets and huge user counts. Common wisdom suggests that this scale advantage is a powerful competitive moat: it enables far better personalization, recommendations and, ultimately, a better user experience. In this article, I will show you that this moat is shallower than it seems, and that alternative approaches to personalization can produce outstanding outcomes without relying on billions of data points.

Most of today’s user data is from implicit behaviors

How do Instagram and TikTok understand what you like and don’t like? Sure, there are explicit signals — likes and comments. But the vast majority of your interactions aren’t explicit; they’re your scrolling behavior, “read more” clicks, and video interactions. Users consume far more content than they produce, so the key signals these platforms use to determine what you liked and didn’t like come from those implicit cues. Did you unmute that Instagram video and watch it for a whopping 30 seconds? Instagram can infer that you’re interested. Scrolled past it to skip? OK, not so much.

Here’s a key question, though: Does Instagram know why you unmuted that cat on a motorcycle video? Of course, they don’t — they just observed the behavior, but not the why behind it. It could be that you saw a familiar face in the first frame and wanted to see more. Or because you’re into motorcycles. Or into cats. Or you clicked accidentally. They can’t know due to the structure of the user experience and the expectations of the customer. As such, to figure out if it was the cats, or the motorcycles, or something altogether unrelated, they need to observe a lot more of your behaviors. They’ll show you motorcycle videos and separately, cat videos, and that can help increase their confidence a bit more. 
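To make the problem concrete, here is a toy sketch of implicit-signal inference. The behaviors, weights and unmute bonus are invented for illustration, not Instagram’s actual model; the point is that behavior alone yields a score for whether you were interested, never why:

```python
# Toy sketch of implicit-signal inference. The inputs, weights, and the
# unmute bonus are illustrative assumptions, not any platform's real model.

def implicit_interest(unmuted: bool, watch_seconds: float,
                      duration_seconds: float, skipped: bool) -> float:
    """Infer a 0..1 interest score from observed behavior alone."""
    if skipped:
        return 0.0  # scrolling past is treated as a strong negative signal
    # Watch-time ratio, capped at 1.0 for replays
    score = min(watch_seconds / duration_seconds, 1.0)
    if unmuted:
        # Unmuting is a deliberate act, so bump the score a little
        score = min(score + 0.2, 1.0)
    return score
```

The score says the user was interested; it carries no information about whether it was the cat, the motorcycle, or a familiar face.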

To add to this issue, the platform doesn’t just detect “cats” and “motorcycles” in this video — there are dozens, if not hundreds, of features that might explain why you were interested. Without a taxonomy defining the space well, a deep-learning approach that doesn’t require one (i.e., explicit feature definitions) needs orders of magnitude more data.


Advancing human-computer interactions

You can see how fragile and data-hungry this approach is — all because it’s based on implicit behavioral inference. 

Let’s evaluate an alternative approach to understanding the user’s intent with an analogy. Imagine a social interaction where person A is showing this same video to person B. If person B just says “that’s awesome,” can A infer much about B’s preferences? Not much. What if instead, A digs in with “What about it did you like?” A lot can be inferred from the answer to this question. 

How can this interaction be translated into the world of human-computer interactions? 

Explicit feedback: Just ask the user!

Let’s look at rideshare. A key requirement in that business is ensuring driver quality; a driver who creates a poor rider experience needs to be removed from the system quickly, because otherwise they can do real damage to the company. Thus, a very simple model appeared: Uber asked the rider to rate the driver after each ride, and a driver whose average rating falls below 4.6 is expelled from the Uber system.
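This first-iteration model fits in a few lines. The 4.6 cutoff is the one described above; the function name and the handling of drivers with no rides yet are illustrative assumptions:

```python
# First-iteration model: a single average-rating threshold decides whether
# a driver stays on the platform. Only the 4.6 cutoff comes from the text;
# everything else here is an illustrative assumption.

DEACTIVATION_THRESHOLD = 4.6

def driver_status(ratings: list[int]) -> str:
    """Return 'active' or 'deactivated' based on the average star rating."""
    if not ratings:
        return "active"  # no rides yet, so no evidence either way
    average = sum(ratings) / len(ratings)
    return "active" if average >= DEACTIVATION_THRESHOLD else "deactivated"
```

Note how blunt this is: a steady run of four-star rides, each of them perfectly “fine,” is enough to end a driver’s tenure.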

And yet, hiring and onboarding drivers is an expensive endeavor; with bonuses as high as $1,000 for a new Uber driver, it’s quite inefficient to fire drivers for offenses that they could have easily addressed. 

In a model based on a one- to five-star rating, a driver is either “basically perfect” or “eventually fired.” This lack of nuance is bad for business. What if a driver commits a very fixable offense of regularly eating in their car, and as such, their car smells for a few hours after lunch? If only there were some way for riders to indicate that in their feedback, and for the oblivious driver to learn about it…  

This is exactly what Uber pursued in the second iteration of its feedback system. Whenever a rider rates a trip four stars or below, they are required to select a reason from a dropdown list. One of those reasons is “car smell.” If a handful of riders — out of dozens of rides that a driver gives! — provide explicit car smell feedback, the driver can be made aware and fix it. 

What are the key characteristics of this dramatically more efficient approach? 

  • Defined taxonomy: Uber’s rider experience specialists defined different dimensions of the rider experience. What are the reasons a rider can be unhappy after a ride? Car smell is one; there are half a dozen others. This precise definition is possible because the problem space is constrained and well understood by Uber. These reasons wouldn’t be relevant for food delivery or YouTube videos. Asking the right questions is key. 
  • Explicitly asking the user for the WHY behind the feedback: Uber is not guessing why you rated the ride one star — was it because of the peeling paint on the car or because the driver was rude? Unlike Instagram, which would just throw more data at the problem, Uber can’t expose a few dozen customers to a bad driver, so the data volume constraints force them to be clever. 
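A minimal sketch of this second-iteration approach, assuming a hypothetical taxonomy and alert threshold (“car smell” is from the example above; the other reasons and the threshold of three reports are invented for illustration):

```python
from collections import Counter

# Hypothetical taxonomy of low-rating reasons. "Car smell" comes from the
# example above; the other reasons and the alert threshold are invented.
REASONS = {"car smell", "rude driver", "unsafe driving",
           "car condition", "route issues", "cleanliness"}
ALERT_THRESHOLD = 3  # "a handful" of matching reports triggers coaching

def coaching_alerts(feedback: list[str]) -> list[str]:
    """Return the taxonomy reasons a driver should be coached on."""
    counts = Counter(r for r in feedback if r in REASONS)
    return [reason for reason, n in counts.items() if n >= ALERT_THRESHOLD]
```

With this structure, three “car smell” reports out of dozens of rides surface a fixable problem, instead of silently dragging the driver toward deactivation.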

There are wonderful examples in domains other than rideshare. 

Hotels.com inquires about your experience shortly after check-in. It’s a simple email survey. Once you click “great,” they ask “What did you like?” with options like “friendly staff” and “sparkling clean room.”

Hungryroot, the company where I work, asks the user about their food preferences during signup in order to make healthy eating easy. Want to eat more vegetables? Love spicy foods? Prefer to be gluten-free? Great, tell us upfront. Recommendations for your groceries and recipes will be based on what you told us. 
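A preference-driven recommender along these lines can be sketched simply. This is not Hungryroot’s actual system; the item schema and preference keys are invented for illustration:

```python
# Illustrative only: the item schema and preference keys are invented,
# not Hungryroot's actual data model.

def recommend(items: list[dict], prefs: dict) -> list[dict]:
    """Filter by stated hard constraints, then rank by stated likes."""
    liked = set(prefs.get("likes", []))
    # Hard constraint: drop anything that violates a stated dietary need
    eligible = [i for i in items
                if not (prefs.get("gluten_free") and not i["gluten_free"])]
    # Soft ranking: items sharing more liked tags come first
    eligible.sort(key=lambda i: -len(liked & set(i["tags"])))
    return eligible

meals = [
    {"name": "spicy chickpea bowl", "gluten_free": True,
     "tags": ["spicy", "vegetables"]},
    {"name": "wheat pasta bake", "gluten_free": False, "tags": ["comfort"]},
]
picks = recommend(meals, {"gluten_free": True, "likes": ["spicy"]})
```

Because the preferences were stated explicitly, a single signup answer does the work that thousands of observed clicks would otherwise have to approximate.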

This approach is dramatically more effective. It requires less data, and the inference drawn from each data point can be much stronger. It also doesn’t require creepily observing what the user clicks on or scrolls past — the kind of snooping tech giants have gotten in trouble for.

It’s important to note a tradeoff here. Implicit feedback mechanisms require no user effort at all; on the other hand, going too far when asking the user for explicit feedback can create an annoyance. Imagine Uber overdoing it with the follow-up questions: “What exactly was the bad smell in the car? Did that smell bother you the whole ride or a part of it? Was it a strong smell?” This crosses from helpful and caring to irritating and would surely backfire. There’s definitely a sweet spot to be found. 

Moats built on implicit user data are quite shallow

Don’t be afraid of an incumbent with an implicit data advantage. Build a taxonomy of your space and ask the users for explicit feedback. Your users will appreciate it — and so will your bottom line. 

Alex Weinstein is the chief digital officer at Hungryroot. Previously, he served as senior vice president of growth at Grubhub. Alex holds a computer science degree from UCLA.

Originally appeared on: TheSpuzz
