At Uber in the spring of 2020 I (Duncan) moved over to Uber Eats. It was the early pandemic and the Eats business was growing explosively. I was tasked with leading the Marketplace Data Science team, building the algorithms that power the realtime Eats marketplace. We had a stellar group, packed with Eats veterans and a surge of talent moving over from Rides to bolster our ranks.
The excitement – and the stress – were considerable. The tech was much less mature than what we had on Rides, and the domain was different in important ways. I have vivid memories of lying awake at night, a single question swirling in my brain: are we solving the right problems?
Getting that part wrong was the number one way for me to screw it all up. Getting that wrong was the way to waste the time of the hundreds of combined scientists and engineers across the team. It was the way to get chewed out in front of Dara, our CEO; it was the way to get myself fired.
I knew that because I had seen it happen countless times before – where a talented team spends months lovingly grinding out models that deliver zero business value because they were solving the wrong problem.
There’s a simple yet subtle reason this happens. Scientists want to do what they know best – they want to fire up PyTorch, make predictions and test hypotheses. For scientists:
It’s easier to build a model than solve a real business problem
And so months get wasted, trust gets burned, and opportunity gets lost.
Making this mistake is getting the problem framing wrong. So today, let’s give problem framing the attention it deserves: what it is, how it goes wrong, what it costs data leaders, and how to get it right.
What problem framing means
To get concrete, here are a few examples of problem framing gone wrong:
Wrong metric: A streaming platform’s data team builds a model to personalize content recommendations, but only optimizes for immediate clicks on content suggestions. Their platform promotes clickbait, leading to lower customer engagement over time.
Wrong objective function: A national pizza company’s data team needs to forecast delivery times for online orders, but their model focuses on the 50th percentile — providing the median delivery time, but not accounting for variability. That leads to longer-than-expected wait times and customer dissatisfaction (for half of their orders — yikes!).
Wrong model type: A retailer’s data team fits a simple regression of conversion on price. This approach ignores that demand drives both prices and conversion, and makes it look like higher prices actually drive conversion. The result is prices that are way too high.
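To make that last failure concrete, here’s a minimal simulation (all numbers hypothetical) of how a hidden demand confounder can flip the sign of a naive price–conversion regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hidden confounder: demand surges (holidays, promotions) push the
# retailer to raise prices AND make shoppers likelier to convert.
demand = rng.normal(0.0, 1.0, n)
price = 10.0 + 2.0 * demand + rng.normal(0.0, 0.5, n)

# The TRUE causal effect of price on conversion is negative (-0.05/$),
# but the demand effect (+0.15) swamps it in the observed data.
conversion = (0.5 - 0.05 * (price - 10.0) + 0.15 * demand
              + rng.normal(0.0, 0.02, n))

# A naive regression of conversion on price ignores the confounder
# and estimates a POSITIVE slope: "higher prices drive conversion."
naive_slope = np.polyfit(price, conversion, 1)[0]
print(f"naive slope: {naive_slope:+.4f}")  # positive, despite the true -0.05
```

Fixing this requires a causal framing (experiments, or controlling for demand), not a bigger model.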
Peeling back a layer, problem framing means having clarity on 3 things:
What you’re trying to predict (like units that will be sold or customer churn)
How you’ll score those predictions (your objective function, e.g. mean absolute error or root mean squared error – aka MAE or RMSE)
How the predictions will be used (such as buying inventory, adjusting prices, or prioritizing customer retention strategies)
In other words, problem framing requires business acumen (What problem needs to be addressed?), data expertise (What data can we use to make relevant predictions?), and mathematical skills (What model will produce the best results, and how will we evaluate its performance?).
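The objective-function choice from the pizza example can be made concrete with quantile (pinball) loss. A quick sketch, using made-up delivery times: minimizing pinball loss at q=0.5 recovers the median, while q=0.9 produces a quote high enough to cover 90% of orders:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: the constant prediction that
    minimizes it is the q-th quantile of y_true."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Hypothetical delivery times (minutes) with a long right tail.
rng = np.random.default_rng(42)
times = rng.lognormal(mean=3.3, sigma=0.4, size=10_000)

# Grid-search the best constant "quote" under each objective.
candidates = np.linspace(10, 90, 801)
quote_p50 = candidates[np.argmin([pinball_loss(times, c, 0.5) for c in candidates])]
quote_p90 = candidates[np.argmin([pinball_loss(times, c, 0.9) for c in candidates])]

print(f"median quote: {quote_p50:.0f} min, p90 quote: {quote_p90:.0f} min")
# Quoting the median means roughly half of orders arrive later
# than promised; the p90 quote is substantially higher.
```

Same data, same model family, different objective: the framing decision alone determines whether half your customers wait longer than promised.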
And then you need conviction that your combination of the above will actually move your business forward. This is where well-intentioned but fast-moving data science teams run into trouble: if they start looking at data and models without nailing down these details, they can easily end up solving the wrong problem.
It’s important to realize that getting this right is hard. Of course the theoretical answer is that every model should optimize for long-term shareholder value; but that’s completely impractical. In practice, models must optimize for a proxy (or set of proxies) of long-term profits, and sometimes the easiest proxy ends up so far from real profits that you catch yourself asking, wait, what? That’s how you end up optimizing for clicks.
The costs of getting it wrong
Why is poor problem framing the costliest mistake? Because if you get it wrong, everything that comes next is wasted effort. Bad problem framing burns time, burns trust, and can even cost data leaders their jobs.
Lost time
As we’ve discussed before, machine learning is tedious work. It takes months, maybe quarters, for data science teams to build and test models — which means it can take eons before business users have the chance to figure out that the problem framing is flawed. It’s easy to spend six months building out tech that doesn’t solve the right problem for the business.
Lost trust
Machine learning can be convoluted and intimidating to business leaders. They can be skeptical of the data team’s abilities, or threatened by the possibility of losing headcount to automation.
Many choices in ML are opaque, like exactly what data is used to train a model — but if the problem framing is off, most business leaders worth their salt can sniff it out. If your model doesn’t actually solve the right problem, your business stakeholders will be quick to deem it a failure — and far less likely to collaborate on ML initiatives in the future.
Lost job
Screwing this up is a common way data science leaders get fired. If you tell your CEO your models will optimize the company homepage for revenue, but it turns out they are just optimizing for clicks – and it turns out that more clicks alone are actually hurting revenue (a common occurrence)... let’s just say that’s not going to end well. On the bright side, if you get the framing right and can explain it crisply, you’ll be seen (and rewarded) as a unicorn.
How to get problem framing right
In short: do the work. Invest the time.
1. Data Science leaders have to own this
As a data science leader, this is where I would personally spend most of my technical time. Every time my team brought me a model they were working on, I’d spend the most energy working through the problem framing.
This work is tricky and nuanced. Although there are commonalities across businesses, there’s no single textbook answer to problem framing, and every organization works differently — but as a leader, ensuring your team is working on the right problem, with the right business buy-in, is your responsibility.
By extension, if you are managing data science leaders, you need to make sure they are clearly accountable for getting this right.
2. Scientists need to be shoulder to shoulder with the business
As we said earlier, getting to the right answer on problem framing isn’t purely a tech problem or a data science problem. You need to solve for what the business actually needs. And that means addressing a lot of friction.
Most data scientists and engineers enjoy writing code, playing with data, and building models. Interviewing and extracting insights from business stakeholders? Not so much. Especially when those stakeholders are very busy and don’t understand the tech your team is working on to solve their problems.
When neither side fully grasps what the other needs, problem framing goes wrong. So start building empathy early and often. The data team and business stakeholders need to sit together, developing a shared understanding of how the work currently gets done and how the proposed ML can help solve the problem.
3. Write down your problem framing
Every team should have a document which explains their problem framing. This sounds obvious, but is annoying to do — and invites criticism. However, you want to make sure you get that criticism at the outset of a project rather than 10 months in.
Data leaders should know this document end to end. Having a single source of truth documenting what your models are solving for and how they do it aligns your internal data and tech teams, and provides visibility for your non-technical stakeholders.
4. Problem framing will get harder over time
Be prepared to spend more time, not less, on problem framing as your data science organization matures. That may seem counterintuitive, but when you have dozens of ML models, you can end up in a feedback loop where one feeds into another — creating unexpected side effects.
For example, one data leader at a freight tech marketplace recounted how their team’s valuable ML model, which predicted the auction ceiling and floor of drivers’ possible bids for shipments, was compromised when a new “autobid” feature was introduced by another team. The new automation routed its autobid data back for training — essentially causing the core ML model to no longer predict actual marketplace behavior, but only the product’s own automation. By the time the data leader discovered what was happening, this feedback loop had caused millions of dollars in lost revenue.
5. Actively evolve your problem framing
Most teams refine their ML projects by evolving the model or improving the data — since they control those pieces directly, that’s the easiest way to see results.
But we see a huge, untapped opportunity in thinking hard about the higher-level problem framing.
Revisit what you’ve learned since you first tackled this problem.
Run experiments to really learn how long-term effects play out in your business, and how they might be incorporated into your model.
Consider what new business lines or considerations may impact your model now.
This work is typically more challenging — technically and organizationally — but it can be game-changing.
Problem framing isn’t a simple endeavor but it’s absolutely fundamental. By putting in the elbow grease early, getting the right people working together, and making sure to keep moving the ball forward, you can lay a key foundation for ML success.
What ended up happening at Eats? We did take a step back on a variety of our approaches – and also asked how our models might take advantage of synergies with the Rides business, like the shared driver base. The Eats business has grown 4x since then. Perhaps most importantly, I sleep better now.