Tech's Optimization Power
Be Careful Where You Point That Thing
Are you mad about social media spreading misinformation and hate? Have you ever repeatedly deleted and reinstalled a social app that you just can’t stop checking? There is no evil plot by technology companies to waste your time or break society. The problem is that technology companies are much better at optimizing for goals and metrics than they are at choosing them. We have billions of dollars and thousands of the brightest minds working on boosting engagement on Facebook, Instagram, YouTube, TikTok, and the rest, and it’s working spectacularly, but causing harm in the process. This post explains how significant societal good, and business success, can result from selecting better optimization targets.
Google searches don’t have nearly as much of an issue because the user clearly expresses what they want; the AI system is well-aligned with our goal and can give the best result possible. Feeds, on the other hand, are where many of these issues arise. The feed “knows” it should give you something that you’ll interact with, but it doesn’t have any other constraints, aside from not showing anything banned or illegal. To solve this, we need to either add constraints that will limit the selection of bad content, or simply optimize for something other than pure engagement that’s more aligned with improving people’s lives.
Capitalism for Good
Capitalism is at its best when an enterprise is both good for the world and profitable, which allows the good to self-sustain and scale.
Similarly, for consumer apps, the magic quadrant is when a service is both highly aligned with the user’s needs and desires, and highly engaging. A service generally has to be engaging for it to have any hope of success (a platform for eating boiled vegetables at scale isn’t going to do very well). For a service to better a person’s life, it also has to be aligned with their goals.
Technology for Good
Feeds can be more engaging than search; they’re an easy way to discover interesting new content (and provide an easy variable reward loop), but they also know far less about the user’s goals.
A Reflection of Humanity
It seems strange that feeds (or recommender systems more generally) would promote something that has poor user alignment. Don’t they just show people what they like? People will engage with an outrageous but false piece of content, but on a longer timescale, people prefer not to feel outraged, stressed, alienated, or inadequate, and prefer to make progress towards their best selves. This short-term optimization is harming users, dangerously shaping their beliefs, and causing foundational issues for society.
With that said, there is plenty of content in the high-engagement, high-alignment quadrant, and social feeds do surface these things as well. My TikTok feed can actually become pretty informative if I intentionally scroll past the junk and press “like” on all of the science videos I see. The Humans of New York posts on Instagram provide millions with feelings of happiness and connection. There is real benefit from this content, and it’s worth acknowledging as well.
In a sense, social media is a reflection of humanity, with both the good and the bad. Nobody at Facebook is trying to spread conspiracy theories, but apparently people love creating and sharing this stuff. Unfortunately, people are wired to pay more attention to negative stimuli than positive, and care deeply about their position in the social hierarchy. Facebook has spent billions on a society-level experiment, answering the simple question, “What kinds of information do people like to engage with?” Now we have the unfortunate answer.
How can we change the goal of the feed algorithm? It’s technically possible: in addition to engagement, the algorithm also needs to know whether a piece of content helps or hinders a user in achieving her goals. Unfortunately, this is much harder to measure than engagement, but it could be done if the user can provide input to the ranking algorithm (either by changing parameters or via natural language), or indicate this information through their use of the product (e.g., Duolingo naturally knows whether you’re learning when you get an exercise right).
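To make this concrete, here is a minimal sketch of what such a ranking objective could look like: engagement blended with a user-declared alignment signal. Every name, weight, and data structure here is an illustrative assumption, not any platform’s actual API or model.

```python
# Hypothetical feed-ranking sketch: rank by engagement PLUS a
# user-controlled "alignment" signal, instead of engagement alone.
# All names and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # e.g., a model's click/"like" probability
    topic: str

def rank_feed(posts, topic_weights, alignment_weight=0.5):
    """Score = engagement + alignment_weight * user-declared topic preference.

    topic_weights maps topics to [-1, 1]: the user's own statement of what
    helps (+) or hinders (-) their goals.
    """
    def score(post):
        alignment = topic_weights.get(post.topic, 0.0)
        return post.predicted_engagement + alignment_weight * alignment
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("Outrage bait", predicted_engagement=0.9, topic="outrage"),
    Post("Science explainer", predicted_engagement=0.6, topic="science"),
]
# The user tells the ranker that science content helps and outrage hurts,
# so the less "engaging" science post now outranks the outrage bait.
ranked = rank_feed(posts, {"science": 1.0, "outrage": -1.0})
```

The point of the sketch is the shape of the objective, not the specific numbers: once the user can set `topic_weights` (by changing parameters or via natural language), pure engagement stops being the only thing the system can optimize.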
The problem is, this wouldn’t happen at Facebook, Instagram, or the others. They make money from views, and they have little financial incentive to decrease this metric, unless it gets so bad that users start to leave. Every 1% drop in usage costs them $250M in annual profit. So the question becomes: how do we change the goal of the company?
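The back-of-the-envelope arithmetic behind that figure is simple, assuming profit scales roughly linearly with usage. The ~$25B annual-profit baseline below is an assumption implied by the $250M figure, not a reported number.

```python
# Back-of-the-envelope check of the "$250M per 1% of usage" figure,
# assuming profit scales linearly with usage.
annual_profit = 25e9   # assumed annual profit in dollars (illustrative)
usage_drop = 0.01      # a 1% decline in usage
lost_profit = annual_profit * usage_drop
print(f"${lost_profit / 1e6:.0f}M")  # → $250M
```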
At a high level, there are three options to improve this situation.
1. Company Altruism
In this scenario, the familiar players competing for our attention choose to do the right thing, even if it’s against their economic interest. Business models that aren’t predicated on harvesting attention (Apple’s iOS and Google’s Android) have no problem with rolling out “Time Well Spent” and “Digital Wellbeing” features, respectively. To their credit, Facebook and others have embraced the “Time Well Spent” mission, giving users tools to observe how much time they’re spending, more control over notifications, and making Instagram “likes” less prominent. YouTube has introduced “Take a break” notifications as well.
However, for attention economy companies, these initiatives are against their very DNA, so more systemic reform is highly unlikely.
2. Regulation
It’s profitable to sell organs, or to mine without regard for the environment, but we, as a society, have decided that this is unethical and should not be allowed. It’s much more effective to prevent a company that sells organs from existing than to let it exist and hope its executives do the right thing. Every company tries to maximize its profits within the constraints of the market it lives in. If we start to see attention as a precious resource, and regulate how, and how much, it’s extracted, we could create new constraints that the private sector would have to work within.
There are two challenges with this approach. The first is that technology evolves much more rapidly than our ability to measure negative externalities and create mitigations. The second is that it’s difficult to create regulation that actually solves the core issues, doesn’t throw out the good with the bad, and doesn’t entrench the established players. GDPR had its heart in the right place, but its biggest effect was annoying users, and making it harder for new players to challenge Facebook and Google’s primacy in advertising. Unfortunately, preventing fake news from spreading on social platforms will be even harder to get right.
Some promising examples of digital wellness from non-corporate entities include the Wellbeing Economy Alliance, and efforts to move beyond GDP as the core measurement of a country’s success.
3. New Companies with Better Incentives
If a user is paying you, you’re going to try and solve that user’s problems. If an advertiser is paying you, you’re going to do whatever you can to provide that advertiser with the best eyeballs at the lowest price.
Unfortunately, Facebook isn’t about to move to a subscription model. Average revenue per user (ARPU) in Canada and the US is roughly $40/quarter (about $13/month), a sum most users would likely not be willing to pay, so a move to subscriptions would hit revenues substantially. This is an indictment of the amount of value it actually adds to a user’s life: if you’re willing to pay $10/month for Spotify, Netflix, or Amazon Prime, but not Facebook, those services obviously provide you with more value.
The current situation actually looks ripe for a form of disruption, as laid out in The Innovator’s Dilemma. Facebook will act rationally by continuing to serve its current customer base and business model. It’s a great way to stave off boredom and get updates from friends, but many other important user goals are under-served by the service: improving mental and physical health, learning, connecting with humanity, and so on. The key is to align incentives from the start, so that the company’s success requires actually helping solve an important user goal, and then to apply all of the best technological tools we have to achieve it. Services following this playbook today often look basic compared to Facebook: Apple’s Screen Time, apps for meditation and learning, and therapy-as-a-service, to name a few. However, they are just scratching the surface of what’s possible, and as these services improve, they will eventually become much more compelling ways to spend our time than mindlessly scrolling through a feed.
Aligning Users’ Short and Long-Term Interests
You might be thinking, how will a meditation app ever compete for attention with Facebook? We can use a food metaphor to illustrate. You’re a parent who is trying to get your kid to become healthy by eating better, but currently they eat lots of cookies and candy. Here’s what you might do:
Limit the amount of candy on hand. If it’s just as easy to eat candy as vegetables, candy will usually win.
Find a recipe your kid likes and make the vegetables as delicious as possible.
Create other rewards for eating well, like boosting their allowance.
Help them find a goal for which it helps to be healthier, like making the high school soccer team.
Track and celebrate progress along the way, and encourage their friends to do the same.
Eventually, it’ll be an ingrained habit, the kid will no longer even want candy, and they’ll live a healthier, happier life. Crucially, they might have wanted to be healthier in a general sense at the outset, but short-term impulses generally trump what’s best for us longer-term. External intervention can help align short-term desires and actions with long-term goals. Once the candy was removed, maybe the broccoli truly was the easiest path to short-term reward.
Today, consumer apps can tweak some parts of this equation, but their ability to intervene is limited compared to that of a parent. Someday, software will be able to intervene in our lives in far-reaching and subtle ways, and might even shape our behavior deterministically (if one is to believe Yuval Noah Harari). If that’s the case, the difference between being continuously glued to our feeds and becoming our best selves comes down to incentives.
Further Reading
Aligning Recommender Systems as Cause Area. The original inspiration for this post; lots of great ideas and context in there.
Big Tech’s attention economy can be reformed. Here’s how. Tristan Harris has been championing this cause for years and has much more developed thinking on the topic.
The Attention Merchants: How Our Time and Attention Are Gathered and Sold. Tim Wu gives a fascinating background on the history and power of capturing and profiting from attention.