I’ve always been interested in knowing more about the University of Chicago economist John List, so when I heard he had written a book on a topic that sounded interesting, I was eager to read it. The Voltage Effect is mainly focused on scaling ideas, but he uses it as a vehicle to talk about his research, which uses field experiments to test different hypotheses and try out novel approaches.
To start off, he described what exactly he meant by voltage in this context. “These cases are all examples of a voltage drop: when an enterprise falls apart at scale and positive results fizzle. (The term ‘voltage drop’ comes from the literature of implementation science and can be traced to the work of Amy Kilbourne and co-authors.) Voltage drops are what happens when the great electric charge of potential that drives people and organizations dissipates, leaving behind dashed hopes, not to mention squandered money, hard work, and time…According to Straight Talk on Evidence…between 50 and 90 percent of programs will lose voltage at scale.” (13)
The structure of the book is to lay out five things that can cause problems with scaling, and then go into four things that you should seek out when trying to scale. “It [the book] unpacks the Five Vital Signs, or the five key signature elements that will cause voltage drops and prevent an idea from taking off. The first is false positives – these are cases where there was never any voltage in the first place, though it appeared otherwise. The second is overestimating how big a slice of the pie your idea can capture. Often this is the result of failing to know your audience – or assuming that the small subset of people who have bought into your idea are more representative of the general population than they actually are, so that when you expand your idea it falls short for a broader set of people. The third is failing to evaluate whether your initial success depends on unscalable ingredients – unique circumstances that can’t be replicated at scale. The fourth is when the implementation of your idea has unintended consequences or spillovers, that backfire against that same idea. And the fifth is the ‘supply-side economics’ of scaling – for instance, will your idea be too costly to sustain at scale?” (16)
The section on false positives was pretty intuitive – you can’t scale something if it doesn’t actually work. He had good examples of testing things at a small scale and in multiple domains: don’t only test an idea in one type of city, but make sure to test it in various places. In his own work, he found success in one small experiment, but thankfully ran another small experiment before rolling the policy out to the wider company. The second test showed it didn’t work, and he saved money and credibility by not rolling it out. I liked the principle of subjecting an idea to independent replication. “Of course, showing data in a way that highlights the strengths of an idea or downplays the weaknesses is much different from knowingly falsifying data, but the solution to both is the same: independent replication, which is just as crucial for businesses as it is for scientists. If someone has an idea, it should be someone else with no stake in the idea and who doesn’t stand to benefit financially who should test it out, or at least replicate it before it is shipped. Otherwise, incentives potentially conflict with full honesty.” (42) I’m always a fan of setting things up so that an idea gets pushback, and of encouraging a person or people to play the role of devil’s advocate. “As we have seen, organizations don’t always incentivize employees to speak the truth, and in many cases the person who comes up with an idea is also the sole tester of the idea. This speaks more broadly to the need for every business and organization to have a devil’s advocate deputy, team, and/or function built into its structure – in other words, a force that is always pushing for more data, more proof…By now I hope it’s clear that the most hazardous obstacle to successful scaling is not ignorance. It is the illusion of knowledge, arising from either misleading data, hidden biases, or outright deception.” (43)
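The book doesn’t put numbers on this, but a minimal simulation sketch (my own illustration, assuming an idea with no real effect and a 5 percent false-positive rate per independent test – not anything from List) shows why requiring an independent replication filters out so many false positives:

```python
import random

# Illustrative sketch: an idea with no true effect still "passes" a single
# noisy test at the assumed false-positive rate. Requiring an independent
# replication roughly multiplies those odds together.
ALPHA = 0.05        # assumed false-positive rate of one test
TRIALS = 100_000    # number of truly null ideas simulated

single_pass = 0
replicated_pass = 0
for _ in range(TRIALS):
    first = random.random() < ALPHA    # original team's test
    second = random.random() < ALPHA   # independent replication
    single_pass += first
    replicated_pass += first and second

print(f"survive one test:            {single_pass / TRIALS:.2%}")      # ~5%
print(f"survive the replication too: {replicated_pass / TRIALS:.2%}")  # ~0.25%
```

Under those assumptions, roughly one in twenty null ideas survives a single test, but only about one in four hundred survives an honest replication as well – which is the statistical case for List’s devil’s-advocate function.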
The “know your audience” section was pretty good as well. A big emphasis was not taking a one-size-fits-all approach, but instead letting people in different places tailor aspects to meet the needs of the people in that area. He also talked about the importance of finding the right set of people to test an idea on. “This example highlights that there is always the risk – or temptation – that researchers may deliberately seek out a specific population that stands to benefit most from the program, product, or medication in order to show large effects, since this can increase the chance of public recognition and further funding or investment. Likewise, it may be less expensive to convince people to participate in a study if those people expect to benefit from it.” (59)
One of my favorite parts of the book was the section on “Is it the Chef or the Ingredients,” focusing on what the non-negotiables are in your organization. What can you compromise on and what can you do without? The example of many restaurants having an amazing chef that can’t scale versus Jamie Oliver having restaurants that focus on great, simple ingredients that many chefs could produce was vivid and memorable. “For an idea or enterprise – not just restaurant chains [like those of Jamie Oliver] – to hold strong at scale, you need to know what the drivers of high performance are and do everything in your power to keep them in place. To achieve this, before anything else you must determine if your secret sauce is the ‘chef’ or the ‘ingredients.’ In other words, does your success at small scale rest largely on the people indispensable to your idea or product, say the engineer who built the platform your business runs on, or the celebrity spokesperson who fundraises for your nonprofit – or is it the idea or product itself? If it involves people, a key piece to understand is whether those responsible for implementing the idea will be faithful to its ingredients…Knowing this isn’t half the battle. It’s the whole battle. If your answer is the ‘chef’ (people), there will likely be a limit to how big you can get, since, as we’ve seen, people with unique skills are inherently unscalable…When it comes to ingredients, you must know your negotiables and non-negotiables, then figure out whether your non-negotiable ingredients – the ones your enterprise can’t survive without – are in fact scalable.” (75)
I liked the section on spillovers as well. It talks a lot about the unintended consequences of actions, like how additional safety features can backfire, or how wage transparency can have negative effects. “While Peltzman’s paper [on how increased auto safety regulation didn’t lead to fewer injuries] was controversial at the time – unsurprisingly, it was politicized by pro- and anti-regulation advocates – much research in the intervening years has borne out similar conclusions in other domains. It turns out people have a tendency to engage in riskier behaviors when measures are imposed to keep them safer. Give a biker a safety helmet and he rides more recklessly – and, even worse, cars around him drive more haphazardly…In short, safety measures have the potential to undermine their own purpose. This phenomenon – which came to be known as the Peltzman effect – is often used as a lens for studying risk compensation, the theory that we make different choices depending on how secure we feel in any given situation (i.e., we take more risk when we feel more protected and less when we perceive that we are vulnerable).” (90) I had heard about instances of head trauma going up after bike helmet laws were passed, so this reinforced that prior of mine. I thought the part on incentivizing workers was interesting, too. Money can incentivize the people you pay more, but it can also disincentivize the people you pay less. “As it turned out, it wasn’t the extra $5 that incentivized the higher-paid solicitors to work harder. It was the $5 less that disincentivized the lower-paid workers when they knew others were making more than them. The fact was, at odds with my original belief in the very first summer, the solicitors did talk about pay and this served to disincentivize the lower wage earners. This group shirked their duties by visiting fewer houses, and even engaged in more theft, pocketing donations at a much higher rate than the higher-paid solicitors. This is an example of the psychological phenomenon called resentful demoralization. It’s the flip side of the famous John Henry effect, which is the bias introduced into experiments when members of the control group are aware that they are being compared to the experimental group and react by trying harder than they typically would. In my field experiment, the solicitors did the opposite. The feelings of resentment resulting from the knowledge of the wage disparity (i.e., that they were being underpaid compared to their peers) created an unintended effect that undercut our fundraising…The idea is that salary data should be made publicly available – at least inside the company – so that everyone knows how much everyone else is making, from the bottom all the way up to the top. Based on my experience with the baseball fundraising, one would expect that making salaries transparent could drive resentful demoralization if people saw peers making more.” (99) There is more nuance to it, though: they found that when managers made more than their workers thought they did, the workers actually worked even harder, reinforcing the idea that workers work hard when they feel they can get a good return on their effort.
The Cost Trap section made sense – good ideas have to generate profit, not just revenue. One thing that was interesting was the idea of being able to scale with good, but not great, workers. “It may sound counterintuitive, or even idiotic, not to search out the best talent in the early stages of your endeavor. When it comes to, say, designing new, innovative hardware that will be scaled, I’m not saying to choose mediocre computer engineers. After all, the hardware or digital interface must be of high quality; that’s a non-negotiable. But if maintaining that hardware at scale will require forty thousand technicians, the reality is that not all of them will be five-star workers, so those in the development phase shouldn’t be, either. Hiring less-excellent technicians is admittedly not ideal, but it is a negotiable…Ideal conditions are not realistic in most cases, so you must ask yourself a question infused with a rude dose of reality: will you actually be able to hire the best people at scale, or will either budget restriction or the finite pool of talented candidates make this impracticable?” (126) I agree that you will eventually have to rely on workers like this, but I’m not sure I totally buy his assertion that you should use them the entire time just to see how mediocre people do running the show.
I did like the part on Incentives that Scale. This had a lot of stories from his time at Uber, but also studies related to reducing jet fuel usage and improving tax collection in the Dominican Republic. The part I found most interesting dealt with incentives for teachers and students. While many find the idea of paying teachers more for higher test scores, or students more for better performance, unappealing, it seems to work for certain cohorts. They tested giving teachers a bonus if their students did well, versus giving teachers the bonus up front and clawing it back if their students didn’t do well, and found that the clawback resulted in better performance from the teachers. They then tried a similar experiment with students. “This time we took it in the opposite direction of the clawback, and told some students that if they performed well on the test, they would receive their reward one month after the test. Suddenly, the incentives didn’t matter. That is, rewards – even large ones – delivered with a delay didn’t impact students’ performance at all. This finding suggests that one explanation for students’ low investment in their own education, and their high dropout rate, is that the current returns (get into college, get a higher-paying job, et cetera) are delivered with too long a delay to sufficiently motivate some students. After all, if delaying an incentive by just one month can tank motivation, the abstract prospect of better opportunities in the far-off future clearly won’t be very persuasive…When it comes to incentives, timing is everything.” (155)
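The book doesn’t model this formally, but a toy discounting sketch (my own numbers and plain exponential discounting – the students’ behavior looks even more extreme than this, closer to pure present bias) gives a feel for how delay erodes the perceived value of a reward:

```python
# Toy illustration (assumed numbers, not the book's data): the present value
# of a reward shrinks as the delay before receiving it grows.
def present_value(reward: float, delay_months: float, monthly_discount: float = 0.8) -> float:
    """Exponential discounting: each month of delay scales value by the discount factor."""
    return reward * (monthly_discount ** delay_months)

for delay in (0, 1, 12, 48):  # immediate, one month, one year, four years
    print(f"{delay:>2} months out: a $20 reward feels like ${present_value(20, delay):.2f}")
```

Even this mild model cuts a $20 reward to about $1.40 when it is a year away and to essentially nothing when it is four years away, which is roughly the distance between a high schooler’s effort today and the “better job” payoff the quote refers to.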
I liked the chapter on Revolution on the Margins. I read the blog Marginal Revolution with some regularity, but was not as familiar with the context of the name as I could have been. It’s often easier to calculate the average benefit or cost than the marginal benefit or cost, but when making decisions, it’s better to focus on the margin rather than the average. He talked about this in the context of working for the Council of Economic Advisers. “This chore [benefit-cost analysis of implementing policies on a large scale] is an important one because the more than one hundred federal agencies issue approximately 4,500 new rulemaking notices each year. Of those, about fifty to a hundred per year meet the necessary condition of being ‘economically significant’ (more than $100 million yearly in either benefits or costs). Every economically significant proposal, then, receives a formal analysis of the benefits and costs.” (161) I also liked the extra context on why it matters to look at the impact of the next unit rather than just the average. “And – staying true to my economics training – once I looked at all the charts and graphs and figures across the various agencies, I knew that if we wanted to identify and prioritize the policies that got the most out of taxpayer money, just as Justice Breyer had suggested [in his book Breaking the Vicious Circle, which advocated that the obligation of government was to use money to scale initiatives that improve as many lives as possible], we needed to look not at the positive impact per dollar spent on average, but the positive impact of the last dollar spent. That’s because the benefit-cost averages that lumped all the dollars together were obscuring more specific figures revealing that certain policies became much less impactful the more they were scaled…In the late nineteenth century, the field of economics took an intellectual leap forward that came to be known as the Marginal Revolution…Going beyond the limited concept of supply versus demand, Jevons, Menger, and Walras introduced the utility function, or the theory of utility, into the discussion of value…Everything we spend money on provides a certain amount of satisfaction, or utility, whether we are paying to own an object, use a service, or have an experience. And this level of satisfaction determines the value we receive from goods and services…Jevons, Menger, and Walras posited that utility isn’t static: that goods and services – broken into ‘units’ – have different value to consumers depending on if they are the first or last unit consumed, or somewhere in between. The value of that final, most recent unit is referred to as the marginal utility, and it is rarely the same as the value averaged across all units.” (165)
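To make the average-versus-margin distinction concrete, here is a small sketch with made-up numbers (mine, not the book’s or the CEA’s) for a program with diminishing returns:

```python
# Hypothetical spending levels and the total benefit produced at each
# (illustrative numbers only). The average benefit per dollar stays healthy
# even as the marginal benefit of the last dollars collapses.
benefit_at_spend = {  # dollars spent -> total benefit produced
    100: 500,
    200: 800,
    300: 950,
    400: 1000,
}

prev_spend, prev_benefit = 0, 0
for spend, benefit in benefit_at_spend.items():
    average = benefit / spend
    marginal = (benefit - prev_benefit) / (spend - prev_spend)
    print(f"at ${spend}: average {average:.2f} per $, marginal {marginal:.2f} per $")
    prev_spend, prev_benefit = spend, benefit
```

At $400 of spending the average still looks like $2.50 of benefit per dollar, but the last $100 only produced $0.50 per dollar – exactly the kind of gap that averages hide and that looking at the margin reveals.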
The chapter called Quitting Is for Winners also had some good content. A bit of it was focused on the sunk cost fallacy. He had some nice examples from his own life in moving on from a focus on golf to focus instead on academics. “Pursuing such objectives requires tremendous sacrifices, the most significant of which is the opportunity costs of paths not taken. This is why it is so devastating when an idea you pour your heart and soul and time into fails to scale. It’s not just the voltage you lose. It’s all the other promising opportunities you turned down in the process…But if you quit at the right time (and ignore that sunk cost), then you can move on to scale something else – something with a better shot at success. This is what I call optimal quitting. Sometimes you have to leave behind that professional golf career you’ve been dreaming of…in order to shift gears and find a better one. And the sooner you do this, the lower the opportunity cost you’ll pay…It requires an effort that runs counter to our deep-rooted heuristics and fast way of thinking…You can see the dangers of this tunnel-vision type of thinking when aiming to scale. Rather than imagine what other ideas they could spend their time pursuing, people often zero in on various aspects of the idea they have already invested time and resources into…When you have lots of alternatives, quitting will be much less painful, both emotionally and practically.” (191) It seemed like some of his comments on comparative advantage oversimplified the concept so much that they might no longer be accurate, but overall it was an interesting chapter on the benefits of not ignoring opportunity costs and not wasting time on less useful pursuits.
The last chapter was on culture. Given that he had worked at Uber during its early days, he saw some interesting things. The chapter talks a bit about how people act differently when they have communal goals versus individual goals. “Research suggests that deep trust is a powerful factor in enabling organizations to scale, in part because it promotes cooperation and functional teamwork is essential for growth, but for other reasons as well. The lack of trust at Uber was in large part the natural endgame of an organization that was a meritocracy in name but not in spirit: people didn’t trust that the objective value of their contributions (time, ideas, and effort) would be appreciated. In other words, employees didn’t feel respected…Sure, Uber was great at attacking complacent and lazy thinking, but here is the irony: the leaders at Uber were complacent in how they thought about scaling up the company culture. No one in a leadership position – myself included – forcefully questioned Uber’s culture to the same extent employees were pushed to question its business ideas and practices…He [Travis Kalanick] knew he had made big mistakes, and he showed remorse, not just because he lost his job but because he felt that he had let his Uber team down. I don’t believe Travis Kalanick is a bad person. He is a good person who made several bad calls…at scale.” (211)
(Continued in comments)