Cal Newport's Blog
October 13, 2025
What Neuroscience Teaches Us About Reducing Phone Use
This week on my podcast, I delved deep into the neural mechanisms involved in making your phone so irresistible. To summarize, there are bundles of neurons in your brain, associated with your short-term motivation system, that recognize different situations and then effectively vote for corresponding actions. If you’re hungry and see a plate of cookies, there’s a neuron bundle that will fire in response to this pattern, advocating for the action of eating a cookie.
The strength of these votes depends on an implicit calculation of expected reward, based on your past experiences. When multiple actions are possible in a given situation, then, in most cases, the action associated with the strongest vote will win out.
One way to understand why you struggle to put down your phone is that it overwhelms this short-term motivation system. One factor at play is the types of rewards these devices create. Because popular services like TikTok deploy machine learning algorithms to curate content based on observed engagement, they provide an artificially consistent and pure reward experience. Almost every time you tap on these apps, you’re going to be pleasantly surprised by a piece of content and/or find a negative state of boredom relieved—both of which are outcomes that our brains value.
Due to this techno-reality, the votes produced by the pick-up-the-phone neuron bundles are notably strong. Resisting them is difficult and often requires the recruitment of other parts of your brain, such as the long-term motivation system, to convince yourself that some less exciting activity in the current moment will lead to a more important reward in the future. But this is exhausting and often ineffective.
The second issue with how phones interact with your brain is the reality that they’re ubiquitous. Most activities associated with strong rewards are relatively rare—it’s hard to resist eating the fresh-baked cookie when I’m hungry, but it’s not that often that I come across such desserts. Your phone, by contrast, is almost always with you. This means that your brain’s vote to pick up your phone is constantly being registered. You might occasionally resist the pull, but its relentless presence means that it’s inevitably going to win many, many times as your day unfolds.
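To make this voting metaphor a little more concrete, here's a minimal sketch in Python. Everything in it is an invented assumption for illustration (the reward probabilities, the running-average update rule, the "resistance" rate); it isn't drawn from the podcast episode or from any particular neuroscience model. It simply shows how a consistently rewarding, always-available option comes to dominate the vote.

```python
import random

# Illustrative expected-reward "votes" (all numbers are made up).
# Each candidate action has a learned value built up from past rewards.
class ActionValue:
    def __init__(self, value=0.0, learning_rate=0.1):
        self.value = value              # current expected reward
        self.learning_rate = learning_rate

    def update(self, reward):
        # Simple running-average update: nudge the estimate toward
        # the latest observed reward (a basic value-learning rule).
        self.value += self.learning_rate * (reward - self.value)

phone = ActionValue()
book = ActionValue()

random.seed(0)
for _ in range(200):
    # The phone's curated feed pays off almost every time...
    phone.update(reward=1.0 if random.random() < 0.9 else 0.0)
    # ...while a chapter of a book is rewarding less predictably.
    book.update(reward=1.0 if random.random() < 0.3 else 0.0)

print(f"phone vote: {phone.value:.2f}, book vote: {book.value:.2f}")

# Ubiquity: the phone is "on the ballot" in nearly every idle moment,
# so even occasional resistance still yields many pickups per day.
idle_moments = 50
pickups = sum(
    1 for _ in range(idle_moments)
    if phone.value > book.value and random.random() > 0.2  # resist ~20% of the time
)
print(f"pickups in {idle_moments} idle moments: {pickups}")
```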
~~~
Understanding these neural mechanisms is important because they help explain why so many efforts to reduce phone use fail—they don’t go nearly far enough!
Consider, for example, the following popular tips that often fall short…
Increase Friction
This might mean moving the most appealing apps to an inconvenient folder on your phone, or using a physical locking device like a Brick that requires an extra step to open your phone. These often fail because, from the perspective of your short-term motivation system, these mild amounts of friction only decrease your expected reward by a small amount, which ultimately has little impact on the strength of its vote for you to pick up your phone.
Make Your Phone Grayscale
There is an idea that eliminating bright colors from your phone’s screen will somehow disrupt the cues that lead you to pick it up. This also often fails because colors have very little to do with your brain’s expected reward calculation, which is based on more abstract benefits, such as pleasant surprise and the alleviation of boredom.
Moderate Your Use with Rules
It’s also common to declare clear rules about how much you will use each type of app; e.g., “only 30 minutes of Instagram per day.” The problem is that such rules are abstract and symbolic, and have limited interaction with your short-term motivation systems, which deal more with the physical world and immediate rewards.
Detox Regularly
Another common tactic is to “detox” by taking regular time away from your phone, such as a weekly Internet Shabbat, or an annual phone-free meditation retreat. These practices can boast many benefits, but they’re not nearly long enough to start diminishing the learned rewards that drive your motivation system. It would take many months away from your phone before your brain began to forget its benefits.
~~~
So what does work? Our new understanding of our brains points toward two obvious strategies that are both boringly basic and annoyingly hard to stick to.
First, remove the reward signals by deleting social media, and any other apps that monetize your attention, from your phone. If your phone no longer delivers artificially consistent rewards, your brain will rapidly reduce the expected reward of picking it up.
Second, minimize your phone’s ubiquity by keeping it charging in your kitchen when at home. If you need to look something up or check in on a messaging app, go to your kitchen. If you need to listen to a podcast while doing chores, use wireless earbuds or wireless speakers. If your phone isn’t immediately accessible, the corresponding neuronal bundles in your motivation system won’t fire as often or as strongly.
In the end, here’s what’s clear: Our brains aren’t well-suited for smartphones. We might not like this reality, but we cannot ignore it. Fixing the issues this causes requires more than some minor tweaks. We have to drastically change our relationship to our devices if we hope to control their impact.
October 6, 2025
The Great Alienation
Last week, I published an essay about the so-called Great Lock In of 2025, a TikTok challenge that asks participants to tackle self-improvement goals. I argued that this trend was positive, especially for Gen Z, because the more you take control of your real life, the easier it becomes to take control of your screens.
In response, I received an interesting note from a reader. “The biggest challenge with this useful goal Gen Z is pursuing,” he wrote, “is they don’t know what to do.”
As he then elaborates:
“Most of them are chasing shiny objects that others are showing whether on social media or in real life. And when they (quickly) realize it’s not what they want, they leave and jump on to something else…this has been a common problem across generations. But Gen Z, and youngsters after it, are making things worse by scrolling through social media hoping to find their purpose by accident (or by someone telling them what they should do).”
Here we encounter one of the most insidious defense mechanisms that modern distraction technologies deploy. By narrowing their users’ worlds to ultra-purified engagement, these platforms present a fun-house mirror distortion of what self-improvement means: shredded gym dwellers, million-subscriber YouTube channels, pre-dawn morning routines. Because these “shiny” goals are largely unattainable or unsustainable, those motivated to make changes eventually give up and return to the numbing comfort of their screens.
By alienating their users from the real world, these technologies make it difficult for them to ever escape the digital. To succeed with the Great Lock In, we need to resolve the Great Alienation.
~~~
At the moment, I’m in the early stages of writing a book titled The Deep Life. It focuses on the practical mechanisms involved in discerning what you want your life to be like and how to make steady progress toward these visions.
At first glance, this might seem like an odd book for me to write, given that my work focuses primarily on technology’s impacts and how best to respond to them. When we observe something like Gen Z’s struggles with the Great Lock In, however, it becomes clear that this book’s topic actually has a lot to do with our devices. Figuring out how to push back on the digital will require more attention paid to improving the analog.
September 29, 2025
The Great Lock In of 2025
If there’s one thing that I’m always late to discover, it has to be online youth trends. True to form, I’m only now starting to hear about the so-called “Great Lock In of 2025.”
This idea began circulating on TikTok over the summer. Borrowing the term ‘lock in’, which is Gen Z slang for focusing without distraction on an important goal, this challenge asks people to spend the last four months of 2025 working on the types of personal improvement resolutions that they might otherwise defer until the New Year. “It’s just about hunkering down for the rest of the year and doing everything that you said you’re going to do,” explained one TikTok influencer, quoted recently in a Times article about the trend.
Listeners of my podcast know that I’m a fan of the strategy of dedicating the fall to making major changes in your life. My episode on this topic, How to Reinvent Your Life in 4 Months, which I originally aired in 2023 and re-aired this past summer, is among my most popular – boasting nearly 1.5 million views on YouTube.
To me, however, the more significant news contained in this trend is the generalized concept of ‘lock in’, which has become so popular among Gen Z that the American Dialect Society voted it the “most useful” term of 2024.
Critically, ‘lock in’ seems to have been defined in reaction to smartphones. “I think that we live in an era where it’s very easy to be distracted and that we’re on our phones a lot,” explained language science professor Kelly Elizabeth Wright, in the Times. “‘Lock in’ really came up in these last couple years, where people are saying, like, ‘I have to make myself focus. I have to get into a state where I am free from distraction to accomplish, essentially, anything.’”
This matters because defining positive alternatives to negative habits is an effective way to reduce them.
When I first published Deep Work, which centers on the importance of undistracted focus in your professional life, people already knew that spending their days frantically checking email probably wasn’t good. They only felt motivated to change, however, when presented with a positive alternative.
Ideas like ‘locking in’ might provide a similar influence for Gen Z’s collective smartphone addiction. It’s one thing to be told again and again that your devices are bad, but it’s another to experience a clear vision of the good that’s possible once you put them away. When you experience life in its full analog richness, the allure of the digital diminishes.
The irony of the Great Lock In of 2025, of course, is that it started on TikTok – the ultimate digital distractor. My hope is that by the New Year, this challenge will no longer be trending; not because people gave up, but because they’re not online enough anymore to talk about it.
September 22, 2025
Does WiFi Make Students Smarter?
At a time when educators are increasingly concerned about technology’s impact in the classroom, the Washington Post published an op-ed with a contrarian tone. The piece, written by the journalism professor Stephen Kurczy, focuses on Green Bank, a small town in rural West Virginia, home to the world’s largest steerable radio telescope. Due to the sensitivity of this device, the entire area is a congressionally designated “radio quiet zone” in which cell service and WiFi are banned.
The thought of a disconnected life might sound refreshing, but as this op-ed argues, there’s one group for which this reality might be causing problems: the students in Green Bank’s combined elementary and middle school.
“Without WiFi, the 200 students couldn’t use Chromebooks or digital textbooks, or do research online,” Kurczy writes. “Teachers couldn’t access individualized education programs online or use Google Docs for staff meetings.”
Some teachers in the school are frustrated. “The ability to individualize learning with an iPad or a laptop – that’s basically impossible,” explained one teacher, quoted in the piece. “Without the online component of our curriculum fully working, it’s really detrimental to our instruction,” said another.
These concerns aren’t merely hypothetical. As Kurczy points out: “Green Bank consistently [posts] the lowest test scores in the county.” He quotes the school’s principal, who blames this on the students’ “lack of access to engaging technology.”
The message of this op-ed is clear. At a time when we’re rushing to condemn phones in classrooms, we should be careful not to extend this ire to other ed-tech innovations, as without these, students struggle.
It’s a tidy point. But is it true? I decided to dig a little deeper…
To start, the claim that Green Bank posts the lowest scores in the county is easily confirmed. But there’s a caveat here: Pocahontas County, which includes Green Bank, is small. It includes only one other middle school and two other elementary schools, so even modest differences in the student populations can create big changes in measured performance.
The only other middle school in the county, for example, does boast higher test scores, but it also serves only around 100 students, meaning that a small cohort of more advantaged children could explain the entire gap. (It’s perhaps notable that this higher-performing school is co-located with a hospital and across the street from a country club.)
What we really need is time-series data. The classroom iPad/Chromebook revolution took off in the 2010s, so if the lack of WiFi is what’s holding back Green Bank, we should see a unique decline in their performance starting last decade.
I couldn’t find time-series data for individual schools, but I could for individual counties in West Virginia. Given the small size of Pocahontas County and the fact that roughly half of its elementary and middle school-aged students attend the school in Green Bank, if the lack of WiFi is really negatively impacting the student population, this should be reflected in the county-level data on performances in grades 3 to 8.
So what do these data actually teach us? First, let’s look at scores on standardized math tests in Pocahontas County over time.
[Chart: Pocahontas County standardized math test scores over time]
These scores had been steadily increasing, but then, around 2017, they began to drop. Starting in 2022, we see the beginning of a post-pandemic recovery.
The timing here seems to roughly align with the WiFi hypothesis: if iPads and Chromebooks took off last decade, then we might expect to see a negative impact on performance in Green Bank right around this point.
To run a proper controlled analysis, however, we need to compare these changes to similar counties in West Virginia that had full access to WiFi. Fortunately, we have these results.
The following chart measures both the magnitude of the performance drop from 2019 to 2022 and the magnitude of the subsequent recovery from 2022 to 2024. It compares Pocahontas County to the entire state, as well as to a set of five counties with similar population sizes, demographics, and socio-economic status.
[Chart: 2019-2022 score drop and 2022-2024 recovery, Pocahontas County vs. statewide and five similar counties]
The result?
Compared to other counties in the state, Pocahontas County schools had a smaller performance drop and larger recovery. Put another way: the county in which nearly half of the measured students lacked access to WiFi did better than other counties with similar student populations and full access to classroom technology.
The more plausible story told by this data is that rural West Virginia schools are struggling, and something appears to have made this worse around 2015 to 2017 (most likely deteriorating economic conditions). But the solution to these problems is likely not as simple as getting more internet-connected Chromebooks into the students’ hands.
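For what it's worth, the drop-and-recovery comparison behind that chart is simple arithmetic once you have county-level proficiency rates. Here's a minimal sketch with placeholder numbers (hypothetical, not the actual West Virginia data) showing how each county's change scores could be computed:

```python
# Hypothetical proficiency rates (percent of students scoring proficient).
# These values are placeholders for illustration, not the real WV data.
counties = {
    "Pocahontas": {2019: 40.0, 2022: 34.0, 2024: 41.0},
    "Comparison A": {2019: 42.0, 2022: 31.0, 2024: 35.0},
    "Statewide": {2019: 45.0, 2022: 36.0, 2024: 40.0},
}

for name, scores in counties.items():
    drop = scores[2022] - scores[2019]        # pandemic-era change (negative = decline)
    recovery = scores[2024] - scores[2022]    # post-pandemic change
    print(f"{name:>13}: drop {drop:+.1f} pts, recovery {recovery:+.1f} pts")
```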
(That being said, the fact that this school is using old technology is a problem, just for other reasons. As Kurczy’s reporting reveals – he wrote an entire book on this town – the teachers in Green Bank are frustrated. They feel left behind by the county, and they are missing out on the productivity gains we take for granted, like the use of shared documents or the ability to easily distribute assignments online.)
The big news coming out of Green Bank is that the school district has finally negotiated an agreement with the observatory to allow classroom WiFi, and, I guess, I’m happy to hear it.
However, the more important reminder here – and this applies to me as much as anyone else – is that when it comes to writing about technological impacts, we have to be wary of motivated reasoning. Just because something feels like it should be true doesn’t mean that it necessarily is. The data often – frustratingly – paints a more nuanced picture.
September 12, 2025
On Charlie Kirk and Saving Civil Society
Many of you have been asking me about the assassination of the conservative commentator Charlie Kirk earlier this week during a campus event at Utah Valley University. At the time of this writing, little is yet known about the shooter’s motives, but there have been enough cases of political violence over the past year that I think I can say what I’m about to with conviction…
Those of us who study online culture like to use the phrase, “Twitter is not real life.” But as we saw yet again this week, when the digital discourses fostered on services like Twitter (and Bluesky, and TikTok) do intersect with the real world, whether they originate from the left or the right, the results are often horrific.
This should tell us all we need to know about these platforms: they are toxic and dehumanizing. They are responsible, as much as any other force, for the unravelling of civil society that seems to be accelerating.
We know these platforms are bad for us, so why are they still so widely used? They tell a compelling story: that all of your frantic tapping and swiping makes you a key part of a political revolution, or a fearless investigator, or a righteous protestor – that when you’re online, you’re someone important, doing important things during an important time.
But this, for the most part, is an illusion. In reality, you’re toiling anonymously in an attention factory, while billionaire overseers mock your efforts and celebrate their growing net worths.
After troubling national events, there’s often a public conversation about the appropriate way to respond. Here’s one option to consider: Quit using these social platforms. Find other ways to keep up with the news, or spread ideas, or be entertained. Be a responsible grown-up who does useful things; someone who serves real people in the real world.
To save civil society, we need to end our decade-long experiment with global social platforms. We tried them. They became dark and awful. It’s time to move on.
Enough is enough.
September 8, 2025
On the Reverse Flynn Effect
Last fall, a Norwegian psychology professor named Lars Dehli was asked to give a lecture on intelligence. It had been a while since he had taught the topic, so he looked forward to revisiting it. As he explained in an essay about the experience, he decided to start the lecture by discussing the so-called Flynn Effect—the well-known phenomenon, first observed by James Flynn, whereby measured IQ scores have been steadily increasing since World War II. “It’s always fun to tell students that their generation is the smartest people who have ever lived,” Dehli wrote.
But as he gathered data to build an up-to-date chart, he was “very surprised” by what he discovered: “IQ has actually started to fall.”
Dehli was not the first person to notice this decline. In recent years, a growing number of researchers have been documenting what has become known as the Reverse Flynn Effect. Consider, for example, a recent paper published in the journal Intelligence that studied IQ scores over time in an American population. It found a steady decline in almost every intelligence metric studied as part of a 35-item assessment.
Here’s a chart that shows these declines broken out by education level:
[Chart: declines in IQ scores, broken out by education level]
There’s no consensus on the causes of the Reverse Flynn Effect. But in a recent podcast appearance, James Marriott, a critic and columnist for The Times of London, summarized a hypothesis that has been gaining traction: as we switch our information consumption from print to digital devices, our ability to think deeply degrades.
As Marriott explains:
“Print requires us to make a logical case for a subject. A really significant feature of books is that if you make a case in print, you have to make it logically add up. You can’t just assert things in the way you can on TikTok or on YouTube…print privileges a whole way of thinking and a whole way of processing the world that is logical, that is more rational, that is more dense information, that is more intellectually challenging. If you lose these things in our culture, which I think we really are in the process of losing them, it’s not surprising that people are getting stupider…and that we seem to find that IQ is declining.”
The data on the Reverse Flynn Effect includes several pieces of evidence that support Marriott’s claims. The IQ reversal, for example, seems to begin right around 2010—the point at which smartphones began their rapid ascent to ubiquity. In addition, according to that study (conducted by researchers at Northwestern), the demographic suffering the steepest declines is 18- to 22-year-olds, who also happen to be the heaviest users of smartphones.
As with most psychological findings, it is unlikely that we will ever fully attribute this effect to a single, specific cause. But based on common sense and lived experience, there’s certainly a ring of truth to this device hypothesis.
It’s grown standard to say things like, “my phone is making me so dumb!”, but this is often intended to be a figure of speech; a self-deprecating shorthand for the reality that the things we do on our phone are dumb, or that we spend less time doing “smart” activities than we used to. If these technological interpretations of the Reverse Flynn Effect hold up, it might turn out that this quip is way more literal than we may have originally assumed.
August 31, 2025
Does Work-Life Balance Make You Mediocre?
Last month, a 22-year-old entrepreneur named Emil Barr published a Wall Street Journal op-ed boasting a provocative title: “‘Work-Life Balance’ Will Keep You Mediocre.”
He opens with a spicy take:
“I’m 22 and I’ve built two companies that together are valued at more than $20 million…When people ask how I did it, the answer isn’t what they expect—or want—to hear. I eliminated work-life balance entirely and just worked. When you front-load success early, you buy the luxury of choice for the rest of your life.”
As Barr elaborates, when starting his first company, he slept only three and a half hours per night. “The physical and mental toll was brutal: I gained 80 pounds, lived on Red Bull and struggled with anxiety,” he writes. “But this level of intensity was the only way to build a multimillion-dollar company.”
He ends the piece with a wonderfully cringe-inducing flourish. “I plan to become a billionaire by age 30,” he writes. “Then I will have the time and resources to tackle problems close to my heart like climate change, species extinction and economic inequality.”
(Hold for applause.)
It’s easy to mock Barr’s twenty-something bravado, even if I do have to be careful not to be the pot calling the kettle black (ahem).
Yet, some of this knee-jerk mockery might stem from the uncomfortable realization that beneath this performative busyness, there may lie a kernel of truth. Are we forfeiting our opportunity to make a meaningful impact with our work if we prioritize balance too much? As NYU professor Suzy Welch noted, “I do give [Barr] points for saying something I only mutter to my M.B.A. students …You cannot well-being yourself to wealth.”
To help address these fears, let’s turn to the advice of another twenty-something: me. In an essay I published when I was all of 27—around the time I was finishing my doctoral dissertation at MIT—I wrote the following:
“I found writing my thesis to be similar to writing my books. It’s an exercise in grit: You have to apply hard focus, almost every day, over a long period of time.
To me, this is the definition of what I call hard work. The important point, however, is that the regular blocks of hard focus that comprise hard work do not have to be excessively long. That is, there’s nothing painful or unsustainable about hard work. With only a few exceptions, for example, I was easily able to maintain my fixed 9 to 5:30 schedule while writing my thesis.
By contrast, the work schedule [followed by many graduate students] meets the definition of what I call hard to do work. Working 14 hours a day, with no break, for months on end, is very hard to do! It exhausts you. It’s painful. It’s impossible to sustain.
I’m increasingly convinced that a lot of student stress is caused by a failure to recognize the difference between these two work types. Students feel that big projects should be hard, so hard to do habits seem a natural fit.
I am hoping that by explicitly describing the alternative of doing plain hard work, I can help convince you that the hard to do strategy is a terrible way to tackle large…challenges.”
I gave that article a simple, declarative title: Focus Hard. In Reasonable Bursts. One Day at a Time.
This strategy has continued to serve me well. I’m now 43 years old and, I suppose, still managing to avoid mediocrity—all while continuing to rarely work past 5:30 p.m. I’m not willing to sacrifice all the other things I care about in order to grind.
Barr is still young, and his body is resilient enough to get away with his hustle for a while longer. I hope, however, that those who found his message appealing might also hear mine. Deep results require disciplined, relentless action over a long period of time, and this is a very different commitment than the type of unfocused freneticism lionized by Barr. I work hard almost every day. But those days are rarely hard to get through. This distinction matters.
August 17, 2025
What if AI Doesn’t Get Much Better Than This?
In the years since ChatGPT’s launch in late 2022, it’s been hard not to get swept up in feelings of euphoria or dread about the looming impacts of generative AI. This reaction has been fueled, in part, by the confident declarations of tech CEOs, who have veered toward increasingly bombastic rhetoric.
“AI is starting to get better than humans at almost all intellectual tasks,” Anthropic CEO Dario Amodei recently told Anderson Cooper. He added that half of entry-level white collar jobs might be “wiped out” in the next one to five years, creating unemployment levels as high as 20%—a peak last seen during the Great Depression.
Meanwhile, OpenAI’s Sam Altman said that AI can now rival the abilities of a job seeker with a PhD, leading one publication to plaintively ask, “So what’s left for grads?”
Not to be outdone, Mark Zuckerberg claimed that superintelligence is “now in sight.” (His shareholders hope he’s right, as he’s reportedly offering compensation packages worth up to $300 million to lure top AI talent to Meta.)
But then, two weeks ago, OpenAI finally released its long-awaited GPT-5, a large language model that many had hoped would offer leaps in capabilities, comparable to the head-turning advancements introduced by previous major releases, such as GPT-3 and GPT-4. But the resulting product seemed to be just fine.
GPT-5 was marginally better than previous models in certain use cases, but worse in others. It had some nice new usability updates, but others that some found annoying. (Within days, more than 4,000 ChatGPT users signed a change.org petition asking OpenAI to make their previous model, GPT-4o, available again, as they preferred it to the new release.) An early YouTube reviewer concluded that GPT-5 was a product that “was hard to complain about,” which is the type of thing you’d say about the iPhone 16, not a generation-defining technology. AI commentator Gary Marcus, who had been predicting this outcome for years, summed up his early impressions succinctly when he called GPT-5 “overdue, overhyped, and underwhelming.”
This all points to a critical question that, until recently, few would have considered: Is it possible that the AI we are currently using is basically as good as it’s going to be for a while?
In my most recent article for The New Yorker, which came out last week, I sought to answer this question. In doing so, I ended up reporting on a technical narrative that’s not widely understood outside of the AI community. The breakthrough performance of the GPT-3 and GPT-4 language models was due to improvements in a process called pretraining, in which a model digests an astonishingly large amount of text, effectively teaching itself to become smarter. Both of these models’ acclaimed improvements were caused by increasing their size as well as the amount of text on which they were pretrained.
At some point after GPT-4’s release, however, the AI companies began to realize that this approach was no longer as effective as it once was. They continued to scale up model size and training intensity, but saw diminishing returns in capability gains.
In response, starting around last fall, these companies turned their attention to post-training techniques, a form of training that takes a model that has already been pretrained and then refines it to do better on specific types of tasks. This allowed AI companies to continue to report progress on their products’ capabilities, but these new improvements were now much more focused than before.
Here’s how I explained this shift in my article:
“A useful metaphor here is a car. Pre-training can be said to produce the vehicle; post-training soups it up. [AI researchers had] predicted that as you expand the pre-training process you increase the power of the cars you produce; if GPT-3 was a sedan, GPT-4 was a sports car. Once this progression faltered, however, the industry turned its attention to helping the cars that they’d already built to perform better.”
The result was a confusing series of inscrutably named models—o1, o3-mini, o3-mini-high, o4-mini-high—each with bespoke post-training upgrades. These models boasted widely publicized increases on specific benchmarks, but no longer the large leaps in practical capabilities we once expected. “I don’t hear a lot of companies using AI saying that 2025 models are a lot more useful to them than 2024 models, even though the 2025 models perform better on benchmarks,” Gary Marcus told me.
The post-training approach, it seems, can lead to incrementally better products, but not the continued large leaps in ability that would be necessary to fulfill the tech CEOs’ more outlandish predictions.
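To make the pretraining/post-training distinction concrete, here's a deliberately tiny toy sketch in Python. The corpora, the weighting, and the bigram "model" are all invented for illustration; real pretraining and post-training (fine-tuning, reinforcement learning from human feedback, and so on) are vastly more complex. The sketch only captures the shape of the two phases: one broad, expensive pass that builds general behavior, followed by a cheaper, narrower refinement that reshapes it.

```python
from collections import Counter, defaultdict

def train(model, text, weight=1):
    # Count how often each character follows each other character.
    for a, b in zip(text, text[1:]):
        model[a][b] += weight

def predict(model, ch):
    # Return the most likely next character after `ch`.
    following = model[ch]
    return following.most_common(1)[0][0] if following else None

model = defaultdict(Counter)

# Phase 1: "pretraining" on a broad corpus teaches general patterns.
broad_corpus = "the cat sat on the mat. the dog ran to the park. " * 50
train(model, broad_corpus)
print("after pretraining,  'p' ->", predict(model, "p"))   # 'a' (from "park")

# Phase 2: "post-training" on a small, task-specific corpus, weighted
# heavily, reshapes behavior without redoing the broad phase.
task_corpus = "theorem: the proof follows. "
train(model, task_corpus, weight=200)
print("after post-training, 'p' ->", predict(model, "p"))  # 'r' (from "proof")
```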
None of this, of course, implies that generative AI tools are worthless. They can be very cool, especially when used to help with computer programming (though maybe not as much as some thought), or to conduct smart searches, or to power custom tools for making sense of large quantities of text. But this paints a very different picture from one in which AI is “better than humans at almost all intellectual tasks.”
For more details on this narrative, including a concrete prediction for what to actually expect from this technology in the near future, read the full article. But in the meantime, I think it’s safe, at least for now, to turn your attention away from the tech titans’ increasingly hyperbolic claims and focus instead on things that matter more in your life.
August 3, 2025
On Additive and Extractive Technologies
A reader recently sent me a Substack post they thought I might like. “I bought my kids an old-school phone to keep smartphones out of their hands while still letting them chat with friends,” the post’s author, Priscilla Harvey, writes. “But it’s turned into the sweetest, most unexpected surprise: my son’s new daily conversations with his grandmothers.”
As Harvey continues, her son has adopted the habit of stretching out on the couch, talking to his grandmother on a retro rotary-style phone, the long cable stretching across the room. “There’s no scrolling, no distractions, no comparisons, no dopamine hits to chase,” she notes. “Instead he is just listening to stories, asking questions, and having the comfort of knowing someone who loves him is listening on the other end of the line.”
The post’s surface message is one about kids and technology. Harvey defiantly pushed back against the culture of weary resignation surrounding our youth and phone use, and discovered something sacred.
But I think there’s a more general idea lurking here as well.
The telephone, in its original hard-plastic, curly-wired form, is an example of what we might call an additive technology. Its goal is to take something you value—like talking to people you know—and make this activity easier and more accessible. You want to talk to your grandmother? Dial her number, and her voice fills your ear, clear and immediate. The phone seeks strictly to add value to your life.
Now compare this to Instagram. The value proposition is suddenly muddled. You might enjoy aspects of this app: the occasional diversion, the rare update from a cherished friend. But with these joys come endless sorrows as well. The scrolling can become worryingly addictive, while the content tends to devolve into a digital slurry—equal parts mind-numbing and anxiety-inducing.
Unlike the straightforward benefits of a landline, it soon becomes clear that this tool doesn’t have your best interests as its primary goal. It’s using you; making itself just compelling enough that you’ll pick it up, at which point it can monetize every last ounce of your time and data. It’s what we might call an extractive technology, as it seeks to extract value from you instead of providing it.
My philosophy of techno-selectionism builds on a simple belief: we must become significantly more critical and choosy about the tools we allow into our lives. This goal becomes complicated when we filter our choices based solely on whether something can plausibly offer us any benefit. Nearly everything passes that low bar.
But if we distinguish between additive and extractive technologies, clarity emerges. The key is not whether that app, device, or site is flashy or potentially cool. What matters is whose interest it ultimately serves. If it’s not our own, why bother? Life’s too short to miss out on time on the phone with grandma.
July 27, 2025
On Engineered Wonder
In the wake of my recent (and inaugural) visit to Disneyland, I read Richard Snow’s history of the park, Disney’s Land. Early in the book, Snow tells a story that I hadn’t heard before. It fascinated me—not just for its details, but also, as I’ll soon elaborate, for its potential relevance to our current moment.
The tale begins in 1948. According to Snow, Disney’s personal nurse and informal confidant, Hazel George, had become worried. “[She] began to sense that her boss was sinking into what seemed to her to be a dangerous depression,” Snow writes. “Perhaps even heading toward what was then called a nervous breakdown.”
The sources of this distress were obvious. Disney’s studio hadn’t had a hit since Bambi’s release in 1942, and the loss of the European markets during the war, as well as the economic uncertainty that followed in peacetime, had strained the company’s finances. Meanwhile, during this same period, Disney faced an animator strike that he took as a personal betrayal. “It seemed again to just be pound, pound, pound,” writes Snow. “Disney was often aggressive, abrupt, and when not angry, remote.”
Hazel George, however, had a solution. She knew about Disney’s childhood fascination with steam trains, so it caught her attention when she saw an advertisement in the paper for the Chicago Railroad Fair, which would feature exhibits from thirty different railway lines built out over fifty acres on the shore of Lake Michigan. She suggested Disney take a vacation to see the fair. He loved the idea.
In Chicago, entranced by what he encountered, Disney felt a spark of the creative enthusiasm that had been missing throughout the war years. He just needed to find a way to harness it. Serendipitously, upon returning to Los Angeles, one of his animators, Ward Kimball, introduced him to a group of West Coast train enthusiasts who were building scale models of functioning steam trains large enough for an adult to ride on (think: cars roughly the length of a child’s wagon).
This, Disney decided, is what he needed to do.
In 1949, Disney and his wife, Lillian, bought a five-acre plot of land on Carolwood Drive in the Holmby Hills neighborhood of LA, to build a new house. They chose the location in large part because Disney thought its layout would be perfect for his own scale railroad project.
Over the next year, he worked with the machine shops at his studio to help construct his scale trains and with a team of landscapers to build out the track and its surroundings. When complete, Disney’s Carolwood Pacific Railroad, as he called it, included a half-mile of right-of-way that circled the house and yard, including a 46-foot-long trestle bridge and a 90-foot-long tunnel dug under his wife’s flower bed—complete with an S-turn shape so that you couldn’t see the other end upon entering. His rolling stock included his 1:8 scale steam locomotive, called the Lilly Belle, six cast-metal gondolas, two boxcars, two stock cars, a flatcar, and a wooden caboose decorated inside with miniature details like a twig-sized broom and tiny potbelly stove that could actually be lit.

As Snow tells it, this project re-energized Disney. The more he worked on the line, the more ideas began to flow for his company. Soon, one such idea began to dominate all the others. In 1953, Disney abruptly shut down the Carolwood Pacific. It had accomplished its goal of helping him rediscover his creative inspiration, but now he had a bigger project to pursue; one that would dominate the final chapter of his career and provide him endless fascination and enthusiasm: he would build a theme park.
As Snow concludes: “Of all the influences that helped shape Disneyland, the railroad is the seminal one. Or, rather, a railroad. One Disney owned.”
~~~
My term for what Disney achieved in building the Carolwood Pacific Railroad is engineered wonder. More generally, engineered wonder is when you take something that sparks a genuine flare of interest, and you pursue it to a degree that’s remarkable (or, depending on who you ask, perhaps even absurd). Such projects are not done for money, or advancement, or respect, but instead just because they fascinate you, and you want to amplify that feeling as expansively as possible.
This brings me back to my promised connection to our current moment. In the early 1950s, Disney deployed engineered wonder to escape the creativity-sapping economic doldrums created by wartime uncertainty. Seventy-five years later, I see a more widely relevant use for this strategy: escaping the digital doldrums created by mediating too many of our experiences through screens.
I increasingly worry that as we live more and more of both our personal and professional lives in the undifferentiated abstraction of the digital, we lose touch with what it’s like to grapple with the joys and difficulties of the real world: to feel real awe, or curiosity, or fascination, and not just an algorithmically-optimized burst of emotion; to see our intentions manifest concretely in the world, and not just mechanically measured by view counts and likes.
Engineered wonder offers an escape from this state. It reawakens our nervous systems to what it’s like to engage with the non-digital. It teaches our brains to crave the real sensations and reactions that our screens can only simulate. It’s a way to jumpstart a more exciting chapter in our lives.
During Disney’s era, the Carolwood Pacific Project likely seemed extreme to most people he encountered. Today, this extremeness might be exactly what we need.