Brian Potter's Blog

November 22, 2025

Reading List 11/22/25

USS George HW Bush under construction at Newport News shipyard.

Welcome to the reading list, a weekly roundup of news and links related to buildings, infrastructure, and industrial technology. This week we look at the ship failure that caused the Francis Scott Key Bridge collapse, the boring part of Bell Labs, a more efficient way of making antimatter, underground nuclear reactors, and more. Roughly 2/3rds of the reading list is paywalled, so for full access become a paid subscriber.

Francis Scott Key Bridge Collapse

I normally think of extreme sensitivity to small failures as a property of very high performance engineered objects – things like a jet engine catastrophically failing due to a pipe wall being a few fractions of a millimeter too thin. But other complex engineered systems can also be susceptible to the right (or wrong) sort of very small failure. The National Transportation Safety Board has a report out on what caused the MV Dali containership to lose power and crash into the Francis Scott Key Bridge in Baltimore in 2024. The culprit? A label on a single wire, positioned slightly wrong, which prevented the wire from being firmly connected. When the wire came loose, the ship lost power. Via the NTSB:

At Tuesday’s public meeting at NTSB headquarters, investigators said the loose wire in the ship’s electrical system caused a breaker to unexpectedly open — beginning a sequence of events that led to two vessel blackouts and a loss of both propulsion and steering near the 2.37-mile-long Key Bridge on March 26, 2024. Investigators found that wire-label banding prevented the wire from being fully inserted into a terminal block spring-clamp gate, causing an inadequate connection.

The NTSB also has a video on its YouTube channel showing exactly what went wrong with the wire.

Apple and 3D printing titanium

Apple has an interesting piece on their use of 3D printing for their titanium-bodied watches. 3D printing is rarely used for large-volume production, due to its higher unit costs compared to other fabrication technologies. Apple seems to be using 3D printing on its watch bodies for two reasons. One is that because 3D printing is additive rather than subtractive (machining down a titanium forging, say), there’s less material waste, which they consider beneficial for decarbonization reasons. The other is that 3D printing makes it possible to fabricate part geometries that wouldn’t be possible using other fabrication methods.


Apple 2030 is the company’s ambitious goal to be carbon neutral across its entire footprint by the end of this decade, which includes the manufacturing supply chain and lifetime use of its products. Already, all of the electricity used to manufacture Apple Watch comes from renewable energy sources like wind and solar.


Using the additive process of 3D printing, layer after layer gets printed until an object is as close to the final shape needed as possible. Historically, machining forged parts is subtractive, requiring large portions of material to be shaved off. This shift enables Ultra 3 and titanium cases of Series 11 to use just half the raw material compared to their previous generations.


“A 50 percent drop is a massive achievement — you’re getting two watches out of the same amount of material used for one,” Chandler explains. “When you start mapping that back, the savings to the planet are tremendous.”


In total, Apple estimates more than 400 metric tons of raw titanium will be saved this year alone thanks to this new process.


The boring part of Bell Labs

Bell Labs, as I’ve noted several times, is famous for the number of world-changing inventions and scientific discoveries it generated over its history. It’s the birthplace of the transistor, the solar PV cell, and information theory, and it has accumulated more Nobel Prizes than any other industrial research lab. But the scientific breakthroughs and world-changing inventions were a small part of what Bell Labs did. Most people who worked there were engaged in the more prosaic work of making the telephone system work better and more efficiently. Elizabeth Van Nostrand has an interesting interview with her father, who worked in this “boring” part of Bell Labs:


Most calls went through automatically, e.g. if you knew the number. But some would need an operator. Naturally, the companies didn’t want to hire more operators than they needed to. The operating company would do load measurements, and found that the number of calls that needed an operator followed a Poisson distribution (so the inter-arrival times were exponential).


The length of time an operator took to service the call followed an exponential distribution. In theory, one could use queuing theory to get an analytical answer to how many operators you needed to provide to get reasonable service. However, there was some feeling that real phone traffic had rare but lengthy tasks (the company’s president wanted the operator to call around a number of shops to find his wife so he could make plans for dinner (this is 1970)) that would be added on top of the regular Poisson/exponential traffic and these special calls might significantly degrade overall operator service.


I turned this into my Master’s thesis. Using a simulation package called GPSS (General Purpose Simulation System, which I was pleasantly surprised to find still exists) I ran simulations for a number of phone lines and added different numbers of rare phone calls that called for considerable amounts of operator time. What we found was that occasional high-demand tasks did not disrupt the system and did not need to be planned for.
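This kind of experiment is easy to replay today without GPSS. Below is a minimal Python sketch of the setup described: a pool of operators, Poisson call arrivals, exponential service times, and a small fraction of rare, very long calls layered on top. All parameter values are invented for illustration (they are not taken from the thesis), and how much the rare calls degrade waiting times will depend on the numbers you pick.

import heapq
import random

def simulate_operators(num_operators=5, calls_per_minute=3.0,
                       mean_service_min=1.0, rare_prob=0.01,
                       rare_mean_min=30.0, num_calls=200_000, seed=0):
    """Average wait for an operator, under Poisson arrivals and
    exponential service times plus occasional rare, lengthy calls."""
    rng = random.Random(seed)
    free_at = [0.0] * num_operators   # when each operator next frees up
    heapq.heapify(free_at)
    now = 0.0
    total_wait = 0.0
    for _ in range(num_calls):
        now += rng.expovariate(calls_per_minute)            # next arrival
        if rng.random() < rare_prob:
            service = rng.expovariate(1.0 / rare_mean_min)  # rare long task
        else:
            service = rng.expovariate(1.0 / mean_service_min)
        earliest = heapq.heappop(free_at)   # next operator to come free
        start = max(now, earliest)          # caller waits if all are busy
        total_wait += start - now
        heapq.heappush(free_at, start + service)
    return total_wait / num_calls

# Baseline traffic vs. the same traffic with rare long calls added:
print(simulate_operators(rare_prob=0.0))
print(simulate_operators(rare_prob=0.01))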


Transit timelines

Transit timelines is a very cool website that has transit system maps for over 300 different cities, going back to the 19th century. For each city you can step through time in five year increments to look at the extent of the transit system, and compare the transit systems of multiple cities for a given period of time.


November 20, 2025

How ASML Got EUV

I am pleased to cross-post this piece with Factory Settings, the new Substack from IFP. Factory Settings will feature essays from the inaugural CHIPS team about why CHIPS succeeded, where it stumbled, and its lessons for state capacity and industrial policy. You can subscribe here.

An EUV tool at Lawrence Livermore National Lab in the 1990s.

Moore’s Law, the observation that the number of transistors on an integrated circuit tends to double every two years, has progressed in large part thanks to advances in lithography: techniques for creating microscopic patterns on silicon wafers. The steadily shrinking size of transistors — from around 10,000 nanometers in the early 1970s to around 20-60 nanometers today — has been made possible by developing lithography methods capable of patterning smaller and smaller features.1 The most recent advance in lithography is the adoption of Extreme Ultraviolet (EUV) lithography, which uses light at a wavelength of 13.5 nanometers to create patterns on chips.

EUV lithography machines are famously made by just a single firm, ASML in the Netherlands, and determining who has access to the machines has become a major geopolitical concern. However, though they’re built by ASML, much of the research that made the machines possible was done in the US. Some of the most storied names in US research and development — DARPA, Bell Labs, IBM Research, Intel, the US National Laboratories — spent decades of research and hundreds of millions of dollars to make EUV possible.

So why, after all that effort by the US, did EUV end up being commercialized by a single firm in the Netherlands?

How semiconductor lithography works

Briefly, semiconductor lithography works by selectively projecting light onto a silicon wafer using a mask. When light shines through the mask (or reflects off the mask in EUV), the patterns on that mask are projected onto the silicon wafer, which is covered with a chemical called photoresist. When the light strikes the photoresist, it either hardens or softens the photoresist (depending on the type). The wafer is then washed, removing any softened photoresist and leaving behind hardened photoresist in the pattern that needs to be applied. The wafer will then be exposed to a corrosive agent, typically a plasma, removing material from the wafer in the places where the photoresist has been washed away. The remaining hardened photoresist is then removed, leaving only an etched pattern in the silicon wafer. The silicon wafer will then be coated with another layer of material, and the process will repeat with the next mask. This process will be repeated dozens of times as the structure of the integrated circuit is built up, layer by layer.

Early semiconductor lithography was done using mercury lamps that emitted light of 436 nanometers wavelength, at the low end of the visible range. But as early as the 1960s, it was recognized that as semiconductor devices continued to shrink, the wavelength of light would eventually become a binding constraint due to a phenomenon known as diffraction. Diffraction is when light spreads out after passing through a hole, such as the openings in a semiconductor mask. Because of diffraction, the edges of an image projected through a semiconductor mask will be blurry and indistinct; as semiconductor features get smaller and smaller, this blurriness eventually makes it impossible to distinguish them at all.
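The standard way to quantify this limit is the Rayleigh resolution criterion, which relates the smallest printable feature (the critical dimension, CD) to the wavelength of the light and the numerical aperture (NA) of the projection optics, with k_1 an empirical process factor typically somewhere between about 0.25 and 0.8:

\[ \mathrm{CD} = k_1 \, \frac{\lambda}{\mathrm{NA}} \]

Printing smaller features thus means shortening the wavelength, raising the numerical aperture, or driving down the effective k_1; the resolution-enhancement techniques discussed below are all ways of doing the latter two.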

The search for better lithography

The longer the wavelength of light, the greater the amount of diffraction. To keep diffraction from eventually limiting semiconductor feature sizes, researchers began investigating alternative lithography techniques in the 1960s.

One method considered was to use a beam of electrons, rather than light, to pattern semiconductor features. This is known as electron-beam lithography (or e-beam lithography). Just as an electron microscope uses a beam of electrons to resolve features much smaller than a microscope which uses visible light, electron-beam lithography can pattern features much smaller than light-based lithography (“optical lithography”) can. The first successful electron lithography experiment was performed in 1960, and IBM extensively developed the technology from the 1960s through the 1990s. IBM introduced its first e-beam lithography tool, the EL-1, in 1975, and by the 1980s had 30 e-beam systems installed.

E-beam lithography has the advantage of not requiring a mask to create patterns on a wafer. However, the drawback was that it’s very slow, at least “three orders of magnitude slower than optical lithography”: a single 300mm wafer takes “many tens of hours” to expose using e-beam lithography. Because of this, while e-beam lithography is used today for things like prototyping (where not having to make a mask first makes iterative testing much easier) and for making masks, it never displaced optical lithography for large-volume wafer production.

Another lithography method considered by semiconductor researchers was the use of X-rays. X-rays have a wavelength range of just 10 to 0.01 nanometers, allowing for extremely small feature sizes. As with e-beam lithography, IBM extensively developed X-ray lithography (XRL) from the 1960s through the 1990s, though they were far from the only ones. Bell Labs, Hughes Aircraft, Hewlett Packard, and Westinghouse all worked on XRL, and work on it was funded by DARPA and the US Naval Research Lab.

For many years X-ray lithography was considered the clear successor technology to optical lithography. In the late 1980s there was concern that the US was falling behind Europe and Japan in developing X-ray lithography, and by the 1990s IBM alone is estimated to have invested more than a billion dollars in the technology. But as with e-beam lithography, XRL never displaced optical lithography for large-volume production, and it’s only been used for relatively niche applications. One challenge was creating a source of X-rays. This largely had to be done using particle accelerators called synchrotrons: large, complex pieces of equipment which were typically only built by government labs. IBM, committed to developing X-ray lithography, ended up commissioning its own synchrotron (which cost on the order of $25 million) in the late 1980s.

Part of the reason that technologies like e-beam and X-ray lithography never displaced optical lithography is that optical lithography kept improving, surpassing its predicted limits again and again. Researchers had been forecasting the end of optical lithography since the 1970s, but through various techniques, such as immersion lithography (using water between the lens and the wafer), phase-shift masking (designing the mask to deliberately create interference in the light waves to increase the contrast), multiple patterning (using multiple exposures for a single layer), and advances in lens design, the performance of optical lithography kept rising, repeatedly pushing back the need to transition to a new lithography technology. The unexpectedly long life of optical lithography is captured by Sturtevant’s Law: “the end of optical lithography is 6 – 7 years away. Always has been, always will be.”

Advances in optical lithography lenses over time, via Bruning 2007. In addition to more complex lenses, shorter wavelengths of light were used.

The rise of EUV

In the early 1980s, Hiroo Kinoshita, a researcher at Japan’s Nippon Telegraph and Telephone (NTT), was researching X-ray lithography, but was becoming disillusioned by its numerous difficulties. The X-ray lithography technology being used was known as “X-ray proximity lithography” or XPL. Whereas optical lithography passes light through a lens to reduce the size of the image projected onto the silicon wafer, no known materials could make a reduction lens for X-rays, so X-rays were projected directly onto the wafers without any sort of lens reduction. In part because of the lack of reduction — which meant that any imperfections in the mask wouldn’t be scaled down when projected onto the wafer — making masks for XPL proved exceptionally difficult.

However, while it’s not possible to focus X-rays with a lens, it is possible to reflect certain X-ray wavelengths with a mirror. A normal mirror will only reflect X-rays at very shallow angles, making it very hard to use them for a practical lithography system (the requirement of a shallow angle would make such a system gigantic); at steeper angles, X-rays will simply pass through the mirror. However, by constructing a special mirror from alternating layers of different materials, known as a “multilayer mirror”, light near the X-ray region of the spectrum can be reflected at much steeper angles. Multilayer mirrors use layers of different materials with different indices of refraction (how much light bends when entering it) to create constructive interference — each layer boundary reflects a small amount of light, which (when properly designed) adds together with the reflection from the other layers. (Anti-reflective coatings use a similar principle, but instead use multiple layers to create destructive interference to eliminate reflections.)
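The condition for these layer reflections to reinforce is, to a first approximation (ignoring a small refraction correction), just Bragg’s law: for a multilayer whose repeating bilayer has thickness d and light arriving at grazing angle \theta, the reflections add constructively when

\[ m\lambda = 2d \sin\theta, \qquad m = 1, 2, 3, \ldots \]

At near-normal incidence (\sin\theta \approx 1) this means the bilayer period must be about half the wavelength, roughly 7 nanometers for 13.5-nanometer light, which is part of why these mirrors demand such extraordinary deposition precision.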

The first multilayer mirrors that could reflect X-rays were built in the 1940s, but they were impractical because the mirrors were made from gold and copper, which quickly diffused into each other, degrading the mirror. But by the 1970s and 80s, the technology for making these constructive interference-creating mirrors had dramatically improved. In 1972 researchers at IBM successfully built a 10-layer multilayer mirror that reflected a significant fraction of light in the 5 to 50 nanometer region, and in 1981 researchers at Stanford and the Jet Propulsion Laboratory built a 76-layer mirror from alternating layers of tungsten and carbon. A few years later researchers at NTT also successfully built a multilayer tungsten and carbon film, and based on their success Kinoshita, the researcher at NTT, began a project to leverage these multilayer mirrors to create a lithography system. In 1985 his team successfully projected an image using what were then called “soft X-rays” (light in roughly the 2 nanometer to 20 nanometer range) reflected off of multilayer mirrors for the first time.2 That same year, researchers at Stanford and Berkeley published work showing that a multilayer mirror made from molybdenum and silicon could reflect a very large fraction of light near the 13 nanometer wavelength. Because X-rays in a lithography tool will bounce off of multiple mirrors (a modern EUV tool might have 10 mirrors), reflecting a large portion of them is key to making a lithography tool practical; too little reflection and the light will be too weak by the time it reaches the wafer.
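The arithmetic here is unforgiving: if each mirror reflects a fraction R of the light hitting it, a tool whose optical path contains N mirrors delivers only R^N of the source’s output to the wafer,

\[ \text{delivered fraction} = R^N, \qquad 0.70^{10} \approx 0.028 \]

so even with mirrors at the roughly 70 percent reflectivity achievable with molybdenum-silicon multilayers, a ten-mirror system loses about 97 percent of the light between source and wafer.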

Initially people in the field were skeptical about the prospects of a reflective X-ray lithography system. When presenting this research in Japan, Kinoshita noted that his audience was “highly skeptical of his talk” and that they were “unwilling to believe that an image had actually been made by bending X-rays”. The same year, when Bell Labs researchers suggested to the American government that soft X-rays with multilayer mirrors could be used to create a lithography system, they received an “extremely negative reaction”; reviewers argued that “even if each of the components and subsystems could be fabricated, the complete lithography system would be so complex that its uptime would be negligible.” When researchers at Lawrence Livermore National Lab, after learning of Kinoshita’s work, presented a paper on their own soft X-ray lithography work in 1988, reception was similarly negative. One paper author noted that “You can’t imagine the negative reception I got at that presentation. Everybody in the audience was about to skewer me. I went home with my tail between my legs…”

Despite the negative reactions, work on soft X-ray lithography continued to advance at NTT, Bell Labs, and Livermore. Kinoshita’s research group at NTT designed a new two-mirror soft X-ray lithography system, and used it to successfully print patterns with features 500 nanometers wide. When presenting this work at a 1989 conference in California, a Bell Labs researcher named Tania Jewell became extremely interested, and “deluged” Kinoshita with questions. The next year, Bell Labs successfully printed a 50 nanometer pattern using soft X-rays. The 1989 conference, and the meeting between NTT and Bell Labs, has been called the “dawn of EUV”.

Work on soft X-ray lithography continued in the 1990s. Early soft X-ray experiments had been done with synchrotron radiation, but a synchrotron would be difficult to make into a practical light source for high-volume production, so researchers looked for alternative ways to generate soft X-rays. One strategy for doing this is to heat certain materials, such as xenon or tin, enough to turn them into a plasma. This can be done using either lasers (creating laser produced plasma, or LPP) or electrical currents (creating discharge produced plasma, or DPP). Development of LPP power sources began in the 1990s, but creating such a system was enormously difficult. Turning material into a plasma generated debris which reduced the life of the extremely sensitive multilayer mirrors, and a “great deal of effort [was] put into designing and testing a variety of debris minimization schemes”. One strategy that proved to be very successful was to minimize the amount of debris by creating a “mass limited target”: minimizing the amount of material to be heated into plasma by emitting it as a series of microscopic droplets. Over time, these and other strategies allowed for longer and longer mirror life.

Another major challenge was manufacturing sufficiently precise multilayer mirrors. In 1990, mirrors could be fabricated with at most around 8 nanometers of precision, but a practical soft X-ray lithography system demanded 0.5 nanometer precision or better. NTT had obtained its first multilayer mirrors from Tinsley (the US firm that had built the ultra-precise mirrors for the Hubble Space Telescope), and with NTT’s encouragement Tinsley was able to fabricate mirrors of 1.5 to 1.8 nanometer accuracy in 1993. Similar work on mirror accuracy was done at Bell Labs (with assistance from researchers at the National Institute of Standards and Technology), and during the 1990s the precision of multilayer mirrors continued to improve.

As work proceeded, a change in name for soft X-ray technology was suggested. “Soft X-ray” was thought to be too close to X-ray proximity lithography, which worked on different principles (i.e., it had no mirrors) and had developed a negative reputation thanks to its difficult development history. So in 1993 the name was changed to Extreme Ultraviolet Lithography, or EUV. The wavelengths being used were at the very bottom of the ultraviolet spectrum, and the name created associations with “Deep Ultraviolet Lithography” (DUV), a lithography technique based on 193-nanometer light, which was then being used successfully.

Organizational momentum behind EUV continued to build. In the early 1990s Sandia National Labs, using technology developed for the Strategic Defense Initiative, partnered with Bell Labs to demonstrate a soft X-ray lithography system using a laser produced plasma. In 1991, Japanese corporations Nikon and Hitachi also began to research EUV technology. That same year, the Defense Advanced Research Projects Agency (DARPA) began to fund lithography development via its Advanced Lithography Program, and by 1996 Sandia National Labs and Lawrence Livermore lab had committed around $30 million to EUV development (with a similar amount contributed by several private companies). In 1992, Intel committed $200 million to the development of EUV, most of which funded research work at Sandia, Livermore and Bell Labs. In 1994, the US formed the National EUV Lithography Program, made up of researchers from the national labs (Livermore, Berkeley, and Sandia), and led by DARPA and the DOE.

EUV-LLC

In 1996, Congress voted to terminate DOE funding for EUV research. Without funding to keep the research community together, the national lab researchers would be reassigned to other tasks, and much of the knowledge around EUV might dissipate. At the time, there were still numerous difficulties with EUV, and it was far from obvious it would be the successor lithography technology: a 1997 lithography task force convened by SEMATECH (a US semiconductor industrial consortium) ranked EUV last of four possible technologies, behind XPL, e-beam lithography, and ion projection lithography.

Despite the uncertainty, Intel placed a bold bet on the future of EUV, and stepped in with around $250 million in funding to keep the EUV research program alive. It formed a consortium known as EUV-LLC, which contracted with the Department of Energy to fund EUV work at Sandia, Berkeley, and Livermore national labs. Other major US firms, including Motorola, AMD, IBM, and Micron, also joined the consortium, but Intel remained the largest and most influential shareholder, the “95% gorilla”. Following the creation of EUV-LLC, Europe and Japan formed their own EUV research consortiums: EUCLIDES in Europe and ASET in Japan.

When EUV-LLC was formed, US lithography companies had been almost completely forced out of the global marketplace. Japanese firms Nikon and Canon held a 40% and 30% share of the market, respectively, and third place was held by an up-and-coming Dutch firm called ASML, which held 20% market share. The members of EUV-LLC, not least Intel, wanted a major foreign lithography firm to join the consortium to help ensure EUV became a global standard. However, the prospect of funding the development of advanced semiconductor technology, only to hand that technology over to a national competitor (especially a Japanese competitor who had so recently been responsible for decimating the US semiconductor industry), wasn’t an easy sell. Nikon declined to participate in EUV-LLC in part due to the resulting controversy, and Canon was ultimately prevented from joining by the US government.

ASML, however, was different. Being located in the Netherlands, it was considered “neutral ground” in the semiconductor wars between the US and Japan. Intel, whose main concern was simply getting access to the next generation of lithography tools, regardless of who produced them, strongly advocated that ASML be allowed to procure a license. (One executive at the US lithography company Ultratech Stepper complained that Intel had “done everything in their power to give the technology to ASML on a silver platter.”) In 1999, ASML was allowed to join EUV-LLC and gain a license for its technology, provided that it used a sufficient quantity of US components in the machines it built and opened a US factory — conditions that it never met.

Left outside of the EUV-LLC consortium, Nikon and Canon never successfully developed EUV technology. And neither did any US firms. Silicon Valley Group, a US lithography tool maker which had licensed EUV technology, was bought by ASML in 2001, and Ultratech Stepper, another US licensee, opted not to pursue it. ASML, in partnership with German optics firm Carl Zeiss, became the only lithography firm to take EUV technology across the finish line.

Conclusion

Over the next several years, EUV-LLC proved to be a huge success, and when the program ended in 2003, it had met all of its technical goals. EUV-LLC had successfully built a test EUV lithography tool, made progress on both LPP and DPP light sources, developed masks that would work with EUV, created better multilayer mirrors, and filed for over 150 patents. Thanks in large part to Intel’s wager, EUV would ultimately become the lithography technology of the future, a technology that would entirely be in the hands of ASML.

This future took much longer to arrive than expected. When the EUV-LLC program concluded in 2003, US semiconductor industry organization SEMATECH stepped in to continue funding work on commercialization. ASML shipped its first prototype EUV lithography tool in 2006, but with very weak DPP power sources. A US company, Cymer (later acquired by ASML), was developing a better power source using a laser-produced plasma, but working out the problems with it took years and required further investment from Intel. Making defect-free EUV masks proved to be similarly difficult. EUV development proved to be so difficult that ASML ultimately required billions of dollars in investment from TSMC, Samsung, and Intel to fund its completion: the three companies invested $1 billion, $1 billion, and $4 billion, respectively in ASML in 2012 in exchange for shares of the company. ASML didn’t ship its first production EUV tool until 2013, but development work on things like the power source (often funded by the US) continued for years afterwards. Intel, worried about the difficulties of getting EUV into high-volume production, made the ultimately disastrous decision to try and push optical lithography technology one more step for its 10 nanometer process.

But today, after decades of development, EUV has arrived. TSMC, Intel, and Samsung, the world’s leading semiconductor fabricators, are all using EUV in production. And they are all using lithography tools built by ASML for it.

An important takeaway from the story of EUV is that developing a technology that works, and successfully competing with that technology in the marketplace, are two different things. Thanks to contributions from researchers around the world, including a who’s who of major US research organizations — DARPA, Bell Labs, the US National Labs, IBM Research — EUV went from unpromising speculation to the next generation of lithography technology. But by the time it was ready, US firms had been almost entirely forced out of the lithography tools market, leaving EUV in the hands of a single European firm to take it across the finish line and commercialize it.

1

Modern semiconductor processes often have names that imply smaller sizes — TSMC’s 7 nm node, Intel’s 10 nm node — but these are essentially just names that don’t correspond with actual feature sizes.

2

Definitions for what light is considered to be “soft x-ray” don’t seem especially consistent. One article notes that “the terms soft X-ray and extreme ultraviolet aren’t well defined.”


November 15, 2025

Reading List 11/15/25

Bristol 188 supersonic research aircraft.

Welcome to the reading list, a weekly roundup of news and links related to buildings, infrastructure, and industrial technology. This week we look at Israel refilling a lake with desalinated seawater, South Korean nuclear subs, ways to make titanium cheap, a “new Bell Labs”, and more. Roughly 2/3rds of the reading list is paywalled, so for full access become a paid subscriber.

Housekeeping items this week:

IFP has started a new substack, Factory Settings, about the CHIPS Act and how it succeeded.

Sea of Galilee

The inaptly named Sea of Galilee is a large lake in Israel, and the supposed location of many of Jesus’ miracles (including him walking on water). The lake supplies around 10% of Israel’s drinking water, but water levels in the lake have declined in recent years.

Via Giame and Artzy 2022.

To try and prevent declining water levels, Israel is now pumping large amounts of desalinated seawater into the lake. Via the Times of Israel:


The Water Authority has started channeling desalinated water to the Sea of Galilee, marking the first ever attempt anywhere in the world to top up a freshwater lake with processed seawater.


The groundbreaking project, years in the making and a sign of both Israel’s success in converting previously unusable water into a vital resource and the rapidly dropping water levels in the country’s largest freshwater reservoir, was quietly inaugurated on October 23.


The desalinated water enters the Sea of Galilee via the seasonal Tsalmon Stream, entering at the Ein Ravid spring, some four kilometers (2.5 miles) northwest of what is Israel’s emergency drinking source.


Firas Talhami, who is in charge of the rehabilitation of water sources in northern Israel for the Water Authority, told The Times of Israel that he expected the project to raise the lake’s level by around 0.5 centimeters (0.2 inches) per month.


The move has also reactivated the previously dried-out spring, allowing visitors to once again paddle down the Tsalmon, which now flows with desalinated water.


Iran drought

Israel isn’t the only Middle Eastern country facing water problems. The capital of Iran, Tehran, is facing an acute water shortage. This crisis has been brewing for months, and has now gotten so bad that the city may become “uninhabitable” if the current drought continues. Via Reuters:


President Masoud Pezeshkian has cautioned that if rainfall does not arrive by December, the government must start rationing water in Tehran.


“Even if we do ration and it still does not rain, then we will have no water at all. They (citizens) have to evacuate Tehran,” Pezeshkian said on November 6.


The stakes are high for Iran’s clerical rulers. In 2021, water shortages sparked violent protests in the southern Khuzestan province. Sporadic protests also broke out in 2018, with farmers in particular accusing the government of water mismanagement.


The water crisis in Iran after a scorching hot summer is not solely the result of low rainfall.


Decades of mismanagement, including overbuilding of dams, illegal well drilling, and inefficient agricultural practices, have depleted reserves, dozens of critics and water experts have told state media in the past days as the crisis dominates the airwaves with panel discussions and debates.


Pezeshkian’s government has blamed the crisis on various factors such as the “policies of past governments, climate change and over-consumption”.


Colorado River negotiations

In US water news, the Colorado River Compact is an agreement that determines how seven southwest states — California, Nevada, Arizona, New Mexico, Utah, Colorado, and Wyoming — divide up water from the Colorado River. The compact allocates a specific amount of water to each state. However, the total amount allocated to the various states exceeds the typical flow of the Colorado River, possibly because allocations were decided during a period of unusually high flow. New agreements have been needed to determine how water is allocated in these conditions, and states don’t always have an easy time coming to an agreement. From the Colorado Sun:


The rules that govern how key reservoirs store and release water supplies expire Dec. 31. They’ll guide reservoir operations until fall 2026, and federal and state officials plan to use the winter months to nail down a new set of replacement rules. But negotiating those new rules raises questions about everything from when the new agreement will expire to who has to cut back on water use in the basin’s driest years.


And those questions have stymied the seven state negotiators for months. In March 2024, four Upper Basin states — Colorado, New Mexico, Utah and Wyoming — shared their vision for what future management should look like. Three Lower Basin states — Arizona, California and Nevada — released a competing vision at the same time. The negotiators have suggested and shot down ideas in the time since, but they have made no firm decisions.


…The Department of the Interior is managing the process to replace the set of rules, established in 2007, that guide how key reservoirs — lakes Mead and Powell — store and release water.


The federal agency plans to release a draft of its plans in December and have a final decision signed by May or June. If the seven states can come to agreement by March, the Department of the Interior can parachute it into its planning process, said Scott Cameron, acting head of the Bureau of Reclamation, during a meeting in Arizona in June.


If they cannot agree, the feds will decide how the basin’s water is managed.


South Korean nuclear submarines

Given South Korea’s proficiency in shipbuilding, and in nuclear reactor construction, it’s always been somewhat surprising to me that South Korea doesn’t build nuclear submarines. Apparently this is due to non-proliferation restrictions that prevent Korea from producing enriched uranium for submarine reactors, as uranium enrichment could also be used to produce nuclear weapons.

Now though, the Trump administration has approved South Korean nuclear submarine construction, apparently as part of a deal where South Korea will invest in US shipbuilding capabilities. From Naval News:


The announcement came following a meeting with various Asian heads of state including South Korean President Lee Jae-Myung in Gyeongju, South Korea. Additional posts by Trump on Truth Social have detailed that the Submarines will be built on U.S soil at the Philadelphia shipyards, which were acquired by the Korean defense firm Hanwha late in 2024.


Subsequently, the construction of Nuclear submarines marks a departure from past efforts, as previous South Korean submarine construction has focused primarily on conventionally powered submarines. In tandem with this, South Korean Nuclear Submarine construction projects have remained in limbo for sometime as the U.S had not given tacit approval until President Trump’s statement.


However, as the Philadelphia Shipyards where construction will take place is not currently equipped to handle the construction of Nuclear Submarines (only commercial vessels have been produced), Hanwha has reportedly invested an additional $5 billion dollars into modernization and preparation. Despite this, there has been a lack of a concrete agreement regarding the development of the shipyards and a plan for the construction of the submarines with no official signature from the South Korean side.


These agreements are the conclusion of a long standing desire for nuclear powered submarines expressed by the South Korean government and military. Naval News has previously reported that subsequent efforts for a Nuclear Submarines have been born of increasingly intense operational needs for endurance and a deterrent towards neighboring nations such as North Korea, China, and Russia.



November 13, 2025

What Is A Production Process?

Corning ribbon machine, via The Henry Ford.

Below is the first chapter of my book, The Origins of Efficiency, available now on Amazon, Barnes and Noble, and Bookshop.

In 1880, Thomas Edison was awarded a patent for his electric incandescent light bulb, marking the beginning of the age of electricity. Although it was the result of thousands of hours of research that took place over decades by Edison and his many predecessors, the ultimate design of Edison’s light bulb was simple, consisting of just a few components: a filament, a thin glass tube in which the filament was mounted, a pair of lead-in wires, a base, and the glass bulb itself.

Until the 20th century, light bulbs were largely manufactured by hand. Workers would run the lead-in wires through the inner glass tube, attach the filament to the lead-in wires, and attach the glass tube to the bulb. A vacuum pump would then suck the air out of the bulb. Initially, this was done by connecting the pump to the top of the bulb, leaving a small tip of glass that had to be cut off. Later, tipless bulbs were developed that had the air removed from the bottom.

Most of this manufacturing process was done in house by Edison’s Electric Light Company, but the production of the glass bulb itself, known as a bulb blank, was outsourced. Edison placed his first order for bulb blanks with the Corning Glass Works company in 1880. The process of making the bulb blanks was fairly straightforward: Glassworkers would mix together sand, lead, and potassium carbonate, along with small quantities of niter, arsenic, and manganese oxide, place the mixture in a crucible, and melt it in a furnace into liquid glass. A worker called a gaffer would then gather a blob of glass on the end of a hollow iron tube and place the blob into a mold in the shape of a light bulb. While the blob was still attached to the iron tube, the gaffer would blow into it to form the body of the bulb, then open the mold and cut the bulb from the end of the tube.

We can draw this series of steps using a process flow diagram, a visual representation of how a process unfolds. See Figure 2 for an example of what the bulb blank process might look like. Making bulb blanks is an example of what we’ll call a production process—a series of steps through which input materials are transformed incrementally into a finished product. Each step in the process induces some change in the input material. The changed material is then passed on to the next step, which makes another change, and so on, until the finished product comes out the other side. In the bulb blank process, sand, lead, and other chemicals are the inputs. These are gradually transformed by heat, chemical reactions, and physical manipulation until a finished bulb blank emerges at the other end.

In turn, this output might be the input to a subsequent process. Bulb blanks, for instance, would then be sent to Edison’s factory to be assembled into complete light bulbs. Likewise, the input materials for the manufacture of bulb blanks were themselves the output of some other production process. Potassium carbonate, for example, was mined from potassium ore and then refined using the Leblanc process.

Outside of the small number of things we can obtain directly from nature, all products of civilization are the result of some sort of production process—some series of transformations that take in raw materials, energy, labor, and information and produce goods and services. At first glance, services might seem far removed from the production of physical goods like cars or shoes, but the same basic model applies. A house cleaner, for example, goes through a specific series of steps—cleaning the bedrooms, then the bathrooms, then the kitchen—using various inputs—labor, electricity, cleaning products—to transform an input—a dirty house—into an output—a clean one. These processes might be comparatively simple, such as the production of light bulb blanks, or exceptionally complex, with hundreds or even thousands of steps. One 19th-century watch factory boasted that its watches “required 3,700 distinct operations to produce,” while a 1940s Cadillac — a relatively simple automobile by modern standards — required nearly 60,000 separate operations.

Even everyday objects can mask a great deal of production complexity. In his book The Toaster Project, Thomas Thwaites disassembles a $7 toaster to find that it contains 404 parts made up of more than a hundred different materials. And if we follow the chain of production further back, to the processes required to make the various input materials (and the processes to make the inputs for those processes, and so on), we find a sprawling mass of complexity for even the simplest products of civilization. In his famous 1958 essay “I, Pencil,” Leonard Read notes that a full accounting of the inputs required to make an ordinary pencil—the steel used to make the tools to harvest the cedar, the ships used to transport the graphite from Sri Lanka to the factory, the agricultural equipment used to grow the castor beans to produce the lacquer—involves the work of millions of people all over the world.

Figure 2. Process flow diagram of a bulb blank production process.

Five factors of the production process

Now that we have a basic model for how things get produced, we can add a bit of detail to the description, identifying five distinct factors of the production process. This slightly more regimented structure will be useful for pinpointing discrete sites of intervention that can improve the efficiency of a production process.

First is the transformation method itself. In bulb blank production, one transformation method is the process of blowing the glass bulbs. Of course, each transformation is itself made up of many steps (gathering the glass on a blowpipe, placing the mold around it, blowing while a worker holds it), which in turn might be made up of substeps (such as individual worker motions). Different situations will call for varying degrees of fidelity in describing a process—the scientific management movement of the early 20th century, for example, spent a great deal of time studying specific worker motions—but it will always be a simplified model that omits many details of what is actually occurring.

The idea of a well-defined transformation or series of transformations is something of a simplification, as there will inevitably be some degree of variation in the specific actions taken during a step. For a machine, this variation will be very small and occur in narrowly defined ways, but the farther we get from modern industrial production processes, the less true this becomes. A person might perform the same step slightly differently each time and modify their technique over time as they get more skilled. And craft production methods often require some degree of deciding what the next step should be. A glassblower blowing bulbs without a mold, for instance, will decide how hard to blow based on how they see the bulb taking shape.

Second, to understand how efficient a production process is, we need some idea of how much time the process takes. It obviously makes a big difference whether the bulb blank factory can produce 10 or 10,000 bulbs a day. Using bulb molds, three workers could produce about 150 bulbs per hour, or roughly 1,500 per day. This is called the production rate. Each step in the process will have its own rate, and these rates may differ from those of other steps. For example, filling the glass crucibles might be done just once a week, even though glassworkers were producing bulb blanks daily.

Third, to determine how much a given production process costs, we need to account for all the direct material inputs and outputs to the process. At the furnace step, raw materials go in and molten glass comes out. At the blowing step, molten glass goes in and a bulb comes out. Depending on how detailed we decide to be, we might also include inputs like the coal that fuels the furnace and outputs like the ash and smoke produced by the furnace. There are labor inputs as well. The blowing step, for instance, requires the labor of two or three workers to gather the glass, work the mold, and cut the finished bulb free.

We also need to account for the indirect inputs—things that aren’t used directly by the process but are nevertheless necessary. A factory’s rent can’t be directly attributed to any particular operation within the factory, but the building is still an important input to the process. We can account for this cost by attributing some fraction of it to each step. Similarly, we can assign some fraction of the cost of the equipment, administration, insurance, and any other overhead costs to each step in the process. (The question of how best to assign these indirect costs is an involved area of accounting, but broadly speaking, these costs will be spread over the amount of output we produce.)

Fourth, to understand whether the process is efficiently arranged, we need to keep track of how much material is in the process at any given time. At any point, some material is actively being worked on and some is waiting to be worked on. In bulb blank production, once the raw materials had been added to the crucible, it might take a while before the glass was gathered by workers and blown into bulbs. If crucibles were filled once a week, there would be about half a week’s worth of molten glass waiting to be turned into bulbs at any given time. Any material that isn’t currently being worked on is considered to be in a buffer of available material. The total amount of material in the system—that is, the combination of what’s in the buffer and what’s being worked on—is collectively known as work in process.
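This bookkeeping has a standard formal statement, Little’s law: in a stable process, the average amount of work in process equals the throughput rate multiplied by the average time a unit of material spends in the system,

\[ \mathrm{WIP} = \text{throughput} \times \text{average time in system}. \]

The half-a-week figure above is just this relation at work: glass produced at a steady rate that waits half a week on average accumulates to half a week’s worth of output sitting in the crucibles.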

Fifth and finally, in evaluating a production process we need to make note of how the output of the process varies. While it’s tempting to think of a step as producing the exact same output every time, there will inevitably be some variation. At times, the process may simply fail. For example, in some cases, the furnace would produce a batch of glass that was unsuitable for bulbs. In other cases, the crucibles that held the molten glass would crack, spilling the glass before it could be turned into bulbs.

But there will also be more subtle sources of variation. For instance, the composition of the glass and the thickness of the bulbs would differ slightly, perhaps imperceptibly, from bulb to bulb. No two bulbs were exactly alike. This discrepancy can be a natural outcome of the process, the result of a disparity in the inputs, or due to variation in the environment in which the process takes place. The quality of the bulb glass, for example, was greatly dependent on the quality of the chemicals used, how well they were mixed, and the temperature of the furnace.

One simple way of characterizing variation is in terms of yield—the fraction of inputs that are successfully transformed into outputs. A yield of 50 percent would characterize a process that is only successful half the time. An unsuccessful transformation might be a complete failure (a bulb falls on the floor and breaks) or one that is simply outside the range of acceptable tolerance (the glass on the bulb was slightly too thin). In many cases, however, it will be useful to have a more detailed characterization of the variation in a process. In the production of light bulb filaments, very slight differences in temperature during the carburizing process resulted in the filament producing different amounts of light. Understanding how the resulting filaments varied was, therefore, necessary to determine how many bulbs of a given illumination could be produced. It might turn out that the variation in illumination could be described by a normal distribution with a particular mean and standard deviation, making it possible to track disruptions to the process by looking at whether values fell outside of the expected range. For now, we’ll just note that variation is an important factor to consider, without settling on a specific way to measure it.
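As a sketch of the kind of calculation this enables (with numbers invented for illustration), if a property like filament illumination follows a normal distribution, the yield within a tolerance window follows directly from the normal CDF, which needs nothing beyond the standard library’s error function:

from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # P(X <= x) for X ~ Normal(mu, sigma), via the error function
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def yield_within_spec(mu, sigma, lo, hi):
    # Fraction of output landing inside the tolerance window [lo, hi]
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

# Hypothetical example: illumination averages 60 units (sd = 2), and
# bulbs between 57 and 63 units are within tolerance.
print(yield_within_spec(60.0, 2.0, 57.0, 63.0))   # ~0.866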

Looking at a single step in the process, we now have five factors that characterize it:

The transformation method itself. For example, the act of blowing molten glass into a mold.

The production rate. For example, how many molds the gaffers can fill in an hour.

The inputs and outputs, along with their associated costs. For example, the molten glass, the gaffer’s wages, and wear and tear on the molds and blowpipes.

The size of the buffer. For example, how much molten glass is stored in the furnace waiting for the gaffer.

The variability of the output. For example, fluctuations in how fast the gaffer works and the thickness of the bulbs produced, or how often the gaffer drops and breaks a bulb.
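To make these five factors concrete, here is a minimal Python sketch (with invented numbers and a deliberately crude cost model) of a single step described by its rate, input costs, buffer, and yield:

from dataclasses import dataclass

@dataclass
class ProcessStep:
    name: str                     # 1. the transformation method
    rate_per_hour: float          # 2. production rate
    input_cost_per_unit: float    # 3a. direct material inputs
    labor_cost_per_hour: float    # 3b. labor inputs
    overhead_per_hour: float      # 3c. allocated indirect costs
    buffer_units: float           # 4. material waiting ahead of the step
    yield_fraction: float         # 5. variation, summarized as yield

    def cost_per_good_unit(self) -> float:
        # Spread hourly labor and overhead across each attempt, then
        # inflate by yield: failed units consume inputs too.
        hourly = self.labor_cost_per_hour + self.overhead_per_hour
        cost_per_attempt = self.input_cost_per_unit + hourly / self.rate_per_hour
        return cost_per_attempt / self.yield_fraction

# Hypothetical numbers for the blowing step:
blowing = ProcessStep("blow bulb in mold", rate_per_hour=150,
                      input_cost_per_unit=0.02, labor_cost_per_hour=0.90,
                      overhead_per_hour=0.30, buffer_units=5000,
                      yield_fraction=0.95)
print(round(blowing.cost_per_good_unit(), 4))   # ~0.0295 per good bulb

Note that the buffer doesn’t enter the per-unit cost at all; its cost is the capital tied up in waiting material, which is why reducing work in process shows up below as a separate improvement lever.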

This is, of course, a highly simplified model. For one thing, it omits the complexity of what specifically occurs during each step. For another, it suggests that these factors are steady over time, but in reality they will frequently be in flux. The variation in output may rise when a new worker starts, or at the end of the day when workers are tired, or over a long period of time as workers or managers grow complacent. Alternatively, variation may go down over time as workers gain experience and precision improves.

This model also doesn’t include the many possible ways one step may influence another step, beyond how fast the step runs. The temperature of the glass furnace might influence how easy it is to blow the bulb into the mold, for example. Likewise, variation in one process may be a function of variation in some previous process. Bulbs breaking when the mold is removed, for instance, might be a function of inconsistent mixing of the ingredients or uneven temperature of the molten glass.

Finally, this model doesn’t include any specifics about what is actually being produced. As we’ll see later on, the form of the product and the method of production are intimately connected, and a change in one generally results in a change in the other.

Despite its various simplifications, however, this model gives us a useful way to structure our thinking about production processes and how they can be made more efficient.

Figure 3. Process sketch of a bulb blank process showing inputs, outputs, buffers, production rates, and yields.

Improvements to the process

The goal of any efficiency improvement is to minimize the costs of producing something. If we’re running a bulb blank factory, we want to figure out how to produce those bulb blanks as cheaply as possible, which means using the fewest, lowest-cost inputs we can. The way to do this is to change one or more of these five factors.

First, we can change the transformation method itself to one that requires fewer resources. The very first bulb blanks produced by Corning didn’t use molds but were produced using a much slower free-hand method, which entailed manually rolling out tubes of glass. Changing the bulb-blowing process to the mold method greatly increased output and decreased the labor required for each bulb, such that workers went from producing 165 bulbs on the first day to 150 an hour.

Second, we can try to improve the rate of production and take advantage of economies of scale—the fact that per-unit costs tend to fall as production volume rises. Glass furnaces in the bulb blank factory ran continuously, because starting a furnace cold took a great deal of time (24 hours or more) and was very likely to damage the crucibles. The furnaces were, therefore, burning coal regardless of whether glass was being blown and bulbs produced. Similarly, the rent needed to be paid whether the factory was producing bulb blanks or not. For these reasons, a factory that manufactured bulbs continuously over 24 hours would have lower unit costs than a factory that only operated for eight hours a day (and, in fact, some glass manufacturers did run continuously for this reason).
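The logic of scale economies fits in one line: if F is the fixed cost per day (keeping the furnace hot, rent) and c is the variable cost per bulb, then the cost per bulb falls toward c as daily output Q grows,

\[ \text{unit cost}(Q) = c + \frac{F}{Q}. \]

A factory running around the clock spreads the same furnace and rent costs over roughly three times as many bulbs as one running eight hours a day.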

Third, we can try to reduce variation in the process. The quality of the glass was dependent on the temperature of the furnace: Variations in temperature would result in glass that would break after a short period of use. Reducing temperature variation would, therefore, result in more glass within acceptable bounds, producing a higher yield.

Fourth, we can try to decrease the costs of our inputs. Replacing the hand-blowing process with the bulb mold process not only reduced the amount of labor required but also enabled the factory to use less expensive labor, since the molding process required less skill.

Fifth, we can try to reduce our work in process by decreasing the size of our buffers. Work in process is material that has been paid for but hasn’t yet been sold—it’s an investment that has yet to yield a return. If glass furnaces were filled with new glass once a week, on average there would be half a week’s worth of glass simply sitting idle in the factory. If the crucibles were instead half the size and filled twice a week, on average there would only be a quarter of a week’s supply of glass in the crucibles, reducing work in process by 50 percent.

We also have one more option available to us: We can try to delete an entire step in the process. This will, obviously, remove all of its associated costs. If, for instance, it becomes possible to buy premixed glass powder, we no longer need to perform the mixing step ourselves—our input materials can instead go directly to the glass crucibles.

These are the options available to improve the efficiency of a process. So, what does this suggest about what an extremely efficient process looks like?

It’s a process with no buffers. Material moves smoothly from one step to the next without any waiting or delay, and material tied up in the process is minimized.

It’s a process with no variability. The process works every time and always produces exactly what it’s supposed to, at exactly the time when it’s needed. More generally, the output of the process is as close to perfectly predictable as possible.

It’s a process with no unnecessary or wasteful steps. Every step is contributing value, and no steps can be eliminated.

It’s a process with inputs that are as cheap as possible and no wasted outputs. Either all inputs are successfully transformed, or the ancillary outputs are repurposed elsewhere.

It’s a process that acts at as large a scale as the technology and market will allow. Fixed costs are spread over as much output as possible, and the process takes maximum advantage of scale effects.

It’s a process that uses transformation methods that require as few inputs as possible, at the limits of what production technology will allow.

This sort of production process is sometimes called a continuous flow process—it continuously transforms inputs into outputs without any delays, downtime, waiting, unnecessary steps, or unneeded inputs. A steady stream of inputs goes in, and a steady stream of completed products swiftly and smoothly comes out.

One way of thinking about a continuous flow process is that it’s like driving on the highway. In the city, there’s the constant stop-and-start of traffic lights and waiting behind other cars. But on the highway, the flow of cars is consistent and uninterrupted, as one car smoothly follows another.

In practice, it’s often not possible to achieve a true continuous flow process, just as it’s not always possible for traffic to flow perfectly smoothly on the highway. The technology may not allow it, or the size of the market may not justify the cost of the equipment required. There are any number of reasons why continuous flow may not be achievable. But when it is possible, it results in the production of enormous volumes of incredibly inexpensive goods.

To see what a continuous flow process looks like in practice, let’s look at how the light bulb manufacturing process evolved in the century after Edison.

The evolution of lightbulb production

In 1891, just over a decade after Edison’s invention, the US was producing 7.5 million incandescent bulbs per year. By the turn of the 20th century, that figure had climbed to 25 million. But production was still largely manual, and the cost of light bulbs, though falling, was still high. In 1907, a 60-watt light bulb cost $1.75, or about $54 in 2022 dollars.

In 1912, Corning introduced the first semiautomatic machine for blowing light bulbs, called the Empire E machine. Though it still required workers to manually gather the molten glass, the machine could produce bulbs at a rate of 400 per hour, over twice as fast as the manual mold method. This was followed by General Electric’s fully automatic Westlake machine, as well as Corning’s Empire F. In 1921, a Westlake machine could manufacture over 1000 bulbs an hour. By the 1930s, improved Westlake machines could produce 5000 bulb blanks an hour.

The Westlake machine, though speedy, was largely a faster, mechanized version of the existing method for hand-blowing bulbs. It consisted of a large rotating drum with a series of iron tube arms mounted to it; as the machine rotated, the arms would lower into a glass furnace, gather a glob of molten glass, and swing it into a mold, after which air would be blown into it to form the bulb. Then, in 1926, a new type of machine for manufacturing bulb blanks was introduced: the Corning ribbon machine. Unlike previous machines, which largely duplicated the manual bulb-blowing process, the ribbon machine used a different mechanism for forming the bulbs. Instead of gathering a blob of glass on an iron rod, molten glass was poured onto a conveyor belt, which produced a continuous ribbon of molten glass (giving the machine its name). The glass would sag through holes in the belt, forming a bowl shape. As the conveyor moved, a mold attached to a second conveyor below would snap shut around the bowl-shaped glass and air would be blown in from above, forming the shape of the bulb. The formed bulbs were then released and carried away by conveyor belt.

What had previously been a process with many small stops and starts became an uninterrupted, continuous flow. Glass poured onto the conveyor, sagged through the holes, and was repeatedly transformed into finished bulbs, one after the other, without any delays or waiting. Every step was perfectly synchronized.

The ribbon machine was extraordinarily complex and required constant intervention to keep it operational. But it could produce bulb blanks in truly staggering volumes. The first ribbon machine produced 16,000 bulbs an hour—over three times faster than the Westlake machines. By 1930, an improved ribbon machine could produce 40,000 bulbs an hour.

The ribbon machine represented the final evolution of incandescent bulb blank production. It produced bulbs in such enormous quantities that by the early 1980s, fewer than 15 ribbon machines were needed for the entire world’s supply of light bulbs. By then, machine improvements had increased the production volume to nearly 120,000 bulbs an hour, or 33 bulbs every second.

Similar improvements took place in the rest of the light bulb manufacturing process, though none were quite so dramatic as the ribbon machine. In the late 19th and early 20th centuries, machines were developed to attach the inner tube to the outside bulb, mount the filament to the tube, make and then insert the lead-in wires, and seal the bulb. Enhanced vacuum pumps were developed to evacuate bulbs much more quickly—Edison’s original pumps took five hours to produce a vacuum in a bulb—and they did so automatically.

By the 1920s, most steps in the bulb manufacturing process had been automated, but they were largely performed by separate machines. Large volumes of in-process bulbs would accumulate between workstations, creating severe storage problems. Starting in 1921, these steps were rearranged into groups, or cells, so one machine would smoothly feed another at synchronized rates. Work in process was greatly reduced, storage requirements fell, and output per worker nearly doubled. By 1930, the major manufacturing innovations were complete, and by 1942, finished bulbs could be produced by a work cell at a rate of 1000 per hour.

As a result of these improvements, the cost of a light bulb plummeted. By 1942, the cost of a 60-watt bulb had fallen to 10 cents. Over this same period, bulb efficiency, the amount of light emitted per watt, also improved, nearly doubling from 1907 to 1942. Combined with cheaper electricity, the cost per lumen dropped 98.5 percent between 1882 and 1942.
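
To see how these factors compound, here is an illustrative calculation in Python. The figures below are hypothetical round numbers chosen for the sketch, not the historical data: the cost of light combines the bulb’s purchase price spread over its life, the electricity it consumes, and the light it emits.

# Illustrative sketch of how bulb price, efficacy, and electricity price
# combine into a cost of light. All figures below are hypothetical.

def dollars_per_million_lumen_hours(bulb_price, bulb_life_hrs, watts,
                                    lumens, price_per_kwh):
    hourly_cost = bulb_price / bulb_life_hrs + (watts / 1000) * price_per_kwh
    return hourly_cost / lumens * 1_000_000

early = dollars_per_million_lumen_hours(1.75, 600, 60, 480, 0.10)
late = dollars_per_million_lumen_hours(0.10, 1000, 60, 800, 0.04)
print(f"${early:.2f} -> ${late:.2f} per million lumen-hours, "
      f"a {1 - late / early:.0%} drop")

Even with these made-up inputs, a cheaper bulb, doubled efficacy, and cheaper electricity multiply together into a far larger drop in the cost of light than any single factor produces alone.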

Other parts of the light bulb-making process benefited from the same types of improvements: new production technology that required fewer inputs, increased economies of scale, reduced variability, minimization of buffers, and the elimination of unnecessary steps. As with bulb blanks, these processes gradually evolved toward a continuous, uninterrupted transformation of material.

Of course, such gains are not restricted to light bulbs. Any production process that can be described as a series of sequential steps can be made more efficient in the exact same ways. As we’ll see throughout this book, these types of improvements have resulted in increased efficiency in everything from steelmaking to cargo shipping. Over the next several chapters, we’ll take a closer look at each of the five factors of a production process and how they can contribute to increased production efficiency.

The Origins of Efficiency is available now from Amazon, Barnes and Noble, and Bookshop.


November 8, 2025

Reading List 11/8/2025

Grumman X-29.

Welcome to the reading list, a weekly roundup of news and links related to buildings, infrastructure, and industrial technology. This week we look at gathering robot training data, “love letters” sent to home sellers, the Napier Deltic diesel engine, jumps in electricity demand from electric teakettles, and more. Roughly 2/3rds of the reading list is paywalled, so for full access become a paid subscriber.

Robot training

We’ve previously noted that one major bottleneck in making robots more capable is a lack of training data. LLMs have the benefit of the entire internet as a source of training data, but there’s no such pre-existing “movement” dataset that we can use to train robot AI models on, and finding/creating a source of robot training data has become an important aspect of robot progress. The LA Times has a good piece about some of the companies working to collect this training data:


In an industrial town in southern India, Naveen Kumar, 28, stands at his desk and starts his job for the day: folding hand towels hundreds of times, as precisely as possible.


He doesn’t work at a hotel; he works for a startup that creates physical data used to train AI.


He mounts a GoPro camera to his forehead and follows a regimented list of hand movements to capture exact point-of-view footage of how a human folds.


That day, he had to pick up each towel from a basket on the right side of his desk, using only his right hand, shake the towel straight using both hands, then fold it neatly three times. Then he had to put each folded towel in the left corner of the desk.


If it takes more than a minute or he misses any steps, he has to start over.


Privatized air traffic control

In response to the ongoing government shutdown, an air traffic controller shortage is forcing a curtailment of airline flights across the US. Air traffic controllers are having to work without pay, and there are about 3500 fewer controllers working than needed. On Thursday, 10% of flights across the country, around 1800 flights, were ordered to be cancelled.

Air traffic control seems like an obvious service for the government to provide, like police or firefighters, but apparently private or semi-private air traffic control systems aren’t all that uncommon. Marginal Revolution has an interesting, short post about Canada’s privatized air traffic control system:


It’s absurd that a mission‑critical service is financed by annual appropriations subject to political failure. We need to remove the politics.


Canada fixed this in 1996 by spinning off air navigation services to NAV CANADA, a private, non‑profit utility funded by user fees, not taxes. Safety regulation stayed with the government; operations moved to a professionally governed, bond‑financed utility with multi‑year budgets. NAV Canada has been instrumental in moving Canada to more accurate and safer satellite-based navigation, rather than relying on ground-based radar as in the US.


“NAV CANADA – in conjunction with the United Kingdom’s NATS – was the first in the world to deploy space-based ADS-B, by implementing it in 2019 over the North Atlantic, the world’s busiest oceanic airspace.


NAV CANADA was also the first air navigation service provider worldwide to implement space-based ADS-B in its domestic airspace.”


Meanwhile, America’s NextGen has delivered a fraction of promised benefits, years late and over budget. As the Office of Inspector General reports:


“Lengthy delays and cost growth have been a recurring feature of FAA’s modernization efforts through the course of NextGen’s over 20-year lifespan. FAA faced significant challenges throughout NextGen’s development and implementation phases that resulted in delaying or reducing benefits and delivering fewer capabilities than expected. While NextGen programs and capabilities have delivered some benefits in the form of more efficient air traffic management and reduced flight delays and airline operating costs, as of December 2024, FAA had achieved only about 16 percent of NextGen’s total expected benefits.”


NPR also ran an article in July this year about the debate around switching to a privatized system in the US, and the pros and cons of a system like Canada’s:


Canada went from paying for air traffic control largely through tax revenue to charging customers a fee based on the weight and distance of a flight.


According to Correia, privatizing air traffic control was the next move for an aviation sector that already had privately-held airplane manufacturers and commercial airlines. “So basically the step that was taken by Canada was to say, well, air traffic control is providing a service to an industry that is already privatized or mostly privatized in many regions of the world,” he said.


Other air traffic control systems that exist outside or partially outside the government include NATS in the United Kingdom, Airservices Australia, Airways New Zealand, DFS in Germany and Skyguide in Switzerland.


A 2017 report by the Congressional Research Service said other countries’ models don’t appear to show “conclusive evidence that any of these models is either superior or inferior to others or to existing government-run air traffic services, including FAA, with respect to productivity, cost-effectiveness, service quality, and safety and security.”


The origins of Airbus

I’ve previously written about the difficulties of competing in the commercial aircraft industry: the costs of developing commercial aircraft are so high (often a significant fraction of the value of the company) and the number of annual aircraft sold so few, that a few bad bets — a program that goes over budget, or sells much less than anticipated — can be ruinous.

Given these difficulties, and the fact that many companies (Lockheed, Douglas, Convair) have been forced from the field, it’s a little surprising who ended up being competitive, and who didn’t. Japan, despite overwhelming many US industries with inexpensive, high-quality manufacturing in the second half of the 20th century, never fielded a commercial airliner (though not for lack of trying). South Korea didn’t either. Instead, the international competitors came from Brazil (Embraer) and a consortium of European countries (Airbus). Works in Progress has a good piece on why Airbus was so successful, when so many other similar European efforts failed:


Europe is a graveyard of failed national champions. They span from the glamorous Concorde to obscure ventures like pan-European computer consortium Unidata or notorious Franco-German search engine Quaero.


Airbus is the rare success story. European governments pooled resources and subsidized their champion aggressively to face down a titan of American capitalism in a strategically vital sector. Why did Airbus succeed when so many similar initiatives crashed and burned?


Airbus prevailed because it was the least European version of a European industrial strategy project ever. It put its customer first, was uninterested in being seen as European, had leadership willing to risk political blowback in the pursuit of a good product, and operated in a unique industry.


…Roger Béteille, who led the A300 program, probably bears more responsibility for Airbus’s early success than anyone else. Béteille wasn’t interested in building an inferior European Boeing copy. Instead, he invested significant time in getting to know his potential customers and what they needed. This led to Airbus quickly tossing the original design for a 300-seat A300, in favor of a 225-250 seater, when it became clear that Air France and Lufthansa wanted a smaller product.


The revised A300B would prove much cheaper to develop, in part because it allowed the consortium to dispense with the expensive Rolls Royce engine in favour of a cheaper American alternative. In response, the UK exited the project, only to later return with a lower ownership stake.


This willingness to risk political blowback and avoid petty chauvinism in equipment choice was rare in industrial strategy.


Béteille went one step further. He designated English the official language of the project, instead of the usual mixture of languages that characterised European projects, and forbade the use of metric measurements to make it easier to sell into the US market.


Air travel computational complexity

Also on the subject of commercial air travel: air travel reservation systems, like early spell checkers, are far more complicated than they might first appear. One of the first major air travel reservation software systems, SABRE, was built by IBM using the technology from the recently-completed SAGE defense system (one of the most expensive megaprojects ever built). And these 2003 lecture slides from an MIT course on artificial intelligence talk about some of the computational difficulties involved in booking flights:


At 30,000,000 flights per year, standard algorithms like Dijkstra’s are perfectly capable of finding the shortest path. However, as with any well-connected graph, the number of possible paths grows exponentially with the duration or length one considers. Just for San Francisco to Boston, arriving the same day, there are close to 30,000 flight combinations, more flying from east to west (because of the longer day) or if one considers neighboring airports. Most of these paths are of length 2 or 3 (the ten or so 6-hour non-stops don’t visually register on the chart to the right). For a traveler willing to arrive the next day the number of possibilities more than squares, to more than 1 billion one-way paths. And that’s for two airports that are relatively close. Considering international airport pairs where the shortest route may be 5 or 6 flights there may be more than 10^15 options within a small factor of the optimal.


One important consequence of these numbers is that there is no way to enumerate all the plausible one-way flight combinations for many queries, and the (approximately squared) number of round-trip flight combinations makes it impossible to explicitly consider, or present, all options that a traveler might be interested in for almost all queries.
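
A toy model of this blow-up in Python (hypothetical numbers; it ignores connection feasibility, which real systems must check): if each leg of an itinerary has roughly f candidate flights, the number of possible one-way paths with up to k legs grows as roughly f^k.

# Toy model of itinerary explosion: assume every leg has `flights_per_leg`
# candidate flights and any connection works. The count of one-way paths
# with up to `max_legs` legs is then a geometric sum: f + f^2 + ... + f^k.

def count_itineraries(flights_per_leg: int, max_legs: int) -> int:
    return sum(flights_per_leg ** k for k in range(1, max_legs + 1))

for legs in (1, 2, 3, 6):
    print(legs, count_itineraries(30, legs))
# 3 legs -> 27,930 (close to the ~30,000 SF-Boston figure above);
# 6 legs -> ~7.5e8, within sight of the slides' billion-path estimate.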



November 6, 2025

Strap Rail

The early history of the United States runs alongside the first years of the railroad. A small prototype of a steam-powered locomotive was first built by William Murdoch in 1784, just a year after the Treaty of Paris ended the Revolutionary War. The first working steam locomotive, Richard Trevithick’s Coalbrookdale Locomotive, was built in 1802, and George Stephenson’s Locomotion No. 1 hauled the first train on a public steam railway in 1825.

Early railroad development largely took place in the UK (Murdoch, Trevithick, and Stephenson all built their locomotives there), and early US locomotives were British imports. However, British locomotives were quickly found to have difficulties running on American railroads. British railroads were “models of a civil engineering enterprise”, having:

…carefully graded roadbeds, substantial tracks, and grand viaducts and tunnels to overcome natural obstacles. Easy grades and generous curves were the rule. Since capital was plentiful, distances short, and traffic density high, the British could afford to build splendid railways.

All this engineering came at a cost: early British railroads cost $179,000 per mile to build (though part of this was the cost of land). But because of Britain’s high population density, traffic was high, routes could be short, and so the costs could be recovered.

Conditions in the US were far different. Distances were large, populations were lower, and financing for large engineering projects was in short supply. As a result, US railroads evolved differently than British ones. Rather than being straight, American railroads tended to have winding routes that followed the curve of the land, and avoided tunnels, grading, or expensive civil engineering works. Instead of expensive stone bridges, US railroads used wooden trestles. Locomotives in the US needed to cope with steeper grades and much less robust track than in Britain, and thus needed to be designed differently.

One interesting example of the different railroad conditions in the US and Britain is a railroad technology that was briefly popular for early US railroads: the strap rail track. British railroads were built with solid iron rails which, while effective, were expensive. Strap rail, by contrast, was built by attaching a thin plate of iron to the top of a piece of timber. This greatly reduced the amount of iron required to build railroad track — while British track required 91 tons of iron per mile, strap rail required just 25 tons.

Strap rail, via Material Culture of the Wooden Age.

This style of construction was far cheaper than British iron rails, just $20,000-30,000 per mile, 1/6th to 1/9th the cost of British rail. Building strap rail substituted comparatively precious iron (which was in short supply in the early US) for wood, which was widely available. By 1840, it’s estimated that 2/3rds of the 3,000 miles of railway in the US was strap rail track.

But this thrift wasn’t without consequence. While cheaper to build than solid iron tracks, strap rail lines decayed quickly, and had incredibly high maintenance costs, over twice as much per mile as iron rail (Material Culture of the Wooden Age, p. 192). Rather than being built atop crushed rock (which would allow for proper drainage), strap rail lines were typically placed directly onto the ground. Mold, insects, and moisture quickly went to work on the wooden rails, and after a few seasons they became “a hopeless ruin”. And strap rail lines were dangerous: the repeated impact of the locomotive wheels could cause the iron straps to curl up at the ends of the timbers, in some cases derailing trains.

Because of these various deficiencies, strap rail became less and less popular for commercial railroads in the US. In 1847, New York banned the use of strap rail for public railroads, and gave existing railways three years to convert to iron rails. After 1850, new commercial installation of strap rail was rare, and by 1860 most existing mileage had been replaced with solid iron rails.

However, strap rail continued to see some use in other areas. Horse-drawn streetcars, which began to appear in US cities in the 1850s, often used strap rail tracks. The comparative lightness of the streetcars made the downsides of strap rail less acute, and strap rail was a popular choice for streetcar lines until electric streetcars began to replace horse-drawn cars in the 1880s. Strap rail also found occasional use on private, industrial railways, and several such lines were built in the 1860s and 1870s. Here too, however, strap rail eventually fell out of fashion.

An even cheaper type of wooden railway was occasionally built for logging railroads to transport felled timbers: the pole road. A pole road was nothing more than two rows of logs, laid parallel on the ground, and tapered so that one log could fit into the next. “Locomotives”, little more than modified agricultural steam tractors, would ride on top of these poles on flanged wheels that wrapped around the poles. Like strap rail, pole roads were inexpensive to build but decayed incredibly quickly, and by the end of the 19th century had become increasingly rare.

Pole road in Alabama in 1902, via Material Culture of the Wooden Age.

The history of technological development emerges from a complex interplay between people working to solve a particular problem (like moving travelers and goods from place to place) and the terrain in which that problem exists. The particular constraints that existed in the early 19th century US — little capital, limited access to iron, low population density — shaped how railway technology developed here, producing technological arcs like the rise and fall of the strap railroad.


November 1, 2025

Reading List 11/01/2025

Space shuttle Enterprise being towed across Antelope Valley in 1977. Via Wikipedia.

Welcome to the reading list, a weekly roundup of news and links related to buildings, infrastructure, and industrial technology. This week we look at a new semiconductor lithography startup, how to make batteries more like bombs, AI and real estate listings, a monastery being built in Wyoming, and more. Roughly 2/3rds of the reading list is paywalled, so for full access become a paid subscriber.

A few housekeeping items this week:

The Amazon listing for Origins of Efficiency seems to finally be fixed; as of this writing it’s listing a 4-5 day shipping time.

I was on Odd Lots talking about construction, housing, and the book.

ADL Ventures, an energy technology and infrastructure incubator, is hiring for a role doing economic modeling for industrialized construction.

British shipbuilding reading list

The decline of British shipbuilding has been extensively studied, and there doesn’t seem to be all that much disagreement as to the causes; any book you pick up will probably tell a similar story. Here are the ones I found most useful:

British Shipbuilding and the State Since 1918, by Lewis Johnman and Hugh Murphy, 2002 — This book is a very thorough look at the involvement of the UK government in the shipbuilding industry since the end of WWI, but it’s also an extremely good broader survey of the British shipbuilding industry and the various challenges it faced over the course of the 20th century. If you pick one book on the recent history of British shipbuilding, make it this one.

The Economics of Shipbuilding in the United Kingdom by JR Parkinson, 1960 — This has some good information on British shipbuilding from the late 19th through the middle of the 20th century, and is a good “snapshot” of what the industry looked like right as things really started to turn.

Economic Decline in Britain: the Shipbuilding Industry by Edward Lorenz, 1991 — This is a good articulation of the basic “rational actor” thesis of British shipbuilding decline (which is implicit in a lot of these other discussions).

Sunrise in the East, Sunset in the West, by Dan McWiggings, 2013 — A PhD thesis on the rise of Korean shipbuilding and the decline of British shipbuilding, written by someone who (I believe) formerly worked in shipbuilding. It had some useful information and perspective.

Substrate raises $100 million

The arc of technological progress can be hard to predict. In the 1980s and 1990s, X-ray lithography, a semiconductor fabrication technology which uses X-rays to etch microscopic patterns onto silicon wafers, was considered the most promising next step in lithography technology. IBM alone had invested over a billion dollars in the technology. Instead, partly due to chronic difficulties with X-ray lithography, the industry coalesced around extreme ultraviolet (EUV) lithography.

Now a new US semiconductor startup, Substrate, is hoping to use X-ray lithography to achieve similar performance to EUV at a fraction of the price. Via the Wall Street Journal:


Substrate’s ambitions don’t end with breaking ASML’s lithography monopoly. Rather than supply the machines to chip manufacturers, known as foundries or fabs, the company says it will establish a network of its own fabs equipped with its lithography machines in time to begin producing chips at scale by 2028.


The company has hired more than 50 employees from IBM, TSMC, Google, Applied Materials and national laboratories.


Proud, who was born and raised in the U.K., was the first recipient of the Thiel Fellowship, awarded to aspiring entrepreneurs who choose to skip higher education and start a company. He renounced his British passport and became an American citizen in 2019, and became fixated on thwarting China’s advancements in semiconductors.


Advanced fabs today start at $20 billion, and sometimes cost twice that. Proud said Substrate, by using its own, cheaper tools, will be able to build fabs for a price in the “single digit billions.”


The company’s website shares a few details of their process:


Our results shown here can be compared with the current industry’s “high numerical aperture” (High NA) EUV lithography and are equivalent in resolution to the 2 nm semiconductor node, with capabilities to push well beyond.


To accomplish this, we had to invent a new technology capable of producing the critical patterns required for today’s advanced silicon, which was lower cost, less complex, more capable, and faster to build.


Random vias with 30 nm center-to-center pitch with superb pattern quality and critical dimension uniformity.


The team at Substrate has designed a new type of vertically integrated foundry that harnesses particle accelerators to produce the world’s brightest beams, enabling a new method of advanced X-ray lithography. Our accelerators create and power beams that generate light billions of times brighter than the sun, directly into our lithography tools, each using a completely new optical and high-speed mechanical system to produce the smallest of features needed for advanced semiconductor chips.


Semianalysis has more, noting that if true these claims are “extraordinary”. And here’s a skeptical take.

Making batteries more like bombs

Historically, a major limitation of batteries for things like electric vehicles has been energy density. Batteries store much less energy per unit mass or volume than something like gasoline, a fact that has long hampered the performance of electric cars. Battery energy density has improved over time, but it’s still far below that of gasoline.
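
For rough scale, here’s a back-of-envelope comparison in Python using ballpark figures (approximate, commonly cited values, not precise specs for any particular cell):

# Ballpark energy-density comparison (approximate values; a real comparison
# must also account for engine vs. motor efficiency).

GASOLINE_WH_PER_KG = 12_000  # ~46 MJ/kg of chemical energy
LI_ION_WH_PER_KG = 250       # roughly a modern EV cell; pack-level is lower

print(GASOLINE_WH_PER_KG / LI_ION_WH_PER_KG)  # ~48x by mass
# Engines waste most of that energy as heat (~25-35% efficient vs ~90% for
# an electric motor), but gasoline still keeps roughly an order-of-magnitude
# edge by mass.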

Hard tech incubator Orca Sciences has an interesting (but highly technical, I could only roughly follow it) post about what it would take to make extremely energy dense batteries:


Batteries are a lot like explosives. Like explosives, batteries contain both reducing chemicals and oxidizing chemicals bundled tightly together, ready to react with one another. Like explosives, we engineer our batteries to pack these chemicals together as tightly as possible to speed reaction rates. Like explosives, we like our batteries as powerful as possible…


But the way we put batteries together is a bit like how we made gunpowder in the 1700s. Even though all of the salient reactions and transport phenomena occur on the angstrom-scale, we build batteries in mechanically separated layers—100μm anode, 30μm separator, 100μm cathode etc. Tiny as that seems, that’s as enormous in chemical terms as grains of charcoal and saltpeter in a musket. Even 20 microns leaves tens of thousands of separator molecules for each ion to crawl through before it can cough up an electron into your favorite circuit. So slow!


So here’s the obvious question: what if we could make batteries more like TNT, and assemble them at the molecular scale, with anode and cathode only angstroms apart?
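
One back-of-envelope way to see why the transport distance dominates (a rough sketch with assumed, order-of-magnitude values, not figures from the Orca post): one-dimensional diffusion time scales with the square of the distance, t ≈ L²/2D.

# Rough diffusion-time estimate, t = L^2 / (2D). The diffusivity below is
# an assumed order-of-magnitude value for Li+ in a liquid electrolyte.

D = 2.5e-10  # m^2/s (assumed)

def diffusion_time_s(length_m: float) -> float:
    return length_m ** 2 / (2 * D)

print(diffusion_time_s(20e-6))  # 20 um separator: ~0.8 s
print(diffusion_time_s(1e-9))   # ~1 nm, molecular scale: ~2e-9 s
# Shrinking the transport length ~20,000x cuts the timescale ~4e8x.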


$80 billion nuclear deal

The US has only built two operating nuclear power reactors (Vogtle units 3 and 4 in Georgia) in the past 25 years (a third, Watts Bar Unit 2, was completed in 2016 but started construction in 1973). Nuclear enthusiasts are perennially hoping for a nuclear renaissance in the US, and while there have been some promising signs (such as restarting some shuttered nuclear plants), we haven’t seen any moves towards large-scale reactor construction.

But that might be changing. This week a deal was announced between the US government and Westinghouse to construct eight of their advanced AP1000 nuclear reactors. Via the Financial Times ($):


The US government and the owners of Westinghouse have struck an $80bn deal to build a fleet of nuclear reactors, using funding from a trade agreement with Japan.


Brookfield Asset Management and Cameco, Westinghouse’s owners, said they had formed a new partnership to provide the reactor technology for the plants, which would help realise Donald Trump’s goal to quadruple US nuclear capacity by 2050.


The investment announced on Tuesday would fund about eight Westinghouse AP1000 power plants, according to Brookfield, or a mix of larger facilities and small modular reactors.


The involvement of Japan is a further sign of Trump’s efforts to use ongoing trade negotiations to secure critical supplies of power and mining assets.


Washington’s spending commitment is being funded by a $550bn deal Trump has agreed with Tokyo and endorsed at his first meeting with new Japanese Prime Minister Sanae Takaichi on Tuesday.


Another part of this $550 billion Japanese investment is apparently going to shipbuilding. Via the Maritime Executive:

The Yomiuri Shimbun reports the new agreement calls for the formation of a Japan-U.S. shipbuilding working group that will focus on investments that can be made to make the shipbuilders more efficient and competitive. They call for considering standardizing ship design and parts, and possibly having Japan design parts that could be produced in the United States. By standardizing designs, they propose that the countries could repair each other’s ships.


October 28, 2025

How the UK Lost Its Shipbuilding Industry

HMHS Britannic under construction at Harland and Wolff, circa 1914. Via Wikipedia.

From roughly the end of the US Civil War until the late 1950s, the United Kingdom was one of the biggest shipbuilders in the world. By the 1890s, UK shipbuilders were delivering 80% of worldwide shipping tonnage, and though the country only briefly maintained this market-dominating level of output — on the eve of World War I, its share of the market had fallen to 60% — it nonetheless remained one of the world’s largest shipbuilders for the next several decades.

Following the end of WWII, UK shipbuilding appeared ascendant. The shipbuilding industries of most other countries had been devastated by the war (or were, like Japan, prevented from building new ships), and in the immediate years after the war the UK built more ship tonnage than the rest of the world combined.

But this success was short-lived. The UK ultimately proved unable to respond to competitors who entered the market with new, large shipyards which employed novel methods of shipbuilding developed by the US during WWII. The UK fell from producing 57% of world tonnage in 1947 to just 17% a decade later. By the 1970s its output was below 5% of the world total, and by the 1990s it was less than 1%. In 2023, the UK produced no commercial ships at all.


Ultimately, UK shipbuilding was undone by the very thing that had made it successful: it developed a production system that heavily leveraged skilled labor, and minimized the need for expensive infrastructure or management overheads. For a time, this system had allowed UK shipbuilders to produce ships more cheaply and efficiently than almost anywhere else. But as the nature of the shipping market, and of ships themselves, changed, the UK proved unable to change its industry in response, and it steadily lost ground to international competitors.

The rise of UK shipbuilding

For much of early modern history, the Netherlands boasted the largest and most successful shipbuilding industry. Between 1500 and 1670, Dutch shipping had grown by a factor of 10, and by the end of the 17th century the Dutch merchant fleet, made up of mostly Dutch-made ships, was larger than the commercial fleets of England, France, Spain, Portugal, and what is now Germany combined. Dutch shipbuilding was “technologically the most advanced in Europe,” and Dutch shipbuilders could build ships 40-50% cheaper than English ones.

Over the course of the 18th century, however, the Dutch advantage was gradually eroded by “a failure to keep pace with advances in European sailing ship design and an inherent conservatism within the industry.” But while it briefly looked like the UK would come to dominate shipbuilding in the early 19th century, the mantle instead passed to the United States. By the middle of the 19th century, thanks in part to the easy availability of ship-quality lumber, the US could build wooden ships for 20 to 25% less than in the UK, and “the very existence of the British industry was under threat”.

But as wooden sailing ships gave way to iron and steel steamships over the course of the 19th century, the UK reclaimed its advantage. By the 1850s, the UK was building iron ships more cheaply than wood ships, and while it took decades for iron, steel and steam to displace wood and sail — sail remained a better option than steam for very long voyages until the 1880s — Britain’s access to cheap coal, cheap iron (and later cheap steel), and cheap skilled labor (compared to the US) allowed it to dominate the transformed shipbuilding industry. By 1900, UK shipbuilding productivity was substantially higher than in the US, and even further ahead of other countries. The UK had become the “shipyard of the world” on the back of its inexpensive production.

Structure of UK shipbuilding

The structure of the UK’s shipbuilding industry reflected the conditions it operated under. Building iron and steel ships obviously required machinery and infrastructure — machine tools, cranes, slipways (sloped areas of ground the ship would slide down upon completion) — but wherever possible, British yards still eschewed expensive machinery or infrastructure in favor of skilled labor. British yards in the late 19th and early 20th century have been described as “under-equipped in relation to those of her competitors”: for instance, while the most advanced foreign yards had large, mechanically operated cranes that could be moved from berth to berth, most British yards “retained their fixed cranes, manually operated derricks, and push-carts on rails”. Because of the notoriously cyclical shipbuilding industry, which often had “seven fat years followed by seven lean years”, using labor-intensive production methods let shipbuilders easily scale up and down their operations depending on demand: workers could be hired when they were needed, and quickly fired when they weren’t. A capital intensive production operation, by contrast, couldn’t easily reduce its overheads during a downturn: one extremely large, modern, and expensive British shipyard at Dalmuir was forced to close in 1929 due to difficulty servicing its overheads when demand for new ships fell.

British shipbuilders’ production operations were in large part oriented to meet the needs of British shipowners, who made up the majority of their customers. Shipyards (and their owners) developed close relationships with shipowners, who would purchase from the same yards repeatedly; as a result, shipyards did little in the way of marketing their services. Ships in the late 19th and early 20th centuries were “expensive, custom-made commodities built in close consultation with the owner”. Frequent changes during the building process by the owner were common, and standardization was limited to non-existent. With limited ability to take advantage of economies of scale, shipyards were small and numerous, and while yards tended to specialize in certain types of ships, labor-intensive production methods made it comparatively easy for yards to adapt to producing different types of ships depending on the market.

In addition to minimizing infrastructure overheads, British shipyards had low management overheads; much of the work was performed by skilled labor organized into “squads,” groups of tradesmen who could organize work without needing much in the way of specific instructions. As a result, British shipyards employed comparatively few supervisors, and did little in the way of production planning.

Because shipyard workers had little in the way of job security, the British shipbuilding industry became strongly unionized. Iron shipbuilding was done by a group of 15 different unions — riveters, boilermakers, plumbers, and so on — and there were very strict rules about which unions were allowed to do which tasks. This arrangement had its advantages — unions oversaw much of the training and apprenticeship of new workers, and allowed the work to be done with comparatively little management overhead. But it also had drawbacks: the unions’ (rational) lack of trust in the shipbuilders, who they correctly viewed as considering their workers disposable, resulted in fierce opposition to changes in the nature of ship production, and made introducing method improvements fraught.

Cracks begin to appear

The first cracks in British shipbuilding hegemony began to appear after World War I. Immediately following the war, the world saw a huge increase in demand for ships: in 1919, worldwide ship production was 7.1 million gross tons, more than double the pre-war peak of 3.3 million tons in 1913. Initially much of this output was American (thanks to the continuation of the wartime shipbuilding program past the end of the war), but as this program ended the UK was once again dominant: by 1924 the UK was producing 60% of the world’s ship tonnage.

But the competitive landscape of the shipbuilding industry was changing. Worldwide shipyard capacity after the war was roughly double what it had been before it. Other countries were eager to give support to their local shipbuilding industries, and foreign shipyards were catching up to British levels of productivity.

Ships and the methods for producing them were also changing. New shipyard layouts and new production techniques such as welding were being experimented with, and new types of ships such as diesel-powered ships and tankers — kinds of ships British shipowners and shipbuilders seemed less interested in building — were increasingly in demand. When the booming shipbuilding market entered a downturn in the 1920s — worldwide ship output dropped from over 7 million gross tons in 1919 to 2.2 million in 1924 — the competitive pressure on British shipbuilders began to mount. In 1925 the British shipbuilding community was shocked when Furness Withy, a British shipping company, ordered several ships from Germany, citing lower costs and faster delivery times. The news was surprising enough that a British shipbuilding trade association called an emergency conference on the problem of foreign competition:

The employers reviewed the history of foreign competition and conceded that whilst before the war foreign competition had presented a threat, ‘the margin of difference then was sufficiently limited to prevent a successful challenge to our premier position and certainly such a catastrophe as an important British order going abroad’. The margin of difference, however, now favoured continental builders and there was every prospect of more British orders going abroad and fewer foreign orders coming to the UK.

What’s more, the postwar boom in shipbuilding had created a huge glut in ships and shipbuilding capacity. In the early 1920s British shipbuilding had a third more capacity than it did before the war, only to be faced with demand less than half of what it was in 1914. By 1933, worldwide seaborne trade was down 6% from 1913, but the tonnage of merchant ship capacity was up by 49%, greatly reducing demand for new ships.

British shipbuilding was further hamstrung by the Washington Naval Treaty, which was signed in 1922 and then extended by the London Naval Treaty in 1930. The treaty, signed by the UK, the US, France, Italy and Japan, attempted to prevent a naval arms race by limiting naval vessel construction for each signatory. What had been a robust British program of naval expansion was suddenly halted. Following the treaty, British naval shipbuilding fell by over 90%.

The lack of shipbuilding, increased pressure from foreign competition, and the global decline in new ship orders wreaked havoc on the British shipbuilding industry in the 1920s and 30s, to the benefit of its competitors. The UK’s fraction of worldwide ship tonnage produced declined from 60% in 1924 to 50% by the end of the 1920s, to less than 40% by the mid-1930s. Over the same period, Germany, Sweden, and Japan’s combined share rose from 12% to 36%. Numerous British shipbuilding firms ceased operations, and shipbuilding unemployment rose from 5.5% in 1920 to over 40% in 1929. In the first 9 months of 1930, only one in five British shipbuilders received any orders at all.

To help address the problem of excess shipyard capacity, British shipbuilders banded together to form the National Shipbuilders Security (NSS) company, which was funded to “assist the shipbuilding industry by the purchase of redundant and/or obsolete shipyards and dismantling and disposal of their contents.” By 1938, over 216 ship berths had been demolished. The NSS also made efforts to improve productivity and the competitive position of British shipyards compared to foreign yards, though it’s not clear if this had much impact: by 1938, foreign orderbooks appeared far fuller than British ones, and the industry was palpably worried about the threat of foreign competition. It wasn’t until the Royal Navy began a period of rearmament in the late 1930s in anticipation of war that fortunes began to improve for UK shipbuilders.

Post WWII

Following the end of WWII, fortunes initially looked bright for the British shipbuilding industry. German and European shipbuilding capacity had been devastated by the war, the US was dismantling its enormous wartime shipbuilding machine, and Japan was forced to cease ship production. In the immediate years following the war, the UK was once again producing more ship tonnage than the rest of the world combined.

But the issues of foreign competition that had increasingly threatened the UK’s dominance prior to WW2 had only retreated temporarily. What’s more, during the war, developments had taken place that would threaten the skilled labor-intensive production model that British shipbuilders relied on. To win the Battle of the Atlantic and overcome the destruction of Allied shipping by German U-boats, the US had used welded, prefabricated construction to rapidly build enormous numbers of simple cargo ships: Liberty ships, Victory ships, and T-2 tankers. In the 1950s, those methods of ship construction were brought to Japan, where they continued to be refined.

British shipbuilders could have taken advantage of these methods as well. They had seen firsthand the huge number of Liberty ships American shipyards were producing (the Liberty ship was, after all, originally a British design), and had made use of welded, prefabricated construction themselves to build vessels during wartime. But British shipbuilders perceived adopting these radically different production methods as risky. It would require enormous capital expenditure, and British shipbuilders had only survived the brutal 1930s thanks to their comparatively light overheads and labor-intensive methods. Dramatic changes to production methods would also require changes to the very strict demarcation system of its unionized labor force, which the unions, naturally distrustful of shipyard operators, were sure to resist. And while the US had successfully built thousands of ships very rapidly during the war, it had come at a cost: the US cargo ships, using modern methods, were more expensive to build than similar British ships. Though some British shipbuilders recognized the potential of welding and prefabrication, the new methods weren’t clearly worth reorganizing the entire industry around.

Steel plate fabrication at British shipbuilder Stephen of Linthouse in 1950. Johnman and Murphy note that this scene “could just as well have been 1900.” Via Johnman and Murphy 2002.

And beyond their rational reluctance, British shipbuilders were simply not predisposed towards adopting radical innovations. Many of them (along with many British shipowners) were suspicious of welding, partly due to natural conservatism and partly due to several high-profile failures where welded ships cracked in two. (As late as 1954, British shipbuilders noted that “owners do not want a welded box, they expect plenty of riveting.”) British shipbuilders similarly proved somewhat reluctant to enter the world of tanker and diesel-powered ship construction, which were becoming an increasingly large fraction of ship construction. They also seemed to always have an excuse for not making large, new capital investments: when the yards were busy, such investments were disruptive and made it difficult to get on with the business of building ships, and when they weren’t, there was no funding available to do so. Shipbuilders were often small, family-run businesses, sometimes by descendants of the original founders, and they were generally happy to simply carry on business as they always had. Broader ownership did not foster innovation either: firms distributed their profits as dividends rather than reinvesting them in the business, driving shipbuilders’ stock prices up more than those of any other manufacturing industry.


Moreover, British shipyards tended to be cramped, with little room for expansion, and British shipbuilders were fearful that any postwar boom in shipbuilding would (once again) be followed by a bust, requiring retrenchment and making investment in expanding facilities unwise. (One British shipbuilder predicted that “it will be a case of last in, first out, and that Britain’s policy of modernising her shipyards without any great expansion of their capacity will pay dividends in her future competition.”)

Thus, even as the world shipbuilding market boomed in the 1950s, UK shipbuilding output stayed roughly constant. Between 1947 and 1957, UK ship output rose by 18%, while worldwide output rose by over 300%. Countries like Germany, Sweden, and Japan picked up the slack. The UK lost its position as the world’s biggest shipbuilder by tonnage to Japan in 1956. By the end of the 1950s, Germany and Sweden had passed the UK in ship tonnage built for export, and by the end of the 1960s they were outproducing the UK in overall ship tonnage. Between 1950 and 1975, during one of the largest shipbuilding booms in history, the UK was the only major shipbuilding country not to increase its output at all.

As other countries expanded their output, adopted modern production methods, and built new, efficient shipyards, British shipbuilders found themselves increasingly uncompetitive. Their costs were higher than those of foreign shipbuilders, and their delivery times were longer. Shipbuilding had traditionally been done on a “cost plus” basis, where owners were charged some percentage above costs incurred when building the ship, but owners were increasingly requiring fixed price bidding. British shipbuilders, with their comparative lack of production planning and managerial control, struggled to adapt to the new reality. British shipowners, traditionally the primary customer of British shipyards, were increasingly buying their ships abroad, citing the UK’s high costs and long delivery times.

Attempts to remedy the situation

Investigations into potential problems in the shipbuilding industry started shortly into the post-war era. In 1948, following a British shipowner cancelling several orders with British yards due to “costs and delivery times,” the government stepped in to conduct analyses of the shipbuilding industry. One, produced by the Labour Party’s research arm, argued “that new methods of construction, utilising welding and prefabrication, required a complete reconstruction of the yards, and that this could be best achieved via Government finance and assistance…It seems inevitable that a prosperous shipbuilding industry will require heavy expenditure and it is very doubtful whether the industry is willing and/or able to undertake this.” The other, produced by the Shipbuilding Costs Committee (staffed largely by industry leaders), was “a very non-controversial report,” which provided “no useful recommendations on which action could be taken.” In light of the industry’s full postwar order books, the government opted not to take any action.

In the mid-1950s, the government examined the practices of British naval shipbuilding, and concluded that “there was a need for a wholesale modernisation of the shipyards in terms of layouts, plant and equipment particularly to increase the use of prefabrication.”

The return on capital employed and investment rates were criticised as poor and British costs were felt to be comparatively high in international terms. Restrictive practices were noted, but more worryingly there was a shortage of skilled labour in the industry, which was responsible for the British single-shift system, with consequently expensive overtime, compared with the double-shift system which was normal on the Continent and in Japan.

A 1957 report from the Working Party on the Transport of Oil From the Middle East similarly noted that the industry had been “sluggish in responding to the opportunities from expansion” and criticized its lack of investment (though it did note that some shipyards had begun some degree of modernization).

As the booming shipbuilding market turned at the end of the 1950s, these investigations grew more frequent and more worried. A 1959 Treasury report concluded the industry “is not competitive. It has not modernised its production methods and organisation so quickly or so thoroughly as its competitors. Its labour relations are poor, and its management, while improving, is not as good as it might be.” It was further “pessimistic over delivery dates and prices and concluded that the prospects of the industry becoming competitive could not be rated very high.”

This report was followed in 1960 by a Department of Scientific and Industrial Research report on the shipbuilding industry, which declared that the industry’s R&D efforts had been “woeful”:

…production control in the industry was primitive, the total effort devoted to research and development was insufficient, and almost no organised research had been applied to production and management problems with a view to improving the productivity of capital and labour and reducing costs.

That same year, the head of the UK’s Shipbuilding Advisory Committee staged a high-profile resignation, stating that the excuses of the industry to avoid investigating its problems were “so frustrating that to continue serving the industry as chairman would be fruitless.” In response to the resignation, in 1961, the Advisory Committee released a report on the shipbuilding industry. However, other than arguing that credit facilities (lending to shipowners to make purchasing ships easier) should be expanded, the report gave few strong recommendations.

The Shipbuilding Advisory Committee report was followed by another government-commissioned report on why British shipowners were buying from foreign yards; the report concluded that the major reasons were “price; price and delivery date; price and credit facilities; guaranteed delivery dates; and the reluctance of British builders to install foreign-built engines.”

As the industry continued to struggle in the 1960s, the reports piled up even higher. A 1962 report on productivity commissioned by the British Ship Research Association noted “the underdeveloped nature of managerial hierarchies in the industry as a serious weakness and recommended a more systematic approach to production control”. A report by the British Ship Exports Association on the Norwegian market (traditionally one of Britain’s strongest ship export markets) noted that “Norwegian customers... have voiced a series of complaints, the principal of which... is that in addition to regarding us as unreliable over delivery, many are now saying that we cannot be trusted to honour contracts.” The author of the report noted that:

…every reported speech by a British shipbuilder in the Norwegian press usually comprised a list of excuses for poor performance, ranging from official and unofficial stoppages, shortages of labour, failings on the part of subcontractors, modernisation schemes not producing the anticipated results, to recently completed contracts having entailed substantial losses. The impression thus gained by the Norwegians, according to Holt, was of an industry where the shipbuilders had no control or responsibility over problems, and worse, had no ideas as to how to address the problems. (British Shipbuilding and the State Since 1918, p. 147)

Accordingly, the UK’s share of Norwegian ship orders, its largest export market, fell from 48% in 1951 to 2.8% in 1965.


As reports documenting the UK’s shipbuilding troubles accumulated, things continued to look worse and worse for the shipbuilders. The market for ocean liners, once a major source of demand for British shipbuilders, was being threatened by the rise of commercial air travel. Transatlantic travel by air surpassed transatlantic ocean travel in 1957, and the ascendance of commercial jetliners further eroded the market. By 1969, passengers crossing the Atlantic by plane outnumbered maritime passengers 24 to 1. And the explosion of global trade in the 1950s and 60s was met not just with more ships, but with larger ships (such as Very Large Crude Carriers for transporting oil) which were cheaper to build and operate. The simple Liberty ship built in huge numbers by the US during the war was about 10,000 tons deadweight, but by the end of the 1950s Japan was building ships in excess of 100,000 tons deadweight.

Likewise, by the end of the 1960s Japanese shipyards were building drydocks with 400,000 tons capacity. By comparison, British shipyards, which had been built in an era of much smaller ships, had trouble accommodating these huge vessels, in part due to lack of space for expansion. (Some space-constrained British yards attempted to get around this by building large ships in two halves, which were then joined together in the water.) And as ships got larger, prefabricated construction methods got more advanced, utilizing massive gantry cranes to assemble ships out of huge pre-built blocks. Advances in prefabrication continued to drive down costs in the shipyards that implemented them. Between 1958 and 1964, the labor hours required to build a given amount of tonnage in a Japanese shipyard fell by 60%, and the amount of steel fell by 36%. Over the same period the UK’s share of global shipbuilding fell from 15% to 8%.

Average size and speed of tankers, via Harrison 1990.

Throughout this period, major advances in ship technology increasingly happened outside the UK. A history of UK shipbuilding noted that 13 of 15 major ship innovations introduced between 1800 and 1950 were first widely adopted in Britain. But of 14 major innovations introduced between 1950 and 1980, only three were first widely adopted in Britain.

Neither the government nor the British shipbuilding industry seemed willing to take bold steps to try to improve its fortunes. The Board of Trade stated in 1960 that there was little point to government intervention “until the prospects of the industry are very patently bad”. Despite the numerous government reports pointing out the industry’s deficiencies, by 1963 the only government action that had been taken was the aforementioned expansion of credit facilities to shipowners. British shipbuilders remained convinced that a decline in the market was just around the corner and that the expensive shipyards of their competitors would become millstones. And despite the shipbuilding market shifting towards larger, more standardized ships sold at fixed prices, UK builders tried to stick with their tried-and-true strategy of ships tailor-built to the needs of British shipowners. But as British shipowners, themselves a shrinking fraction of global shipping (the UK’s share of global shipping tonnage fell from 45% in 1900 to 16% in 1960), increasingly took their business to foreign shipyards, British shipbuilders found that “the protective skin that tradition and convention gave to the home market has been split beyond repair”. By 1962, the bankruptcy of many UK shipbuilders had finally become “an alarming reality rather than an impending possibility”.

It wasn’t until after the 1964 elections, with a new Labour government, that more serious actions to rescue the shipbuilding industry began to be taken (possibly because the “vast majority of shipbuilding yards lay within Labour parliamentary seats”). In early 1965, a British group led by the Minister of Trade visited several Japanese shipyards, which were found to have lower costs, shorter delivery times, and higher productivity than British yards. Following this visit, the government instigated yet another investigation into the British shipbuilding industry. The resulting report (known as the Geddes report, after Reay Geddes, the head of the committee that produced it) enumerated a long list of problems at UK shipbuilders, including high costs (20% more than competitors on average), long delivery times, late deliveries, out-of-date infrastructure, poor labor relations, and poor management. Moreover, even at prices 20% higher than their competitors, British shipyards struggled to be profitable, with the report noting that “many of the orders now in hand will not be remunerative to cover costs”.

British shipyards’ traditional production methods, which demanded little in the way of management, meant that yards had few supervisors. The supervisors they did have had often worked their way up from the shop floor and had never received any business or management training. As a result, production planning and estimates of labor required were poor. While this may have been acceptable when vessels were custom-produced on a cost-plus basis, it made yards ill-equipped for a market where fixed-price contracts were the norm. A history of British shipbuilding noted that “in an increasingly competitive industry British shipyards did not successfully perform the financial management tasks—marketing, budgeting and procurement—necessary to build vessels profitably.”

The Geddes report also noted that labor relations were abysmal, and strict adherence to labor demarcation rules (which themselves varied extensively from yard to yard) greatly hampered productivity. The labor demarcation problem was so severe that in some yards it literally took three different workers to change a lightbulb:

…a laborer (member of the Transport and General Workers Union) [to] carry the ladder to site, a rigger (member of the Amalgamated Society of Boilermakers, Shipwrights, Blacksmiths and Structural Workers Union) [to] erect it and place it in the proper position, and an electrician (member of the Electrical Trades Union) [to] actually remove the old bulb and screw in the new one. Production was often halted while waiting for a member of the appropriate union to arrive to perform the job reserved by agreement for them. (sunset 96)

The Geddes committee further argued that none of these issues could be fixed at the scale contemporary shipbuilders were operating at, and that a prerequisite for rescuing the industry was grouping existing shipbuilders together to give them the scale needed to be internationally competitive. The report recommended consolidation, and for a comparatively small amount of government financing (37.5 million pounds in grants and loans, and another 30 million in credit guarantees) to be awarded over the next four years to shipbuilders that consolidated and met a strict set of performance targets.

The post-Geddes industry

In 1967 the Shipbuilding Industry Act, implementing the Geddes Report’s recommendations, was passed, and over the next several years 27 major British shipbuilders were consolidated into 12 groups (though some of these “groups” were groups of just one). As shipyards consolidated, government financing began to flow: between 1967 and 1972 almost 160 million pounds (much more than the Geddes Report had recommended) were provided by the government to various shipbuilders, and the newly created Shipbuilding Industry Board recommended over a billion pounds’ worth of bank loan guarantees for British shipowners purchasing from British yards.

None of this helped stem the decline. By 1971, the government-backed British shipyards had achieved “no gains in competitiveness, no improvement in turnover, falling profitability and cash resources [that were] becoming rapidly inadequate.” Late deliveries continued to dog the shipbuilders: from 1967 to 1971 nearly 40% of British ships were delivered at least one month late, and nearly 10% were six months late. With fixed-price contracts, stiff penalties for late deliveries, and rapid inflation (over 20% from 1967 to 1971), late deliveries drastically eroded profitability, and much of the government funding actually went to writing off shipbuilders’ losses. Between 1967 and 1972 three major shipbuilders — Upper Clyde Shipbuilders (a post-Geddes amalgamation of several smaller builders), Cammell Laird, and Harland and Wolff (the company that built the Titanic, once the largest shipbuilder in the world) — were rescued from bankruptcy and became owned “wholly or in part by the British government”. In the early 1970s, the shipbuilding industry performed “even below the Geddes ‘worst case scenario’”.


The industry’s failure to improve seems to be partly due to a lack of urgency on the shipbuilders’ part. A combination of factors (the devaluation of the British pound, unexpectedly high enthusiasm for the credit offerings for British shipowners, the closure of the Suez Canal) created enough demand for British ships in the late 1960s that orderbooks swelled. While these orders were a small fraction of worldwide orders, the rise in demand was sufficient to dampen the urgency for reforming shipbuilding practices. In 1972, yet another report on the British shipbuilding industry was released, this one produced by Booz Allen Hamilton. It found that nearly every problem listed in the Geddes report had gotten worse:

This review of the U.K. shipbuilding industry is pessimistic about the general background situation, and critical of the industry in many areas. U.K. yards generally are under-capitalized and poorly managed; the industry has a poor reputation amongst its customers, particularly for delivery and labour relations; overseas competition has moved more rapidly to modernize and re-equip its facilities and is now better placed to face the forecast surplus of capacity which will exist for the remainder of the 1970’s.

Even companies that had invested in the infrastructure they needed to compete in the modern shipbuilding world found themselves struggling. Harland and Wolff, for instance, had been “reconfigured in the 1960s to build tankers of up to 1 million deadweight tons”:

…it should have thrived during the 1967-73 large tanker boom. Instead, due to the typical British shipbuilding problems of high costs, low productivity, poor labor relations, and late delivery, it lost money even with a full order book.

The problem of money-losing contracts became so severe that some British shipbuilders found themselves paying owners to cancel contracts they were unable to build profitably.

With seemingly no way of returning the industry to commercial profitability, and already operating struggling shipbuilders directly, the government moved to full nationalization. In 1974, a Labour government was elected with promises to nationalize the industry, which eventually took place in 1977. The Aircraft and Shipbuilding Industries Act of 1977 created a new company, British Shipbuilders, encompassing 97% of Britain’s commercial shipbuilding capacity.2

But even nationalization did nothing to rescue the industry. After peaking in 1975, the worldwide shipbuilding market collapsed in the wake of the 1973 energy crisis. The shipbuilding industries in countries like Japan became desperate for new orders, and recent, subsidized entrants like South Korea, Taiwan and Brazil were hungry as well. In the face of such fierce competition, the new British Shipbuilders company wilted. Despite taking virtually any order that it could get, even at loss-making prices, the UK’s shipbuilding industry continued its inexorable decline. Between 1975 and 1985, the UK’s shipbuilding output declined by nearly 90%, and its share of the world market fell from 3.6% to less than 1%. British Shipbuilders began re-privatization in 1983 with the passage of the British Shipbuilders Act, and over the next several years most of those newly privatized yards would close. In 2024, the UK produced just 0.01% of the commercial ship tonnage built worldwide that year. In 2022 and 2023, the percentage was 0.

Conclusion

In his book on the decline of the British shipbuilding industry, Edward Lorenz argues that while British shipbuilders precipitated their own decline, their decisions were essentially rational, the product of the constraints they were operating under at the time. A production system that leaned heavily on skilled, union-trained labor, minimized the need for expensive infrastructure and equipment, and carried little management overhead helped keep British costs low; the labor force could be scaled up and down depending on demand, and workers could easily move from yard to yard as the work required. This production system worked reasonably well for decades, and only truly began to unravel after WWII, when the shipbuilding market transformed technologically and transactionally, demanding much larger vessels, built with welded block construction and sold under fixed-price contracts.

While British shipbuilders could have responded by enthusiastically embracing the new methods, they had learned from a lifetime of doing business in a wildly fluctuating market that investments in expensive new shipbuilding infrastructure or high-overhead production methods were risky. Their strategy of rapidly scaling their labor force depending on day-to-day needs had bred a deep distrust between management and labor, and created a strict demarcation system that was difficult to dislodge. Decades of urban development in port cities had made physical expansion of shipyards difficult. Uncertainty about whether transformation was truly needed, and the certainty of costly disruptions should they try, resulted in British shipbuilders sticking with their existing production methods as the rest of the world passed them by.

Moreover, globally the economic pressure to shift shipbuilding towards locations with lower labor costs was immense. Sweden modernized its operations far more effectively than the UK, and was one of the most efficient, capable shipbuilding countries in the world in the post-war era, but this didn’t stop the Swedish shipbuilding industry from being hollowed out in the face of competition from Asian producers. Similarly, Japan’s skill in shipbuilding hasn’t stopped it from losing ground to China in recent years.

There’s a telling bit in “Crossing the Bar”, a book about the nationalized British Shipbuilders company. At an international shipbuilders conference in London in 1983, a UK shipbuilder asked a Korean delegate why they were keeping their prices so low. The delegate responded that they weren’t worried about UK or European competition, or even Japanese competition: they were worried about China. Several decades on, Korea is now losing ground in the shipbuilding market to China too. So it’s not clear that a much more vigorous British shipbuilding industry would have been all that much more successful in resisting its ultimate decline.

Nevertheless, the lack of motivation (rational or otherwise) of British shipbuilders to modernize their operations certainly did not help. Perhaps they would have ultimately lost out to low-cost Korean and Japanese builders anyway, but had they shared Japan’s “burning zeal” to make their industry competitive, they might have kept the wind in their sails longer.

1

The DSIR report was initially leaked to the media, and the final published version was stripped of much of its more trenchant criticism.

2

Harland and Wolff, despite then being owned by the British government, was left as an independent entity due to political concerns regarding Northern Ireland.


October 25, 2025

Reading List 10/25/25

Overhead view of Barcelona, via YIMBYLAND.

Welcome to the reading list, a weekly roundup of news and links related to buildings, infrastructure, and industrial technology. This week we look at jet engine-powered data centers, Brightline train deaths, cracks in a super thin skyscraper, a Chinese particle accelerator, and more. Roughly 2/3rds of the reading list is paywalled, so for full access become a paid subscriber.

Some housekeeping items this week:

No essay this week, but a longer piece on how Britain lost its shipbuilding industry will be out next week.

My book’s Amazon listing is still goofed up (though I’m hopeful it will get fixed soon) — it’s still only available from third-party sellers with very long shipping times. I’d recommend ordering from Barnes and Noble or Bookshop in the meantime.

The book had a nice review in the Wall Street Journal.

Jet engines for data centers

Gas turbine power plants and jet engines share a lot of similarities, to the point where some gas turbine power plants are “aeroderivatives” based on the design of a jet engine. Because the backlog for new gas turbine power plants is so long, and because some data centers want their power now, some data center operators are turning to repurposed jet engines to supply power. Via Data Center Dynamics:


The PE6000 gas turbines are made through retrofitting old CF6-80C2 jet engine cores and matching them with newly manufactured aero-derivative parts made by ProEnergy or its partners.


To make jet engines suitable for use as power generators, they are modified with an expanded turbine section to convert engine thrust into shaft power, a series of struts and supports to mount them on a concrete deck or steel frame, and new controls. Following assembly, the engines can supply 48MW of capacity.


“We have sold 21 gas turbines for two data center projects amounting to more than 1GW,” said Landon Tessmer, VP of commercial operations at ProEnergy. “Both projects are expected to provide bridging power for five to seven years, which is when they expect to have grid interconnection and no longer need permanent behind-the-meter generation.”
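(As a sanity check, those figures are consistent: 21 turbines at 48 MW each comes to 1,008 MW, just over the quoted 1GW for the two projects.)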


Powerships

In other “unusual sources of power” news, this week I learned about powerships, which are basically floating power plants built into the hull of a ship. One such ship, provided by the firm Karpowership, supplied 40% of the electricity for the West African country of The Gambia from 2018 until earlier this year:

In February 2018, Karpowership signed a 2-year contract with National Water and Electricity Supply Company Ltd. of the Gambia to deploy a Powership of 36 MW. In 2020 the contract was extended for 2 more years and in May 2022 extended for another 3 years. Karpowership has been operational in the Gambia since 2018 and supplying 40% of the Gambia’s total electricity need. In 2017, a year before Karpowership began its operations, only 56% of the population had access to electricity. Today, that figure has increased to 65%.

The ship formerly powering The Gambia, via Karpowership.

Brightline train deaths

Related to our recent discussions of pedestrians getting killed by cars, the Brightline train in Florida has apparently become notorious for killing pedestrians. As with pedestrian deaths, the reasons so many people are killed by the train are unclear. From the Atlantic:


What the Brightline is best known for is not that it reflects the gleam of the future but the fact that it keeps hitting people. According to Federal Railroad Administration data, the Brightline has been involved in at least 185 fatalities, 148 of which were believed not to be suicides, since it began operating, in December 2017. Last year, the train hit and killed 41 people—none of whom, as best as authorities could determine, was attempting to harm themselves. By comparison, the Long Island Rail Road, the busiest commuter line in the country, hit and killed six people last year while running 947 trains a day. Brightline was running 32.


In January 2023, the National Transportation Safety Board found that the Brightline’s accident rate per million miles operated from 2018 to 2021 was more than double that of the next-highest—43.8 for the Brightline and 18.4 for the Metra commuter train in Chicago. This summer, the Miami Herald and a Florida NPR station published an investigation showing that someone is killed by the train, on average, once every 13 days.
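A crude way to put the Atlantic’s numbers on a common scale: 41 deaths across 32 daily trains is roughly 1.3 deaths per daily train for Brightline, versus about 0.006 (six across 947) for the Long Island Rail Road, a gap of roughly 200x, though this ignores differences in route length, speed, and the number of grade crossings.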


Battery storage and rolling blackouts

We’ve talked a few times about the benefits that battery storage can bring to electrical grids. By storing excess electricity, batteries act as a buffer, making it easier to match supply and demand and reducing the need for expensive “peaker” power plants. We previously noted that the rise of battery storage has been credited with reducing the risk of power outages in Texas:

In 2023, Texas’ ERCOT issued 11 conservation calls (requests for consumers to reduce their use of electricity), including 7 during late August, to avoid reliability problems amidst high summer temperatures. But in 2024 it issued no conservation calls during the summer. In part this was due to the rapid increase in battery storage, which rose by 4 gigawatts (roughly 50%) between January and June 2024.

It seems like something similar has happened in California. The Los Angeles Times has a good piece about the rise of battery storage in California and the simultaneous decline in rolling blackouts:


For decades, rolling blackouts and urgent calls for energy conservation were part of life in California — a reluctant summer ritual almost as reliable as the heat waves that drove them. But the state has undergone a quiet shift in recent years, and the California Independent System Operator hasn’t issued a single one of those emergency pleas, known as Flex Alerts, since 2022.


Experts and officials say the Golden State has reached a turning point, reflecting years of investment in making its electrical grid stronger, cleaner and more dependable. Much of that is new battery energy storage, which captures and stores electricity for later use.


In fact, batteries have been transformative for California, state officials say. In late afternoon, when the sun stops hitting solar panels and people are home using electricity, batteries now push stored solar energy onto the grid.


California has invested heavily in the technology, helping it mature and get cheaper in recent years. Battery storage in the state has grown more than 3,000% in six years — from 500 megawatts in 2020 to more than 15,700 megawatts today.
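For a concrete sense of the mechanism, here is a minimal sketch of that buffering in Python, with invented numbers (an illustration of the principle, not actual CAISO data or real dispatch logic):

# Toy model of battery "buffering": charge on midday solar surplus,
# discharge into the evening peak. All numbers are invented.
solar  = [0, 1, 6, 10, 12, 11, 6, 1, 0, 0]    # GW available in each period
demand = [7, 7, 8, 9, 9, 10, 12, 13, 11, 8]   # GW needed in each period

capacity = 10.0   # GWh of storage (illustrative)
stored = 0.0
peaker_gwh = 0.0  # shortfall that peaker plants would have to cover

for s, d in zip(solar, demand):
    if s > d:
        stored = min(capacity, stored + (s - d))   # charge on excess solar
    else:
        discharge = min(stored, d - s)             # cover the gap from storage
        stored -= discharge
        peaker_gwh += (d - s) - discharge          # whatever storage can't cover

print(f"Demand left for peakers: {peaker_gwh:.1f} GWh")

Setting capacity to zero and rerunning shows how much of the evening peak would otherwise fall back on peaker plants.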


Read more


October 18, 2025

Reading List 10/18/25

Draugen oil platforms under construction.

Welcome to the reading list, a weekly roundup of news and links related to buildings, infrastructure, and industrial technology. This week we look at a North Korean construction company, data center popularity, why robot dexterity is hard, a map of US solar panels, and more. Roughly two thirds of the reading list is paywalled, so for full access become a paid subscriber.

Some housekeeping items this week:

My book was finally released this week! A few book-related items:

The book was temporarily out of stock on Amazon for a few days, possibly because Amazon’s algorithm underestimated demand. It’s back in stock now, but only available from third-party sellers with long delivery times. Stripe Press is working on fixing this. It’s available with normal delivery times from other sellers.

The book was an Amazon Editor’s pick for “Best Nonfiction”.

I was on TBPN talking about the book (I come on at around 2:35:00).

Leah Libresco has a review of the book on Commonplace.

Another excerpt from the book was published on CapX.

No Amazon reviews yet, so if you’ve received a copy and read it, drop a review.

I was on Statecraft with Alex Armlovich and Will Poff-Webster talking about the ROAD to Housing Act.

North Korean statue building

I typically think of North Korea as a country that’s almost completely cut off from the global economy, in part due to the large number of sanctions against it, but apparently it has a construction firm, Mansudae Overseas Projects, that builds huge North Korean-style statues all over the world. Via Wikipedia:

As of August 2011, it had earned an estimated US$160 million overseas building monuments and memorials. As of 2015, Mansudae projects have been built in 17 countries: Angola, Algeria, Benin, Botswana, Cambodia, Chad, Democratic Republic of Congo, Egypt, Equatorial Guinea, Ethiopia, Germany, Malaysia, Mali, Mozambique, Namibia, Senegal, Togo and Zimbabwe. The company uses North Korean artists, engineers, and construction workers.

African Renaissance Monument in Dakar, Senegal.

Data centers are unpopular

We’ve noted before that data centers, which historically were treated as a neutral to positive for local communities (since they contributed tax revenue without adding much demand for local services), are now increasingly opposed by local residents. Now it seems like politicians on both sides of the aisle are starting to notice. Via Semafor:


GAINESVILLE, Va. — On Friday night, dueling candidates for a board of supervisors seat in this suburban county found a cause that united them: banning new data centers.


“I think we should, personally, block all future data centers,” said Patrick Harders, the Republican running for an open seat on the Prince William County board. George Stewart, his Democratic opponent, agreed that “the crushing and overwhelming weight of data centers” was a crisis, with massive companies “having us, as residents, pay for their energy.”


As electricity bills rise, a growing number of US candidates in both parties are pointing to the high energy costs of data centers — booming thanks to tech companies’ AI investments — as the culprit. While the issue isn’t yet a flashpoint in statewide races, it’s already an overwhelming source of debate in local ones, especially in Virginia.


Similarly, a survey run by The Argument found that “70% of respondents were concerned, and 35% were ‘very concerned’ about the impact of AI on energy costs.”

AI water use

Also on the subject of data centers, we’ve previously discussed the amount of water used by data centers in my essay on US water consumption, and in a follow-up essay correcting an error in the original piece. Even with the correction (which substantially increased the estimate for how much water data centers use), my conclusion is that while data center and AI water use numbers seem large in absolute terms (because “millions of gallons” always sounds like a lot), they’re very small when compared to other large-scale industrial uses. And because industrial water use goes towards producing products that we end up buying, there’s a lot of water use “baked in” to the various goods we consume that doesn’t get much attention. On his substack, Andy Masley breaks this down:


Have you ever worried about how much water things you did online used before AI? Probably not, because data centers use barely any water compared to most other things we do. Even manufacturing most regular objects requires lots of water. Here’s a list of common objects you might own, and how many chatbot prompts’ worth of water they used to make (all from this list, and using the onsite + offsite water value):


Leather Shoes - 4,000,000 prompts’ worth of water


Smartphone - 6,400,000 prompts


Jeans - 5,400,000 prompts


T-shirt - 1,300,000 prompts


A single piece of paper - 2550 prompts


A 400 page book - 1,000,000 prompts


If you want to send 2500 ChatGPT prompts and feel bad about it, you can simply not buy a single additional piece of paper. If you want to save a lifetime supply’s worth of chatbot prompts, just don’t buy a single additional pair of jeans.
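Masley’s ratios are easy to sanity-check. Here’s a sketch in Python; the roughly 2 mL of onsite-plus-offsite water per prompt is my assumption backed out from his figures, and the item footprints in liters are common water-footprint estimates rather than values from his post:

# Back-of-the-envelope: convert item water footprints into
# "chatbot prompts' worth" of water.
# ASSUMPTION: ~2 mL (0.002 L) of onsite + offsite water per prompt.
# Item footprints below are common estimates, not from Masley's post.
WATER_PER_PROMPT_L = 0.002

item_water_liters = {
    "leather shoes": 8_000,
    "smartphone": 12_800,
    "jeans": 10_800,
    "t-shirt": 2_600,
    "sheet of paper": 5,
}

for item, liters in item_water_liters.items():
    prompts = liters / WATER_PER_PROMPT_L
    print(f"{item}: ~{prompts:,.0f} prompts' worth of water")

Under those assumptions the output reproduces the list above to within rounding (8,000 liters for a pair of leather shoes works out to 4,000,000 prompts).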


Why robot dexterity is hard

We’ve previously talked about difficulties with robot dexterity, and how there’s a sort of Moravec’s Paradox at work with robot demonstrations: we see lots of demos of robots doing things that look difficult, like dancing or kung fu, but fewer impressive demonstrations of robots doing simpler, object manipulation tasks. On his substack, Bryson Jones, the “co-founder and CEO of Adjoint, a company focused on building autonomous robots for skilled labor abundance”, explains why robotic manipulation is a harder problem than locomotion: while locomotion has properties that make it amenable to large-scale training in simulation, manipulation generally doesn’t.

Manipulation poses a more sobering challenge:


No compact reward functions. The “cost function” for inserting a screw or cooking an omelet is hard to specify. Rewards often need careful shaping or human priors, leading to brittleness and unexpected outcomes from reward-hacking.


Rich sensing requirements. Unlike locomotion, manipulation usually requires vision and tactile feedback to estimate object shape, pose, affordances, and contacts. Tactile sensing hardware is still immature. The example I like to give people is: “imagine dipping your hand in anesthetic and trying to pick your phone up… it’s basically impossible”


Discrete modes and long horizons introduce unique complexity. Manipulation often involves discrete modes for task completion: grasping, lifting, pushing, where each object can behave differently. Contact dynamics are messy, occlusions are common, and the precision bar is much higher than for locomotion.


Few emergent behaviors. Unlike locomotion, manipulation doesn’t “discover” elegant solutions on its own (so far). Without demonstrations or heavy engineering, policies struggle to converge on useful strategies.


This is why we haven’t been able to just “zero-shot” transfer all of the methods that accelerated locomotion progress in the past 5 years into manipulation.
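To make the “no compact reward functions” point concrete, here is an illustrative sketch (mine, not from Jones’s post; every term and weight is invented). A workable locomotion reward fits on one line, while a manipulation reward for something like screw insertion becomes a pile of hand-shaped terms, each one a surface for reward hacking:

import numpy as np

# Locomotion: a compact reward. Reward forward progress, lightly
# penalize energy use; this is close to what sim-trained walking
# policies actually optimize.
def locomotion_reward(forward_velocity, joint_torques):
    return forward_velocity - 0.001 * np.sum(np.square(joint_torques))

# Manipulation: no compact analogue. Screw insertion needs many shaped
# terms (all weights invented for illustration), and a policy can
# exploit any one of them -- e.g., hovering near the hole to farm the
# distance term without ever inserting the screw.
def screw_insertion_reward(screw_pos, hole_pos, screw_axis, hole_axis,
                           insertion_depth, contact_force):
    distance_term = -np.linalg.norm(screw_pos - hole_pos)    # approach the hole
    alignment_term = float(np.dot(screw_axis, hole_axis))    # line up the axes
    depth_term = 2.0 * insertion_depth                       # actually insert
    force_penalty = -0.1 * max(0.0, contact_force - 5.0)     # don't jam or crush
    return distance_term + 0.5 * alignment_term + depth_term + force_penalty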


Read more
