Erik Seligman's Blog
December 27, 2023
288: One Stone To Rule Them All
Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school. Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host. And now, on to the math.
With the end of 2023 approaching, I spent some time browsing the internet for interesting math news from this year, and realized there was one big unexpected breakthrough by an amateur that really deserves discussion: the aperiodic monotile. While that term sounds exotic, it actually covers a very simple concept. We’ve talked about planar tilings before: basically, this just means a shape of tile that can cover an infinite version of your bathroom floor. Your current bathroom probably has a bunch of square or hexagonal tiles, which completely fill the available space. Well, maybe there are a few practical issues at the edges, but if you imagine the floor to be infinite, those shapes could fill it forever. If you want to fill your floor with a set of more interesting shapes, you could imagine drawing arbitrary squiggly lines in a hexagon to divide it into several smaller tiles of arbitrary irregularity, and thus these smaller tiles would cover your floor as well. But all these coverings are periodic: they consist of an infinite repetition of some small pattern at known intervals. In contrast, an aperiodic tiling would still cover your floor with copies of a few basic shapes, but the pattern would not show this kind of regular repetition.
When the idea first came up in the early 1960s, logician Hao Wang hypothesized that no aperiodic tiling existed. But he was proven wrong within a few years by his student Robert Berger, who discovered a set of tiles that would indeed cover the plane aperiodically. His initial set was very complicated, consisting of over 20,000 tiles. But this led to a surge of interest in the topic, and by 1974 Roger Penrose had published a simple example of a pair of 4-sided tiles, known as the “kite” and “dart”, that could cover the plane aperiodically with just their two shapes. These were shown to be examples of an infinite variety of 2-tile sets that enabled such tilings. And that is essentially where the problem stood, for almost 50 years, with nobody knowing whether a single tile could cover a plane aperiodically. Bizarrely, during this long wait, chemists discovered that this mathematical game actually had an application in the real world, where these aperiodic tilings formed the basis of “quasicrystals”, real crystalline structures. Quasicrystals have been used for applications like nonstick cookware and reinforcing steel, and led to the 2011 Nobel Prize in Chemistry awarded to Israeli scientist Dan Shechtman.
Throughout this half century, many mathematicians wondered if there might be a single tile, rather than a set, that could cover the plane aperiodically. The problem was nicknamed the “Einstein” problem, not because of any relation to the famous physicist, but as a pun, with the words “Ein Stein” being German for “One Stone”. In 2010 Socolar and Taylor got close, publishing a single tile that could solve this problem— but their tile was unconnected, consisting of a central bumpy hexagon and six floating double-square shapes that orbited it in known positions. A nice breakthrough, but arguably not a single tile, depending on how you define the concept. Other researchers found a different solution that was contiguous, but only worked if some small overlaps were allowed; also an interesting result, but not really a satisfying solution to the Einstein problem.
The real solution finally came earlier this year, published by Smith, Myers, Kaplan, and Goodman-Strauss: a single tile that would actually cover the plane without regularly repeating a pattern. It was formed by stitching together eight kite shapes into something that looked like an odd-shaped hat: not trivial, but surprisingly simple for something that had evaded discovery for almost fifty years. They also proved that this was just a representative of an infinite family of shapes with this property. You can see it illustrated in some of the articles linked in the show notes at mathmutation.com.
The most surprising thing about this discovery was that it came from an amateur, not from the efforts of a professional mathematician. David Smith is a retired print technician who enjoys jigsaw puzzles and playing with shapes. He was using a commonly available software package called the PolyForm puzzle solver, seeing what kind of interesting patterns he could make with different shaped tiles. When he saw that the patterns with his experimental “hat” tile looked especially unusual, he decided to cut a bunch of copies of the tile out of cardboard and start experimenting by hand. Convincing himself that he had found something significant, he contacted a professor he knew, Craig Kaplan of the University of Waterloo. Together, they investigated further, and after Kaplan recruited two more colleagues to assist with the proof, they confirmed that Smith really had, after all these years, found an Einstein tile.
As a result, if you’ve been thinking of remodeling your bathroom, there is now a nice new floor decoration option you should seriously consider.
And this has been your math mutation for today.
References:
https://cs.uwaterloo.ca/~csk/hat/
https://aperiodical.com/2023/03/an-aperiodic-monotile-exists/
https://hedraweb.wordpress.com/2023/03/23/its-a-shape-jim-but-not-as-we-know-it/
https://en.wikipedia.org/wiki/Aperiodic_tiling
https://en.wikipedia.org/wiki/Penrose_tiling
https://en.wikipedia.org/wiki/Quasicrystal
https://www.quantamagazine.org/hobbyist-finds-maths-elusive-einstein-tile-20230404/
https://maxwelldemon.com/2010/04/01/socolar_taylor_aperiodic_tile/
October 9, 2023
287: The Grim State of Modern Pizza
Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school. Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host. And now, on to the math.
You’ve probably read or heard at some point about the “replication crisis”, and the related epidemic of scientific fraud, discovered over the past few decades. Researchers at many institutions and universities have been accused of modifying or making up data to support their desired results, after detailed analysis determined that the reported numbers were inconsistent in subtle ways. Or maybe you haven’t heard about this— the major media have been sadly deficient in paying attention to these stories. Most amusingly, earlier this year, such allegations were made against famous Harvard professor Francesca Gino, who has written endless papers on honesty and ethics.
Usually these allegations come from examining past papers and their associated datasets, and performing various statistical tests to figure out if the numbers have a very low probability of being reasonable. For example, in an earlier podcast we discussed Benford’s Law, a subtle property of leading digits in large datasets, which has been known for many years now. But all these complex tests have overlooked a basic property of data that, in hindsight, seems so simple a grade-school child could have invented it. Finally published in 2016 by Brown and Heathers, this is known as the “granularity-related inconsistency of means” test, or GRIM test for short.
Here’s the basic idea behind the GRIM test. If you are averaging a bunch of numbers, there are only certain values possible in the final digits, based on the value you are dividing into the sum of the numbers. For example, suppose I tell you I asked 10 people to rate Math Mutation on a scale of 1-100, with only whole number values allowed, no decimals. Then I report the average result as “95.337”, indicating the incredible public appreciation of my podcast. Sounds great, doesn’t it?
But if you think about it, something is fishy here. I supposedly got some integer total from those 10 people, and divided it by 10, and got 95.337. Exactly what is the integer you can divide by 10 to get 95.337? Of course there is none— there should be at most one digit past the decimal when you divide by 10! For other numbers, there are wider selections of decimals possible; but in general, if you know you got a bunch of whole numbers and divided by a specific whole number, you can determine the possible averages. That’s the GRIM test— checking if the digits in an average (or similar calculation) are consistent with the claimed data. What’s really cool about this test is that, unlike the many statistical tests that check for low probabilities of given results, the GRIM test is absolute: if a paper fails it, there’s a 100% chance that its reported numbers are inconsistent.
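The check described above is easy to automate. Here is a minimal sketch in Python; the function name and the rounding convention are my own illustrative choices, not part of the published test:

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM check: could reported_mean be the average of n whole numbers?

    Any true mean of n integers is (some integer total) / n, so we
    reconstruct the nearest candidate total and see whether its mean,
    rounded to the reported precision, reproduces the reported value.
    """
    total = round(reported_mean * n)  # nearest integer total
    return round(total / n, decimals) == round(reported_mean, decimals)

# The episode's example: 10 raters, reported mean 95.337.
print(grim_consistent(95.337, n=10, decimals=3))  # False: no integer total works
# A consistent report: 10 raters, mean 95.3 (total of 953).
print(grim_consistent(95.3, n=10, decimals=1))    # True
```

Note that this absolute yes/no answer is exactly what distinguishes GRIM from probabilistic tests like Benford’s Law: a failure means the numbers cannot possibly be right as described.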
Now you would think this issue is so obvious that nobody would be dumb enough to publish results that fail the GRIM test. But there you would be wrong: when they first published about this test, Brown and Heathers applied it to 71 recent papers from major psychology journals, and 36 showed at least one GRIM failure. The GRIM test also played a major role in exposing problems with the famous “pizza studies” at Cornell’s Food and Brand Lab, which claimed to discover surprising relationships between variables such as the price of pizza, size of slices, male or female accompaniment, and the amount eaten. Sounds like a silly topic, but this research had real-world effects, leading to lab director Brian Wansink’s appointment to major USDA committees and helping to shape US military “healthy eating” programs. Wansink ended up retracting 15 papers, though insisting that all the issues were honest mistakes or sloppiness rather than fraud. Tragically, humanity then had to revert to our primitive 20th-century understanding of the nature of pizza consumption.
Brown and Heathers are careful to point out that a GRIM test failure doesn’t necessarily indicate fraud. Perhaps the description of research methods in a particular paper ignores some detail, like a certain number of participants being disqualified for some legitimate reason and reducing the actual divisor, that would change the GRIM analysis. In other cases the authors have offered excuses like mistakes by low-paid assistants, simple accidents with the notes, typos, or other similar boo-boos short of intentional fraud. But the whole point of scientific papers is to convincingly describe the results in a way that others could reproduce them— so I don’t think these explanations fully let the authors off the hook. And these tests don’t even cover the many cases of uncooperative authors who refuse to send researchers like Brown and Heathers their original data. Thus it seems clear that the large number of GRIM failures, and of papers retracted as a result of this and similar tests, indicates a serious problem with the way research is conducted, published, and rewarded in modern academia.
And this has been your math mutation for today.
References:
https://en.wikipedia.org/wiki/GRIM_test
https://jamesheathers.medium.com/the-grim-test-a-method-for-evaluating-published-research-9a4e5f05e870
https://jamesheathers.medium.com/the-grim-test-further-points-follow-ups-and-future-directions-afd55ff67bb0#.vmgjvdvkf
https://www.npr.org/2023/06/26/1184289296/harvard-professor-dishonesty-francesca-gino
https://en.wikipedia.org/wiki/Benford%27s_law
https://www.vox.com/science-and-health/2018/9/19/17879102/brian-wansink-cornell-food-brand-lab-retractions-jama
August 27, 2023
286: Adventures In Translation
Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school. Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host. And now, on to the math.
You may recall that in several earlier podcasts, 252 and 125, I discussed the famous math writer Douglas Hofstadter’s surprising venture into translating poetry from French and Russian into English. Hofstadter is best known for his popular books that explore the boundaries of math and cognitive science, such as “Godel Escher Bach” and “Metamagical Themas”. I found his discussions of translation challenges fascinating: you would think that translation should be a simple mathematical mapping, a 1-1 correspondence of words from one language into another, but it’s actually much more than that. Aside from different typical structures and speech patterns between languages, every word sits in a cloud of miscellaneous connotations and relationships with other concepts, so you need to really think about the right way to convey the author’s original intentions. This is why tools like Google Translate can give you the general idea of what a passage says, but very rarely create a translated sentence that sounds real and natural.
After reading Hofstadter’s description of his translation work, I thought it might be interesting to translate something myself, though no obvious candidates came to mind. But in the past few years, I became acquainted with a recently escaped Cuban dissident, Nelson Rodriguez Chartrand, who had just published a memoir in Spanish. Curious to read this memoir, and with no English translation having been planned by the author, I volunteered to take this on myself. Now I’ll be the first to admit I’m not fluent in Spanish, though I spent four years studying it in high school, but I had the advantage of direct access to the author to help clarify areas of confusion, as well as the vast general resources of the Internet. This also was probably a much easier translation challenge than Hofstadter’s in general, since this memoir was in prose, so no need to worry about things like rhyme and meter. I also had some decent knowledge of context, having read numerous memoirs by emigres from Communist countries in the past, as well as having interviewed several of them, including Nelson. Thus, I went ahead and began translating.
Now, some of you might think this is trivially easy these days due to Google Translate, but as I mentioned before, that tool does not create very good text on its own. You may recall the popular amusement when it first came out, where you translate a small passage across several languages and back to English, and chuckle at how ridiculous it ends up looking. It is useful, however, as a first step: as I approached each paragraph, I used that tool to create a “gloss”, an initial awkward translation to use as a starting point. From there, I would try to understand the core concepts being expressed, and try to rewrite the sentences in more natural sounding English. I would figure this out with a combination of reviewing the original Spanish text, researching some alternate word translations online, and consulting with Nelson in the harder cases. The most entertaining challenges were the cases where the initial version simply didn’t make sense at all.
One simple example was a children’s cheer, “Fidel, Fidel, que tiene Fidel, que los imperialistas no pueden con él”, which Google literally translates as “Fidel, Fidel, that Fidel has, that the imperialists cannot with him”. That doesn’t make much sense initially. After consulting a few sources, I settled on the translation “Fidel, Fidel, what Fidel has, the imperialists cannot overcome”, which probably expresses the core intention a bit better. It could also be a question, asking “Fidel, Fidel, what does Fidel have, that the imperialists cannot overcome?” I toyed with the idea of taking more liberties and trying to make a rhyming chant like the original Spanish, something like “Fidel, Fidel, what Fidel has, makes the imperialist look like an ass”, similar to some of the liberties Hofstadter describes in his poetry translation efforts. But while that might make it a more effective chant, I was hesitant about making the translation’s tone a bit too lighthearted.
My favorite translation challenges were the ones that made me laugh out loud when first reading the Google translated version. For example, when describing what he liked to do in the evenings as a child, Nelson wrote a phrase that Google translated as “watching the American dolls”. Was life in Cuba really so boring that people enjoyed sitting and staring at dolls for hours? This also conjured up images of those Chucky horror movies— maybe he was trying to make sure the Yankee imperialist dolls didn’t come to life and start chasing poor Communists with a knife? Things in Cuba could be worse than I imagined. As you would expect, with some help from the author, I eventually figured out what he was trying to say— he was talking about watching cartoons on TV.
So, in the end, was my translation any good? Nelson did a basic check by Google Translating each chapter back to Spanish after I delivered the draft— of course not very good Spanish, but at least sufficient to confirm that I got the general idea. While I’m sure a professional Spanish-fluent translator would have done better, the most important goal was to make Nelson’s story available in English, and I’m confident we at least achieved that. We were able to get the translation published, with some aid from a small foundation called the Liberty Sentinels Fund, and it is now available at Amazon and other online booksellers. You can order it using the link in the show notes at mathmutation.com or just search for it on Amazon, and judge my translation for yourself. The book is called “The Revolution of Promises”, by Nelson Rodriguez Chartrand. If you like it, a good review on Amazon would also be helpful.
And this has been your math mutation for today.
References:
https://en.wikipedia.org/wiki/Douglas_Hofstadter
https://www.amazon.com/Revolution-Promises-Reflections-Cuban-Exile-ebook/dp/B0CG11SYWK/ref=sr_1_1
June 29, 2023
285: Believe My Proofs Or Else
Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school. Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host. And now, on to the math.
Before I start, I’d like to thank listener ‘katenmkate’ for posting an updated review of Math Mutation on Apple Podcasts. You’ve probably observed that the pace of new episodes has been slowing a bit, but seeing a nice review always helps to get me motivated!
Recently I saw an amusing XKCD cartoon online that reminded me of a term I hadn’t heard in a while, “proof by intimidation”. In the cartoon, a teacher explains, “The Axiom of Choice allows you to select one element from each set in a collection…. And have it executed as an example to the others.” The caption underneath reads “My math teacher was a big believer in proof by intimidation.” A nicely absurd joke— wouldn’t it be great if we could just intimidate the numbers and variables into making our proofs work out? “Proof by intimidation” is a real term though. To me, it conjures up an image of Tony Soprano knocking at your door, baseball bat in hand, telling you to deliver your proofs or face an unfortunate accident. But according to the Wikipedia page, the term arose after some lectures in the 1930s by a mathematician named William Feller: “He took umbrage when someone interrupted his lecturing by pointing out some glaring mistake. He became red in the face and raised his voice, often to full shouting range. It was reported that on occasion he had asked the objector to leave the classroom.”
These accounts certainly match the image that the term initially tends to conjure in our brains, but the real meaning is slightly less extreme. As Wikipedia explains it, proof by intimidation “is a jocular phrase used mainly in mathematics to refer to a specific form of hand-waving, whereby one attempts to advance an argument by marking it as obvious or trivial, or by giving an argument loaded with jargon and obscure results.” For example, while skipping a difficult step in proving some complex theorem, you might say, “You know the Zorac Theorem of Hyperbolic Manifold Theory, right?” Chances are that the listener doesn’t know that theorem, and is too embarrassed to admit it, so your missing argument never gets brought up and exposed.
As you would expect, that type of argument certainly makes a proof invalid in many cases. But there is one interesting wrinkle that most of the online explanations skip over. Often, stating that something is obvious, or trivially follows from well-known results, is a very important part of a valid mathematical argument or paper. It simply wouldn’t be feasible to publish modern math, science, or engineering results (or at least, to publish them in an article of reasonable length) if you could not use the building blocks provided by past researchers without re-creating them.
I remember when I started my first advanced math class as a freshman in college, the TA gave us some guidelines for what he expected in our homework, and one of them was that “This is obvious” or “This trivially follows” ARE perfectly acceptable— as long as they are accurate. Apparently his burdens of grading homework were significantly complicated in the past by students who felt they had to justify the foundations of algebra in every answer. For example, if for some step in a proof you need to say, “Let y be a prime number greater than x”, you don’t need to embed a copy of Euclid’s proof that there are arbitrarily large prime numbers. However, encouraging students to use the word “obvious” in our homework did leave him open to bluff attempts through proof by intimidation— he had to keep on his toes to make sure we really were only using it for trivial steps, rather than actually cutting larger corners in our proofs. In one assignment that I was a bit rushed for, I actually did say “And obviously, …” while skipping over the hardest part of a problem. The TA, giving me 0 points, commented, “If this was really obvious, we wouldn’t have bothered to assign the problem in your homework!”
Proof by intimidation is just one representative of a family of related logical fallacies. A humorous list by someone named Dana Angluin has been circulating the net for a while, with methods including “proof by vigorous handwaving”, “proof by cumbersome notation”, “proof by exhaustion” (meaning making the reader tired, not the mathematically valid proof by exhaustion method), “proof by obfuscation”, “proof by eminent authority”, “proof by importance”, “proof by appeal to intuition”, “proof by reduction to the wrong problem”, and of course the classic “proof by reference to inaccessible literature”. As I read these, I think a lot of them seem to be common these days outside the domain of mathematics…. Though since this podcast isn’t focused on theology, media or politics, I’ll stay out of that quagmire.
And this has been your math mutation for today.
References:
https://mitadmissions.org/blogs/entry/mathematics-for-computer-science-top-10-proof-techniques-not-allowed/
https://proftomcrick.com/2011/11/25/proof-by-intimidation/
https://users.cs.northwestern.edu/~riesbeck/proofs.html
https://en.wikipedia.org/wiki/Proof_by_intimidation
April 30, 2023
284: Don't Square Numbers, Triangle Them
Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school. Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host. And now, on to the math.
As I mentioned in the last episode, recently I read an unusual book, “A Fuller Explanation”, that attempts to explain the mathematical philosophy and ideas of Buckminster Fuller. You probably know Fuller’s name due to his popularization of geodesic domes, those sphere-like structures built of a large number of connected triangles, which evenly distribute forces to make the dome self-supporting. His unique approach to mathematics, “synergetics”, probably didn’t make enough true mathematical contributions to justify all the new terminology he created, but does look at many conventional ideas in different ways and through unusual philosophical lenses, which likely helped him to derive new and original ideas in architecture. Today we’ll discuss another of his key points, his central focus on triangles as a key foundation of mathematics.
While most people who are conventionally educated tend to primarily think of geometry in terms of right angles and square coordinate systems, Fuller considered triangles to be a more fundamental element. This follows directly from their essential ability to distribute forces without deforming. Think about it for a minute: if you apply pressure to the corner of a 4-sided figure, such as a square, its angles can be distorted to something different without changing the lengths of the sides: for example, you can squish it into a parallelogram. Triangles don’t have this flaw: assuming the sides are rigid and well-fastened together, you can’t distort it at all. This stems from the elementary geometric fact that the lengths of the three sides fully determine the triangle: for any 3 side lengths, there is only one possible set of angles to go with them and make a triangle. Thus, if you are building something in the real world, triangles and related 3-dimensional shapes naturally result in strong self-supporting structures.
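The claim that three side lengths fully determine a triangle can be seen concretely with the law of cosines, which recovers each angle uniquely from the sides. This is a small illustrative sketch of that standard formula, not anything from Fuller's own writings:

```python
import math

def triangle_angles(a: float, b: float, c: float) -> tuple:
    """Given three side lengths, return the triangle's angles in degrees.

    The law of cosines, cos(C) = (a^2 + b^2 - c^2) / (2ab), yields exactly
    one angle opposite each side -- which is why a triangle with rigid,
    well-fastened sides cannot be deformed the way a square can.
    """
    angle = lambda opp, s1, s2: math.degrees(
        math.acos((s1**2 + s2**2 - opp**2) / (2 * s1 * s2)))
    return angle(a, b, c), angle(b, a, c), angle(c, a, b)

A, B, C = triangle_angles(3, 4, 5)
print(round(A), round(B), round(C))  # 37 53 90 -- the classic 3-4-5 right triangle
```

By contrast, no analogous formula exists for a four-sided figure: the same four side lengths admit a whole continuum of angle combinations, from a square down to an arbitrarily flattened parallelogram.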
According to one of the many legends of Fuller’s life, he first discovered the strength of this type of construction when he was a child in kindergarten. His teacher had given the class an exercise where they were to build a small model house out of toothpicks and semi-dried peas. At the time, Fuller’s parents had not yet discovered his need for glasses, so he was nearly blind and could not see what his classmates were doing at all. As he recalled it, “All the other kids, the minute they were told to make structures, immediately tried to imitate houses. I couldn’t see, so I felt. And a triangle felt great!” Basing his decisions on how the strength of the building felt as he was constructing it, he ended up developing a triangle-based complex of tetrahedra and octahedra. The unusual look accompanied by the unexpected solidity and strength of his little house surprised his teachers and classmates, and many years later was patented by Fuller as the “Octet Truss”. Fuller wasn’t the first architect to discover the strength of triangles, of course, but he put a lot more thought into extending them to useful 3-D structures than many had in the past.
With the strong memory of such early discoveries, it’s probably not surprising that Fuller decided he wanted to build an entire mathematical system based on triangles. He took this philosophy into all areas of mathematics that he looked at, sometimes to a bit of an extreme. For example, a fundamental building block of algebra is looking at higher powers of numbers, starting with squaring. But Fuller considered the operation of “triangling” much more important. Of course this is a useful aspect of general algebra: you might recall that the triangular numbers represent the number of units it takes to form an equilateral triangle: start with 1 ball, add a row of 2 under it, add a row of 3 under that, etc. So the series of triangular numbers starts with 1, 3, 6, 10, and so on, with the nth term equal to (n x (n+1)) / 2. Fuller considered these numbers especially significant because the nth triangular number also happens to count the pairwise relationships among n+1 items. For example, if you have 4 people and want to connect phone lines so any 2 can always talk to each other, you need 6 phone lines, the 3rd triangular number. For 5 people you need 10, the 4th triangular number, etc. Fuller was pleased that this abstract concept of the number of mutual relationships had such a simple geometric counterpart. This observation helped to guide him in many of his further explorations related to constructing strong 3-dimensional patterns that could be used in construction.
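The correspondence between triangular numbers and pairwise connections is easy to verify by brute force. A quick illustrative sketch (my own, not from Fuller or the book):

```python
from itertools import combinations

def triangular(n: int) -> int:
    """The nth triangular number: 1, 3, 6, 10, ..."""
    return n * (n + 1) // 2

# The number of pairwise connections among k people (k choose 2)
# always equals the (k-1)th triangular number.
for k in range(2, 7):
    pairs = len(list(combinations(range(k), 2)))
    assert pairs == triangular(k - 1)
    print(f"{k} people need {pairs} phone lines")
```

Running this prints the phone-line counts 1, 3, 6, 10, 15 for groups of 2 through 6 people, matching the triangular-number series exactly.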
So, are you convinced yet that triangles are important? Of course these have been a fundamental element of mathematics for thousands of years, but it’s been relatively rare for people to view them and think about them exactly the way Fuller did. That’s probably why, despite exploring areas of mathematics that had been relatively well-covered in the past, he still was able to make original and useful observations that led to significant practical results. It’s a nice reminder that just because other people have already examined some area of mathematics, it doesn’t mean that there aren’t more fascinating discoveries waiting just under the surface to be uncovered.
And this has been your math mutation for today.
References:
https://en.wikipedia.org/wiki/Buckminster_Fuller
https://www.amazon.com/Fuller-Explanation-Buckminster-Back-Action-ebook/dp/B002YQ2X5S
February 26, 2023
283: Escape From Infinity
Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school. Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host. And now, on to the math.
Recently I read an interesting book, “A Fuller Explanation”, that attempts to explain the mathematical philosophy and ideas of Buckminster Fuller, written by a former student and colleague of his named Amy Edmondson. Fuller is probably best known as the architect who popularized geodesic domes, those sphere-like structures built of a large number of connected triangles, which evenly distribute forces to make the dome self-supporting. He had a unique approach to mathematics, defining a system of math, physics, and philosophy called “synergetics”, which coined a lot of new terms for what was essentially the geometry of 3-dimensional solids. It’s a very difficult read, which is what led to the need for Edmondson’s book.
One of the basic insights guiding Fuller was the distrust of the casual use of infinities and infinitesimals throughout standard mathematics. He would not accept the definition of infinitely small points or infinitely thin lines, as these cannot exist in the physical world. He was also bothered by the infinite digits of pi, which are needed to understand a perfect sphere. Pi isn’t equal to 3.14, or 3.142, or 3.14159, but keeps needing more digits forever. If a soap bubble is spherical, how does nature know when to stop expanding the digits? Of course, in real life, we know a soap bubble isn’t truly spherical, as it’s made of a finite number of connected atoms. Or as Fuller would term it, it’s a “mesh of energy events interrelated by a network of tiny vectors.” But can you really do mathematics without accepting the idea of infinity?
With a bit of googling, I was surprised to find that Fuller was not alone in his distrust of infinity: there’s a school of mathematics, known as “finitism”, that does not accept any reasoning that involves the existence of infinite quantities. At first, you might think this rules out many types of mathematics you learned in school. Don’t we know that the infinite series 1/2 + 1/4 + 1/8 … sums up to 1? And don’t we measure the areas of curves in calculus by adding up infinite numbers of infinitesimals?
Actually, we don’t depend on infinity for these basic concepts as much as you might think. If you look at the precise definitions used in many areas of modern mathematics, we are talking about limits— the convenient description of these things as infinities is really a shortcut. For example, what are we really saying when we claim the infinite series 1/2+1/4+1/8+… adds up to 1? What we are saying is that if you want to get a value close to 1 within any given margin, I can tell you a number of terms to add that will get you there. If you want to get within 25% of 1, add the first two terms, 1/2+1/4. If you want to get within 15%, you need to add the three terms 1/2+1/4+1/8. And so on. (This is essentially the famous “epsilon-delta” proof method you may remember from high school.) Thus we are never really adding infinite sums; the ‘1’ is just a marker indicating a value we can get arbitrarily close to by adding enough terms.
You might object that this doesn’t cover certain other critical uses of infinity: for example, how would we handle Zeno’s paradoxes? You may recall that one of Zeno’s classic paradoxes says that to cross a street, we must first go 1/2 way across, then 1/4 of the distance, and so on, adding to an infinite number of tasks to do. Since we can’t do an infinite number of tasks in real life, motion is impossible. The traditional way to resolve it is by saying that this infinite series 1/2+1/4+… adds up to 1. But if a finitist says we can’t do an infinite number of things, are we stuck? Actually no— since the finitist also denies infinitesimal quantities, there is some lower limit to how much we can subdivide the distance. After a certain amount of dividing by 2, we reach some minimum allowable distance. This is not as outlandish as it seems, since physics does give us the Planck Length, about 10^-35 meters, which some interpret as the pixel size of the universe. If we have to stop dividing by 2 at some point, Zeno’s issue goes away, as we are now adding a finite (but very large) number of tiny steps which take a correspondingly tiny time each to complete. Calculating distances using the sum of an infinite series again becomes just a limit-based approximation, not a true use of infinity.
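As a back-of-the-envelope check (the numbers here are my own: a 1-meter street crossing and a Planck length of roughly 1.6 x 10^-35 meters), we can count how many halvings Zeno actually gets before hitting this floor:

```python
# Starting with a 1-meter crossing, count how many times the remaining
# step can be halved before it would shrink below the Planck length.
PLANCK_LENGTH = 1.6e-35  # meters, approximate

halvings = 0
step = 1.0  # meters
while step / 2 >= PLANCK_LENGTH:
    step /= 2
    halvings += 1

print(halvings)  # 115 halvings before steps dip below the Planck scale
```

So Zeno’s “infinite” sequence of tasks collapses to a little over a hundred steps— large, but comfortably finite.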
Thus, you can perform most practical real-world mathematics without a true dependence on infinity. One place where finitists do diverge from the rest of mathematics is in reasoning about infinity itself, or the various types of infinities. You may recall, for example, Cantor’s ‘diagonal’ argument which proves that the infinity of real numbers is greater than the infinity of integers, leading to the idea of a hierarchy of infinities. A finitist would consider this argument pointless, having no applicability to real life, even if it does logically follow from Cantor’s premises.
In Fuller’s case, this refusal to accept infinities had some positive results. His focus on viewing spheres as a mesh of finite elements and balanced force vectors probably set him on the path of understanding geodesic domes, which became his major architectural accomplishment. As Edmondson describes it, Fuller’s alternate ways of looking at standard areas of mathematics enabled him and his followers to circumvent previous rigid assumptions and open “rusty mental gates that block discovery”. Fuller’s views were also full of other odd mathematical quirks, such as the idea that we should be “triangling” instead of “squaring” numbers when wanting to look at higher dimensions; maybe we will discuss those in a future episode.
And this has been your math mutation for today.
References:
https://en.wikipedia.org/wiki/Finitism
https://en.wikipedia.org/wiki/Buckminster_Fuller
https://www.amazon.com/Fuller-Explanation-Buckminster-Back-Action-ebook/dp/B002YQ2X5S
December 20, 2022
282: The Man With The Map
Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school. Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host. And now, on to the math.
I was surprised to read about the recent passing of Maurice Karnaugh, a pioneering mathematician and engineer from the early days of computing. Karnaugh originally earned his PhD from Yale, and went on to a distinguished career at Bell Labs and IBM, also becoming an IEEE Fellow in 1976. My surprise came from the fact that he was still alive so recently: he was born in 1924, and his key contributions were in the 1950s and 1960s, so I had assumed he died years ago. In any case, to honor his memory, I thought it might be fun to look at one of his key contributions: the Karnaugh Map, known to generations of engineering students as the K-map for short.
So, what is a K-map? Basically, it’s a way of depicting the definition of a Boolean function, that is, a function that takes a bunch of inputs and generates an output, with all inputs and the output being Boolean values, either 0 or 1. As you probably know, such functions are fundamental to the design of computer chips and related devices. When trying to design an electronic circuit schematic that implements such a function, you usually want to try to find a minimum set of basic logic gates, primarily AND, OR, and NOT gates, that defines it.
For example, suppose you have a function that takes 4 inputs, A, B, C, and D, and outputs a 1 only if both A and B are true, or both C and D are true. You can basically implement this with 3 gates: (A AND B), (C AND D), and an OR gate to look at those two results, outputting a 1 if either succeeded. But often when defining such a function, you’re initially given a truth table, a table that lists every possible combination of inputs and the resulting output. With 4 variables, the truth table would have 2^4, or 16, rows, 7 of which show an output of 1. A naive translation of such a truth table directly to a circuit would result in one or more gates for every row of the table, so by default you would generate a much larger circuit than necessary. The cool thing about a K-map is that even though it’s mathematically trivial— it actually just rewrites the 2^n lines of the truth table in a 2^(n/2) x 2^(n/2) square format— it makes a major difference in enabling humans to draw efficient schematics.
So how did Karnaugh help here? The key insight of the K-map is to define a different shape for the truth table, one that conveys the same information, but in a way that the human eye can easily find a near-minimal set of gates that would implement the desired circuit. First, we make the table two-dimensional, by grouping half the variables for the rows, and half for the columns. So there would be one row for AB = 00, one row for AB = 01, etc, and a column for CD=00, another for CD=01, etc. This doesn’t actually change the amount of information: for each row in the original truth table, there is now a (row, column) pair, leading to a corresponding entry in the two-dimensional K-map. Instead of 16 rows, we now have 4 rows and 4 columns, specifying the outputs for the same 16 input combinations.
The second clever trick is to order each set of rows and columns according to a Gray code— that is, an ordering such that each pair of inputs only differs from the previous pair in one bit. So rather than the conventional numerical ordering of 00, 01, 10, 11, corresponding to our ordinary base-10 of 0, 1, 2, 3 in order, we sort the rows as 00, 01, 11, 10. These are out of order, but the fact that only one bit is changing at a time makes the combinations more convenient to visually analyze.
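One standard way to generate such an ordering is the reflect-and-prefix construction; here’s a short sketch (an illustration of the general method, not Karnaugh’s own notation):

```python
# Generate an n-bit Gray code by repeatedly prefixing '0' to the
# existing list and '1' to its mirror image, so consecutive codes
# always differ in exactly one bit.
def gray_code(n):
    codes = [""]
    for _ in range(n):
        codes = ["0" + c for c in codes] + ["1" + c for c in reversed(codes)]
    return codes

print(gray_code(2))  # ['00', '01', '11', '10']
```

The two-bit case produces exactly the 00, 01, 11, 10 ordering used for the rows and columns of a 4-variable K-map.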
Once you have created this two-dimensional truth table with the Gray code ordering, it has the very nice property that if you can spot rectangular patterns, they correspond to boolean product terms, or implicants, that enable an efficient representation of the function in terms of logic gates. In our example, we would see that the row for AB=11 contains a 4x1 rectangle of 1s, and the column of CD=11 contains a 1x4 rectangle of 1s, leading us directly to the (A AND B) OR (C AND D) solution. Of course, the details are a bit messy to convey in an audio podcast, but you can see more involved illustrations in online sources like the Wikipedia page in the show notes. But the most important point is that in this two-dimensional truth table, you can now generate a minimal-gate representation by spotting rectangles of 1s, greatly enhancing the efficiency of your circuit designs.
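To make this concrete, here’s a small sketch (my own layout, using the example function from earlier) that builds and prints the 4x4 K-map for (A AND B) OR (C AND D) with rows and columns in Gray-code order:

```python
# Build the 4x4 K-map for f(A,B,C,D) = (A AND B) OR (C AND D), with
# rows (AB) and columns (CD) in Gray-code order so that adjacent
# cells differ in exactly one input bit.
GRAY = ["00", "01", "11", "10"]  # two-bit Gray code ordering

def f(a, b, c, d):
    return int((a and b) or (c and d))

print("AB\\CD  " + "   ".join(GRAY))
for ab in GRAY:
    a, b = int(ab[0]), int(ab[1])
    row = [f(a, b, int(cd[0]), int(cd[1])) for cd in GRAY]
    print(f"  {ab}    " + "    ".join(str(v) for v in row))
```

In the printed grid, the AB=11 row is a solid stripe of 1s and the CD=11 column is another, with 7 ones in total— exactly the two rectangles that read off the (A AND B) OR (C AND D) implementation.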
Over the years, as with many things in computer science, K-maps have faded in significance. This is because the power of our electronic design software has grown exponentially: these days, virtually nobody hand-draws a K-map to minimize a circuit. Circuit synthesis software directly looks at high-level definitions like truth tables, and does a much better job at coming up with minimal gate implementations than any person could do by hand. Some of the techniques used by this software relate to K-maps, but of course many more complex algorithms, most of which could not be effectively executed without a computer, have been developed in the intervening decades. Despite this, Karnaugh’s contribution was a critical enabler in the early days of computer chip design, and the K-map is still remembered by generations of mathematics and computer science students.
And this has been your math mutation for today.
References:
https://en.wikipedia.org/wiki/Karnaugh_map
https://en.wikipedia.org/wiki/Maurice_Karnaugh
https://www.quora.com/What-is-the-practical-use-of-a-Karnaugh-map
https://www.eetimes.com/karnaugh-maps-101/
October 30, 2022
281: Pascal Vs Mathematics
Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school. Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host. And now, on to the math.
If you have enough interest in math to listen to this podcast, I’m pretty sure you’ll recognize the name of Blaise Pascal, the 17th-century French mathematician and physicist. Among other achievements, he created Pascal’s Triangle, helped found probability theory, invented and manufactured the first major mechanical calculator, and made essential contributions to the development of fluid mechanics. His name was eventually immortalized in the form of a computer language, a unit of pressure, a university in France, and an otter in the Animal Crossing videogame, among other things. But did you know that in the final decade of his life, he essentially renounced the study of mathematics to concentrate on philosophy and theology?
According to notes found after his death in 1662, Pascal had some kind of sudden religious experience in 1654, when he went into a trance for two hours, during which time he claims that God gave him some new insights on life. At that point he dropped most of his friends and sold most of his possessions to give money to the poor, leaving himself barely able to afford food. He also decided that comfort and happiness were immoral distractions, so started wearing an iron belt with interior spikes. And, most disturbingly, he decided that math and science were no longer worthy of study, so he would devote all his time to religious philosophy. He was still a very original and productive thinker, however, as in this period he wrote his great philosophical work known as the Pensees.
There were a couple of reasons why he may have decided to give up on math and physics at this point. Part of it was certainly just a change in emphasis: he was concentrating on something else now, which he considered more important. He also made comments about worldly studies being used to feed human egos, which means nothing in the eyes of God. At one point he stated that he could barely remember what geometry was.
He never completely suppressed his earlier love of mathematics though. Ironically, at several points in the Pensees he uses clearly mathematical ideas. Most famously, you may have heard of “Pascal’s Wager”, where he discusses the expected returns of believing in God vs not believing, based on his ideas of probability theory. You have two choices, to believe or not believe. If you choose to believe, you may suffer a finite net loss from time spent on religion, if God doesn’t exist— but if he does, you have an infinite payoff. Choosing not to believe offers at best the savings from that lifetime loss. Thus, the rational choice to maximize your expected gain is to believe in and worship God.
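For illustration, the wager’s arithmetic can be sketched numerically (the probability and payoff values below are placeholders of my own choosing, not Pascal’s):

```python
# Pascal's expected-value comparison: a finite cost of belief against
# an infinite payoff. Any nonzero probability of God existing makes
# the expected value of belief infinite.
p_god = 0.5                    # placeholder; the argument works for any p > 0
cost_of_belief = -1            # finite lifetime cost of religious observance
infinite_payoff = float("inf")

ev_believe = p_god * infinite_payoff + (1 - p_god) * cost_of_belief
ev_disbelieve = p_god * 0 + (1 - p_god) * 1  # at best, save the finite cost

print(ev_believe, ev_disbelieve)  # inf vs. a finite number
```

The finite terms simply wash out against the infinity, which is why Pascal’s conclusion doesn’t depend on the particular probability assigned.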
As many have pointed out since then, there is at least one huge hole in Pascal’s argument: what if you choose to believe in the wrong God, or worship him in the wrong way? Many world religions consider heresy significantly worse than nonbelief. He has an implicit assumption of Christianity, in the form he knows, being the only option other than agnosticism or atheism. I think Homer Simpson once refuted Pascal’s Wager effectively, when trying to get out of going to church with his wife: “But Marge, what if we chose the wrong religion? Each week we just make God madder and madder.”
Another surprising use of math in the Pensees is Pascal’s comments on why the study of math and science may be pointless in general. He compares the finite knowledge that man may gain by these studies against the infinite knowledge of God: “… what matters it that man should have a little more knowledge of the universe? If he has it, he but gets a little higher. Is he not always infinitely removed from the end…? In comparison with these Infinites all finites are equal, and I see no reason for fixing our imagination on one more than on another.” While he doesn’t write any equations here, the ideas clearly have a basis in his previous studies related to finite and infinite values. We could even consider this a self-contradiction: wouldn’t the fact that his math just gave him some theological insight mean that it was, in fact, worthy of study to get closer to God?
Pascal did also still engage a few times during this period in direct mathematical studies. Most notably, in 1658, he started speculating on some properties of the cycloid, the curve traced by a point on a moving circle, to distract himself while suffering from a toothache. When his ache got better, he took this as a sign from God to continue his work on this topic. That excuse seems a bit thin to me: clearly he never lost his inbuilt love for mathematics, even when he felt his theological speculations were pulling him in another direction.
And this has been your math mutation for today.
References:
https://en.wikipedia.org/wiki/Blaise_Pascal
https://www.amazon.com/Drunkards-Walk-Randomness-Rules-Lives/dp/0307275175
https://www.gutenberg.org/ebooks/18269
August 26, 2022
280: Rubik's Resurgence
Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school. Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host. And now, on to the math.
Do you remember the Rubik’s Cube, that 3x3x3 cube puzzle of multicolored squares, with each side able to rotate independently, that was a fad in the 1980s? The goal was to take a cube that has been scrambled by someone else with a few rotations, and get it back to a configuration where all squares on each side match in color. Hungarian architecture professor Erno Rubik originally developed it in 1975, initially intending it to model how a structure can be designed with parts that move independently yet still hold together. The legend is that after he demonstrated the independent motion of a few sides and had trouble rearranging it back to the original configuration, he realized he had an interesting puzzle. In the early 1980s, it started winning various awards for best toy or puzzle, and quickly became the best-selling toy of all time. (A title which it apparently still holds.) It was insanely difficult for the average person to solve, though typically with some trial and error you could get one or two sides done, the key to holding people’s interest. To get a sense of how popular it was back then, there were cube-solving guidebooks that sold millions of copies, and even a Saturday morning cartoon series about a cube-shaped superhero. But by 1983 or so sales were dropping off, and the fad was considered over.
Like everyone else who was alive in the early 80s, I spent some time messing around with cubes, but found it too frustrating after a while, and eventually solved it with the aid of a guide book. I remember being impressed by a classmate who swore he hadn’t read any guidebooks, but could take a scrambled cube from me, go work on it in a corner of the room, and come back with it fully solved. His secret was to remove the colored stickers from the squares and put them back in the right configuration, without rotating the sides at all. But the non-cheating solutions in the guidebooks typically revolve around identifying sequences of moves that can move around known sets of squares while keeping others in their current configuration, then getting the desired cubes in place layer by layer. These sequences are tricky in that they appear to be completely scrambling the cube before restoring various parts, which is a key reason why average cubers would fail to discover them— you need to mess up your cube on the way to completing the solution. It’s actually been mathematically proven that any reachable configuration can be solved in 20 moves or fewer.
One aspect of the cube that is always fun to mock is its marketing campaign. Typically the cube was sold and advertised with the phrase, “Over 3 Billion Combinations!” But if you think about it for a minute, there are a lot more. A cube has 8 corner pieces, so you could combine those in 8 factorial (8!) ways, which is 8 * 7 * 6 … down to 1. And since each of these combinations can have each corner piece in 3 different rotations, you need to multiply by 3 to the 8th power. Similarly, the 12 side pieces can be arranged in 12 factorial (12!) ways, then you need to multiply by 2 to the 12th power. So to find the total possibilities, we need to multiply 8! * 3^8 * 12! * 2^12. It turns out that only 1/12 of these positions are actually reachable from a starting solved state, so we need to divide the result by 12. But we still get a total a bit larger than 3 billion: 4.3 * 10^19. So the marketing campaign was underestimating the possibilities by a ridiculous amount— in fact, if you square the 3 billion that they gave, you still don’t quite reach the true number of cube configurations. Perhaps they were afraid that the terms for larger numbers, such as the quintillions needed for the true number, would confuse the average customer.
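We can verify this arithmetic directly (a quick sketch of the calculation just described, using Python’s arbitrary-precision integers):

```python
from math import factorial

# 8 corner pieces: 8! arrangements, 3 orientations each.
# 12 edge pieces: 12! arrangements, 2 orientations each.
# Only 1 in 12 of these positions is reachable from a solved cube.
raw = factorial(8) * 3**8 * factorial(12) * 2**12
reachable = raw // 12

print(reachable)                     # 43252003274489856000, about 4.3 * 10^19
print(reachable > 3_000_000_000**2)  # True: even the square of "3 billion" falls short
```

The exact count, 43,252,003,274,489,856,000, really is more than the square of the 3 billion from the marketing slogan.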
I was surprised to read recently that Rubik’s Cube has actually gained in popularity again in the modern era, fueled by daring YouTubers who speed-solve cubes, solve them with their feet, and perform similar feats. Lots of enthusiasts take their cubing very seriously, and there are Rubik’s Cube speed-solving championship events held regularly. If you’re a professional-class cuber, you can buy specially made Rubik’s lubricants to enable you to rotate your sides faster. The latest championship, this year in Toronto, included standard cube solving plus events on varying size cubes (up to 7x7x7), blindfolded cube solving, and one-handed cube solving. (Apparently they eliminated the foot-only solving, though, concerned that it was unsanitary.) Champion Matty Inaba fully solved a 3x3x3 cube in 4.27 seconds, which sounds like about the time it usually takes me to rotate a side or two. Author A.J. Jacobs, in “The Puzzler”, also points out that if you’re too intimidated to compete yourself, there are Fantasy Cubing leagues, similar to Fantasy Football, where you can bet on your favorite combination of winners.
So, what does the future hold for the sport of Rubik’s Cubing? Well, even though the big leagues are only competing up to the 7x7x7 level this year, Jacobs tracked down a French inventor who has put together a 33x33x33 one. As you would guess, the larger cubes are pretty challenging to build— this one involved over 6000 moving parts and ended up the size of a medicine ball. Experienced cubers do say, however, that the basic algorithms for solving are fundamentally the same for all cube sizes. Due to the home manufacturing capabilities enabled by modern 3-D printing, one of Jacobs’ interviewees points out that we are in a “golden age of weird twisty puzzles”. Hobbyists have invented many Rubik’s-like variants that are not perfect cubes, and have numerous asymmetric parts, to create some extra challenge. Personally I’m not sure I would ever have the patience to deal with anything beyond a basic 3x3x3 cube, though maybe I’ll look into joining a fantasy league sometime.
And this has been your math mutation for today.
References:
https://en.wikipedia.org/wiki/Rubik%27s_Cube
https://www.worldcubeassociation.org/competitions/NAC2022
https://ajjacobs.com/books/the-puzzler/
https://web.mit.edu/sp.268/www/rubik.pdf
July 2, 2022
279: Improbable Envelopes
Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school. Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host. And now, on to the math.
Today we’re going to talk about a well-known paradox that a co-worker recently reminded me about, the Two Envelopes Paradox. It’s similar to some others we have discussed in past episodes, such as the Monty Hall paradox, in that a slightly incorrect use of the laws of probability gives an apparent result that isn’t quite correct.
Here’s how the basic paradox goes. You are shown two envelopes, and told that one contains twice as much money as the other one, with the choice of envelopes for each amount having been pre-determined by a secret coin flip. No information is given as to the exact amount of money at stake. You need to choose which envelope you want. After you initially point at one, you are told the amount of money in it, and asked, “Would you like to switch to the other one?” Since you have been presented no new information about which envelope has more money, it should be obvious that switching makes no difference at this point, as either way you have a 50% chance of having guessed the right one.
But let’s calculate the expected value of switching envelopes. Say the envelope you chose contains D dollars. If you stick with your current envelope, the expected amount you will get is simply D. The other one contains either 2D or D/2 dollars, with 1/2 probability for each. The expected value if you switch is then 1/2*2D + 1/2*D/2, which adds up to (5/4)D. Thus your expected winnings if you switch are greater than the D you would gain from your first choice, and you should always switch! But this doesn’t make much sense, if you had no additional information. Can you spot the flaw in this reasoning?
The key is to recognize that you’re combining two different dollar amounts in your expected value calculation: the D that exists in the case where you initially chose the smaller envelope is different from the D if you chose the larger one. The easiest way to see this is if you define another variable, T, the total money in the combination of two envelopes. In this case, the larger one contains (2/3)T, and the smaller has (1/3)T. Now your expected winnings from keeping your envelope are (1/2)*(2/3)T + (1/2)*(1/3)T, or simply T/2, exactly the same as the expected value from switching. Alternatively, you could have come to the same conclusion by modifying our original reasoning using Bayes’ Theorem, replacing our reuse of D with correct calculations for the conditional values in each envelope.
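A quick Monte Carlo check (my own sketch, with arbitrary placeholder dollar amounts) confirms that blindly switching gives no edge:

```python
import random

# Simulate many rounds of the two-envelope game, tracking the average
# winnings from always keeping versus always switching.
random.seed(1)
trials = 100_000
keep_total = switch_total = 0
for _ in range(trials):
    small = random.choice([10, 20, 50, 100])  # placeholder amounts
    envelopes = [small, 2 * small]
    random.shuffle(envelopes)                 # the secret coin flip
    chosen, other = envelopes
    keep_total += chosen
    switch_total += other

print(keep_total / trials, switch_total / trials)  # nearly identical averages
```

Each trial’s expected value is the same whichever envelope you end up with, so the two averages agree to within sampling noise— there is no 5/4 advantage to be found.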
But weirdly enough, brilliant.org seems to point out an odd strategy that will let you choose whether to switch with a greater than 50% chance of getting the higher envelope. Here’s how it works. After you choose your first envelope, choose a random value N using a normal probability distribution. This is the common “bell curve”, with the important property that any number on the number line has a nonzero probability of being chosen, as the ‘tails’ of the bell asymptotically approach but never reach 0. So if the center of the normal distribution is at 100, you have the highest probability of choosing a number near 100, but still a small chance of choosing 1000, and a really tiny chance of choosing 10 billion. Therefore, if you’ve chosen a number N using this distribution, there is some probability, P, that N is between the dollar values in the two envelopes. Now assume the envelope you didn’t choose contains that number N, and choose to switch on that basis: if N is greater than the amount in the envelope you chose, you switch, and otherwise keep your original envelope.
Why does using this random number help? Well, if your random number N was smaller than both envelopes, then you will always keep your first choice, and there is no change to the overall 50-50 chance of winning. If it was larger than both, you’ll always switch, and again no change. But what if you got lucky, and N was between the two values, which we know can happen with nonzero probability P? Then you will make the right choice: you will switch if and only if your initial envelope was the smaller one! Thus, with probability P, the chance of N being in this lucky interval, you will be guaranteed a win, while the rest of the time you have the original odds. So the overall chance of winning is (1/2)*(1-P) + 1*P, which is slightly greater than 1/2.
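Here’s a simulation sketch of that strategy (the dollar amounts and the normal distribution’s parameters are my own arbitrary choices, not from brilliant.org):

```python
import random

# Simulate the threshold strategy: draw N from a bell curve and switch
# only if N exceeds the amount observed in the chosen envelope.
random.seed(2)
trials = 100_000
wins = 0
for _ in range(trials):
    small, large = 100, 200          # placeholder envelope amounts
    envelopes = [small, large]
    random.shuffle(envelopes)        # the secret coin flip
    chosen, other = envelopes
    n = random.gauss(150, 100)       # random threshold from a normal distribution
    final = other if n > chosen else chosen
    wins += final == large

print(wins / trials)  # noticeably above 0.5
```

With these particular numbers, the threshold lands between the two amounts roughly 38% of the time, pushing the observed win rate to about 0.69 rather than 0.5; with less luckily centered parameters the edge shrinks, but stays above one half.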
Something seems fishy about this reasoning, but I can’t spot an obvious error. I remember also hearing about this solution from a professor back in grad school, and periodically tried searching the web for a refutation, but so far haven’t found one. Of course, with no information about the actual value of P, this edge can be unpredictably small, so is probably of no real value in practical cases. There also seems to be a philosophical challenge here: How meaningful is an unpredictable bonus to your odds of an unknowable amount? I’ll be interested to hear if any of you out there have some more insight into the problem, or pointers to further discussions of this bizarre probability trick.
And this has been your math mutation for today.
References:
https://brilliant.org/wiki/two-envelope-paradox/


