Ep. 229 Bob Murphy Admits Steve Patterson Was Right About the Problems With Infinity
Bob has Steve Patterson back on the show, to concede that Steve’s skepticism of higher mathematics was right all along. Specifically, Bob explains how his recent discovery of a theorem from Riemann showed that something is indeed rotten in the way mathematicians typically handle infinite sets.
Mentioned in the Episode and Other Links of Interest:
- The YouTube version of this interview.
- Steve Patterson’s website and YouTube channel.
- Steve’s interview with NJ Wildberger.
- A great summary of Riemann’s Rearrangement Theorem.
- The BMS episodes on Gödel’s Incompleteness Theorems and Arrow’s Impossibility Theorem.
- Steve’s previous appearance on the Bob Murphy Show.
- Help support the Bob Murphy Show.
The audio production for this episode was provided by Podsworth Media.
This episode made me think of Gabriel’s Horn, a shape with finite volume and infinite surface area. If you filled the shape with liquid paint, you would not have enough paint to coat the inside of the shape.
I am a high school math teacher and am very interested in this. I think I might be able to save math from the weirdness Bob and Steve allege. But I must investigate this further. Thanks for doing this intriguing episode.
Check out Karma Peny’s channel on Youtube, as well as of course Wildberger and Patterson.
Ok, now that I’ve listened to the episode, as a high school calculus teacher, I still don’t feel the force of any of these objections. In Real Analysis in college, I was excited one day to ask the professor about the function (-2)^x. I had tried to envision or understand the function beforehand, and it did not really make sense to me. To my surprise, the professor simply said we don’t use negative bases with exponential functions. In other words, they are excluded because of the problematic nature of what they would create.
Similarly, Bob, I think we could just discard conditionally convergent series as inconsistent concepts. That does nothing to show that absolutely convergent or divergent series create the same problems.
In my high school calculus class, I’ve actually shown pictures of Steve and explained some of his objections. To my mind, most of them rest on a too-literal understanding of the equal sign and other things in certain contexts. For instance, when we say 1 + 1 = 2 and then we also say 1 + 0.5 + 0.25 + … = 2, the equal sign is not used univocally. In other words, its meaning in the statement 1 + 1 = 2 is different than its meaning in 1 + 0.5 + 0.25 + … = 2. And the second statement by no means implies that we have a “completed infinity” or that anyone can “actually carry out” an infinite number of calculations. It simply means that 2 is the limit value, i.e. the value to which we can get arbitrarily close by adding more and more terms.
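Here’s a quick sketch of that limit reading in Python (my own illustration; the 20-term cutoff is arbitrary): the partial sums get as close to 2 as you like, and nobody ever “completes” the sum.

    s = 0.0
    for n in range(20):
        s += 0.5 ** n      # add the next term: 1, 0.5, 0.25, ...
        print(n + 1, s)    # partial sums 1.0, 1.5, 1.75, ... approach 2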
So, I don’t really see a problem with that. But I’d be happy to discuss this more with either of you on a future podcast (or just through comments/email). I’m open to being wrong. I’m only an amateur mathematician as a high school teacher, and I agree that the mathematicization of economics and other fields causes grave problems. I just don’t yet feel the force of any of Steve’s criticisms of mathematics yet. Though I do acknowledge the “don’t ask questions!” attitude of mathematicians with which he has spoken is a big problem.
Peace,
John
Thanks John. I already lined up a different listener (who has a PhD in math) who is going to defend against our heresies…
Excited to listen!
Thank you. I majored in Math, and then went on to get a masters and PhD in Operations Research (applied math). I never was bothered by the sorts of results that you discussed. In fact, I found results like these to be quite beautiful. As an example, all of the strange results around the Mandelbrot set (which involves a healthy dose of infinities and infinitesimals) I found to be beautiful as well. I definitely don’t think that all infinities, or concepts related to infinities, should be removed from mathematics. As an aside, I love your episodes that deal with these sorts of topics. Please keep creating them.
Bob,
When will the rebuttal to Steve Patterson’s ideas take place on your show??
Thanks, Charlie
Appreciate the thoughtful response, John. Just wanted to respond to the idea that “the equal sign is not used univocally”. Two main points:
1) I would agree that one way to *rescue* the concepts of convergence/limits is to say the equals sign means different things in different contexts (in fact, I explain this in a video entitled “Why Calculus Does Not Solve Zeno’s Paradoxes”: https://www.youtube.com/watch?v=iU59S5JDpSU).
Unfortunately, that brings me to point #2:
2) The idea of treating equality unequally is anathema to the orthodoxy! For example, if you read about the “0.999… = 1” equality, you’ll see that the academics treat it as regular, absolute equality–that the value on the left hand side of the equation is the exact same value on the right.
In fact, this is such entrenched dogma, that there’s considerable academic research studying how long it takes undergrads to shake their pesky intuitions that the two values are different!
If you follow your (sound) intuitions here, then consult Official Theory, you will find yourself a heretic too.
Thanks, Steve, for the reply. Yes, I think you may be right that I’m “rescuing” the concepts. Here’s another way to “rescue” the absolute equality which treats the equal sign the same in both cases:
1) Let “=” relate the resulting quantities of the expressions that flank it. So, 1 + 1 = 2, since the resulting quantity of 1 + 1 has the value of 2. And 2 has the value of 2. Similarly, 1 + 0.5 + 0.25 + … = 2, since the LHS has a limit value of 2. In other words, the expression “1 + 0.5 + 0.25 + …” is taken to mean the limit of the infinite series (i.e. the value we get arbitrarily close to as we add more and more terms). So, if we allow infinite series (or other expressions with ellipses like 0.999…) to be shorthand for limit values, and treat the “resulting quantity” as the limit value, we obtain true statements of the form 1 + 0.5 + 0.25 + … = 2 while not going so far as to affirm “completed infinities” or any contradiction.
2) A more obvious case where the equal sign is not used univocally is when we write things like “the limit as x -> infinity of x^2 = infinity” — The “= infinity” phrase, sometimes used in calculus books, is shorthand for the notion that there is no limit value since the expression grows without bound (it keeps getting arbitrarily large). In this case, there are not two clear “resulting quantities” that flank the equal sign. It’s an extended usage of the same symbol (analogous yes, though not univocal). Nonetheless, this is different from my initial claim of non-univocal usage which I need to revise.
Also, FYI, this is the time of year when AP Calculus teachers across the nation are reviewing convergent and divergent series, so it’s a good time to get conversations going about those ideas! I look forward to you and Bob talking to the PhD mathematician.
Well said. You did a nice job differentiating the two meanings of equivalence…
Very interesting!
I wonder, Bob, how you think about “a priori” thinking in economics then? Mises is pretty big on it.
(1) I’ve written tons of stuff defending Misesian apriorism.
(2) I don’t have a problem with an a priori approach, so long as you say “Wait a minute” when the result is absurd. I’m not arguing we should be empirical in math, though Steve might.
Yeah, it sounded like he was pretty dismissive of a priori reasoning in general, but maybe he just meant in the context of math?
Bob mentioned imaginary numbers here – also ‘fundamentally wrong’. Via Maxwell’s equations, they enable us to manipulate electricity, transmit radio, build computers, and have modern society.
But in some ways these seem now to be fruits of a poison tree. One wonders…
Are you disagreeing with something I said? Didn’t I say that complex numbers were used in equations for electromagnetism?
Interestingly, “but it works! magnets!” was the very first reply I got when I mentioned the content of this episode to someone.
The guy then explained to me why “of course the sum of the series changes because it’s a different series when you re-order it” but I honestly didn’t understand why it would be a different series. It sounded to me something like the commenter above, where the “=” sign just seems to mean something different when it comes to infinite series (or series in general?)
It’s easy enough to find equations which work that way. Probably the simplest one I can think of would be:

z = y / x

At the point (x = 0, y = 0), what happens to the value of z?
The answer depends on how you approach that point. Thus if you start at (x = 1, y = 0) then you have z = 0 … suppose you halve the value of x with each step, thus approaching the desired limit point, and at every step z = 0. We can conclude that the final value of z must also be 0.
However, try it this way … start at (x = 1, y = 1) and you have z = 1. Now halve BOTH the x and the y with each step, you find that z = 1 at every step along the way. As you approach the desired limit point you conclude that z must be 1.
Using the same technique you can get any value you want for z and that’s not weird, that’s the very well known danger of dividing by zero … you might end up crossing over a discontinuity. Note that the original equation never changed up above, I only change the way I approach the limit point.
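To make that concrete, here’s a little Python sketch of the two approaches (my own, using z = y / x from above):

    # Approach (0, 0) along y = 0: z stays 0 at every step.
    x, y = 1.0, 0.0
    for _ in range(10):
        print(y / x)       # 0.0 every time
        x /= 2

    # Approach along the diagonal y = x instead: z stays 1 the whole way in.
    x, y = 1.0, 1.0
    for _ in range(10):
        print(y / x)       # 1.0 every time
        x /= 2
        y /= 2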
For what it’s worth, that’s part of Newtonian physics … the basic belief that the real physical world does NOT contain any discontinuity points, and therefore when you calculate a derivative of a real-world quantity (e.g. distance to velocity to acceleration, or energy to power) you never bump into the problem described above. This is a belief; it’s not proven, nor would there be any way to conclusively prove the absence of a thing in a rather large universe.
Of course Newtonian physics has already been demonstrated to be wrong, in the sense that Einstein’s relativity is more accurate under certain situations, and once you allow for bending space/time you do (at least in principle) also weaken the belief that no discontinuity can ever exist. For example, very tiny black holes might come along, and the normal presumptions of how a physical derivative operates might not work if the space/time has a kink right at the point you are measuring. Thing is … no one has ever found a tiny black hole … and that’s probably a good thing!
I would agree that if something like imaginary numbers is that useful, even if it contradicts our intuitions, there must be something true about it. Same with quantum mechanics. The math seems to work. Maybe we just can’t come up with intuitive explanations… and that may be something lacking in us… even though we did manage to find the math (which is interesting in itself).
I had a good laugh at the part where the guest assumed a geocentric model is “completely wrong” and a “messier” sun-spiraling model is correct.
Why do people strawman that earth is the centre of “the solar system”, anyway? It’s an *Earth* System, within which the sun circuits. (What are we? Sun-worshipping pagans?) Calling it a solar system is loaded.
The whole podcast was skepticism about mathematics and its unreality. Has anyone looked into how we ‘know’ the earth is a moving sphere and that space itself warps? Do we not just assume these? The revered Einstein himself said that he “[came] to believe that the motion of the Earth cannot be detected by any optical experiment, though the Earth is revolving around the Sun.” Really? Isn’t that more like he Believes the Earth is revolving around the Sun, but Knows that this cannot be detected?
This whole science business does seem more and more like inversion of the truth (“Satanism”, for lack of a secular term that properly conveys the extent and uniformity of the practice). I have personally had to eat crow from my zealous attachment to science, backed by education, “experts”, and a degenerate understanding of religions and scriptures.
Today, some finally know that germs do not cause disease, but diseases breed germs (just as maggots do not kill rodents, nor vultures deer), yet the correlation, however spurious, has been used as evidence for a failed theory and justification for all manner of drugs (“for by thy *pharmakeia* were all nations deceived”, Rev 18:23).
What of heliocentrism, then? Jim Carrey, who joked about the “Luminutty” and “that Neil Armstrong guy” on nighttime talk TV appearances, once reassured us, “Don’t worry folks as long as the Sun is revolving around the Earth, we’ll be fine!”… I wonder if Jim knew something!
I have not been the most faithful listener or reader, Bob, as life takes you on different paths. But I wanted to say that some of the odd “Sunday” blog posts of yours, which I read a long time ago as an atheist, were some of the most moving… And I now understand why.
Check out a BASIC computer program, like this:

    10 PRINT "HELLO"
    20 GOTO 10
Is that an infinite loop? Well it sure looks like it will run forever … but can we say that it has really been thoroughly tested and proven to run forever? Depends on what you accept as proof.
If you demand that it must run all the way to infinity, before you acknowledge that loop to be infinite, then your demands are not achievable so you need to say it is not really infinite, or at best speculatively infinite.
On the other hand, if you accept “seems obvious” as a proof, then … well, it’s obviously going to run forever, isn’t it? But you bump into this with all convergence problems … no one actually divides infinitesimal numbers to get a derivative; instead what you do is construct a limit, work out what would happen as you approach the limit, then convince yourself that it’s OK to jump ahead to dividing by zero. These types of problems all involve a little leap of faith, and the danger goes away for a problem that’s been well tested (e.g. Newtonian physics), not because there never was any danger to begin with, but because the path is well trodden.
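For instance, here’s roughly what that “construct a limit” move looks like in practice (a toy Python sketch; f(x) = x^2 at x = 2 is an arbitrary choice of mine):

    def f(x):
        return x * x

    # Nobody divides 0 by 0; we watch the difference quotient as h shrinks,
    # then convince ourselves it is heading to 4, the derivative at x = 2.
    for h in (0.1, 0.01, 0.001, 1e-6):
        print(h, (f(2.0 + h) - f(2.0)) / h)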
This type of thing gets much worse when you bog down into numerical methods, and then constantly deal with errors. You see games sometimes where the physics engine can generate free energy … turns out that solving Newtonian physics equations while keeping the energy balanced is a bit trickier than it looks.
By the way, I don’t accept that mathematics is in a crisis … and the stuff about pompous gatekeepers who have risen above their genuine ability and want to discourage upstarts from challenging them … that’s as old as humanity. The big dog in the pack doesn’t want to fight every other dog, because he can’t afford the injury, and sooner or later he inevitably makes a mistake and loses. Most fights are settled by posturing and threats alone, which is how you can have a stable leadership in the first place.
You know my low opinion of the Coase Theorem, which leads to a patently ridiculous conclusion … but most economists believe it. Any attempt to disbelieve will result in being told that no one is as smart as *insert famous guy here*, so stop arguing. Personally I see the Coase Theorem as another of these situations where you have a hidden presumption of convergence in the system, but worse, there’s a presumption that convergence will always be to a single point and never path dependent. This assumption has become so instinctive for economists that they no longer examine it critically … but I would have thought there are plenty of examples (both theoretical and real) that serve as counter-examples.
When Lorenz stumbled across the idea that equations can have a strange attractor, he annoyed a lot of people who believed they could calculate anything they wanted. I don’t see this as any sort of “crisis” … it merely made the subject richer and more interesting. By the way, Lorenz kept annoying people until he died, and you will notice that right after he died, his homepage vanished … there used to be a bunch of his best articles available for free download and all of those are gone, as far as I can discover. It’s doubly annoying because making a reference to the article with a URL, now results in a broken link. On that topic though, few people understand the deeper implications of what chaos is really about. Take the Lorenz Equations, since he is the topic of conversation:
https://en.wikipedia.org/wiki/Lorenz_system
Use the most standard parameter values: sigma = 10, beta = 8/3, rho = 28 and select an easy starting point for time zero: x = 1, y = 1, z = 1 (all ones makes it both precise and simple). What are the x, y, z values at time 100? Getting the answer to two significant figures would be quite sufficient.
That’s a well posed problem … in as much as the equations, parameters, and initial conditions are all exactly known … however to the best of my knowledge there is no currently known mathematical technique that gives the real answer. Numerical techniques will give you some kind of answer, but not accurate to two significant figures … actually not even accurate to one significant figure … you can get all sorts of answers to that question, if you go and try it. It’s a great exercise for students who are interested in learning about ODE solvers, because it’s simple enough to have a go at, and you can send them all out independently, then when they come back you compare results. It’s fascinating to look at where and why the convergence fails … and you can learn a lot by trying a range of approaches.
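If anyone wants to have a go, here’s a minimal Python starting point (assuming SciPy is available; the tolerance values are arbitrary picks of mine). Run it and watch the reported end states disagree with each other:

    from scipy.integrate import solve_ivp

    def lorenz(t, v, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
        x, y, z = v
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    # Same equations, same parameters, same initial conditions; only the
    # solver tolerance changes, yet the reported states at t = 100 differ wildly.
    for tol in (1e-6, 1e-9, 1e-12):
        sol = solve_ivp(lorenz, (0.0, 100.0), [1.0, 1.0, 1.0], rtol=tol, atol=tol)
        print(tol, sol.y[:, -1])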
Failure to solve a simple ODE like that suggests there must exist a large family of intrinsically incalculable equations, based on the same chaos problem. Next year some clever young thing might figure out how to accurately calculate some or all of these, but then again, possibly not … maybe there’s a reason why it simply can’t be done. I have found it difficult to convince some mathematicians that this ODE cannot be calculated; they generally wave it away but never give me the actual answer.
Regarding Bob’s skepticism of complex numbers… math is all about taking an abstract concept and running with it to see how far you can get. Two very important ideas in math are 1) completeness and 2) closure.
You start with a set of natural numbers, 1, 2, 3 … and start applying arithmetic operations to them. A sum of two natural numbers is a natural number. However, a difference of two natural numbers is not always a natural number (for example, 1-3). In other words, the set of natural numbers is not “closed under the operation of subtraction”. If you “close” the set of natural numbers under the ‘subtract’ operation, you get to the set of integers (… -2, -1, 0, 1, 2, …). The set of integers is closed under the operations of addition and subtraction because the sum or difference of any two integers is also in the set of integers. Integers are also closed under the operation of multiplication, but not division (1/3 is not an integer.) Closing the set of integers under division is how you get to the set of rational numbers (which are closed under addition, subtraction, multiplication and division.)
So far so good. Now we get to explore the “completeness” property of a set. The set of rational numbers (Q) is closed under those operations, but it’s not “complete” (in the sense that a sequence of rational numbers can converge to a limit that is not a rational number.) This is how you get to real numbers (R) – by taking the rational numbers and adding in the limits of converging sequences. The result is a complete set.
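To see that incompleteness concretely, here’s a small Python sketch of my own using exact rational arithmetic: every Newton iterate for x^2 = 2 is a rational number, yet the value they close in on is not.

    from fractions import Fraction

    x = Fraction(2)              # a rational starting guess
    for _ in range(6):
        x = (x + 2 / x) / 2      # Newton step for x^2 = 2; each iterate stays rational
        print(x, float(x))       # 3/2, 17/12, 577/408, ... closing in on sqrt(2), which is irrational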
We could probably stop at real numbers since that’s all we use in everyday life. In fact, we could probably stop at rational numbers since they approximate any number we want to any finite precision.
But math goes further by exploring the concept of “algebraic closure”, i.e. making sure all polynomials factor completely into linear polynomials (i.e. have roots in the same field.) Once you go there, you quickly realize that the field of real numbers is not algebraically closed. Indeed, complex numbers (C) are the algebraic closure of the field of real numbers. Complex numbers have the rare property of being both complete *and* algebraically closed.
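For instance (a trivial check in Python, my own illustration): x^2 + 1 = 0 has no real solution, but both roots sit right there in C.

    for x in (1j, -1j):
        print(x, x ** 2 + 1)     # both print 0j: the roots of x^2 + 1 live in C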
This should not be taken for granted – in fact, it is very rare for a field to be both complete and algebraically closed. If you extend rational numbers in the direction of p-adic numbers (where a “small distance” between two numbers is defined as divisibility by a high power of a prime p), you’ll find that the algebraic closure of that field is not complete. It takes another completion and algebraic closure to arrive at a complete and algebraically closed field. But p-adic numbers simply redefine what the “distance” or “difference” between two numbers is… and yet we arrive at a completely different result. p-adic numbers are used in cryptography and other applications and are a useful concept that simply follows the same completeness/closure path in a different direction.
In math, you never know what you’ll find until you go looking, no matter how “absurd” or “useless” something appears to be.
“However, a difference of two natural numbers is not always a natural number (for example, 1-3)”
I think this depends on what kinds of things you’re trying to subtract and whether or not the operation is appropriate for that type of problem.
For example, if you have 1 apple, you cannot subtract 3 apples in the real world – you’re not left with -2 apples.
So, I think it would be more appropriate to say that 1-3 is a nonsense problem in this context, and that the only reason a negative number can make sense is if it’s considered a debit of some kind – something that’s owed.
Something that’s owed, though, can be stated in natural numbers – 1-3 equals 2 units owed.
And that, I think, is the proper way to think of negative numbers: they are simply positive debits.
There are other math concepts I have problems with. I think one was finding the square root of a negative number, or something, where you had to cheat and do the math inside the radicand first, because if you tried to find it by multiplying one number against itself, it could never result in a negative number. I’m sure I’m getting the problem wrong – I know it had something to do with negative numbers and radicands, and possibly “e”.
Reminds me a bit of William Lane Craig’s defense of the Kalam Cosmological argument, in which he asserts that there are no actual infinites (and therefore the universe had a beginning), and he uses Hilbert’s Hotel to illustrate the absurdities that arise when we assume that there are.
I see math as a language, yes a subset of logic and thus a subset of the mind of God. I’ve programmed Mandelbrot Set explorers in multiple languages, and now Steve has me wondering, what was I actually computing? Because the computer doesn’t know that I’m pretending there’s a square root of -1; all it’s doing is following the algorithm I gave it. In fact, the computer doesn’t even know what negative numbers are (which reminds me, early mathematicians didn’t believe in them either).
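To make that concrete, here’s a minimal sketch of the iteration in Python (my own toy version, not anyone’s production explorer): the machine only ever shuffles pairs of ordinary real numbers, and no “i” appears anywhere.

    def escape_steps(cx, cy, max_iter=100):
        # z -> z^2 + c done with real arithmetic on the pair (x, y),
        # since (x + iy)^2 = (x^2 - y^2) + i(2xy).
        x, y = 0.0, 0.0
        for n in range(max_iter):
            x, y = x * x - y * y + cx, 2 * x * y + cy
            if x * x + y * y > 4.0:   # |z| > 2 means the orbit escapes
                return n
        return max_iter               # never escaped within the budget

    print(escape_steps(-0.5, 0.0))    # a point that stays bounded
    print(escape_steps(1.0, 1.0))     # a point that escapes quickly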
What’s giving me pause though, is that the language of math we use now is interestingly beautiful (and useful, as you’ve pointed out) in some ways… see the youtube channel “3blue1brown” for some really insightful visual explanations. Would a more discrete expression of some of these abstract concepts be easier to understand, even if they are somehow logically equivalent? Maybe lead to math & physics breakthroughs? Now you’ve got me wondering. Great conversation.
Question: why are imaginary numbers fishy because they don’t conform to the rules of multiplication? Does all of multiplication conform to the rules of addition?
Why is a negative number times a negative number a positive number? We can show algebraically why a negative times a negative must be a positive for it to fit the previous rules about multiplication, but we’re sort of saying “a negative times a negative equals a positive because it has to.” It doesn’t really make intuitive sense as a shorthand for the addition which motivated multiplication to begin with (i.e. 5 * 2 is 5 + 5 or 2 + 2 + 2 + 2 + 2).
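For what it’s worth, the standard algebra behind “because it has to” is just the distributive law, spelled out:

    0 = (-1) * 0
      = (-1) * (1 + (-1))
      = (-1) * 1 + (-1) * (-1)    [distributivity]
      = -1 + (-1) * (-1)

so (-1) * (-1) must equal 1, or else the distributive law stops working once negative numbers are allowed.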
The same argument can be made for introducing complex numbers. They sort of just have to exist in order for us to resolve things like quadratics. Here is a weird little thing that we can just get around by saying, “well, just define it as such and move on.”
“Why is a negative number times a negative number a positive number?”
The reason is because you’re saying there’s a debit of negatives. (See above where I make the case that negative numbers only make sense where the concept of debits are applicable.)
You can work backward to see this: if I’m subtracting from a negative number, what I’m doing is increasing the amount of a debit: “negative 3 minus 1” increases the debit.
But if I say that the amount of debits is decreasing by X amount, then the result has to be positive, or, in other words, a lack of debits.
Take “-25 * -1”: What we’re saying is that there’s a debit of one set of negative-twenty-fives – which is positive 25.
As an aside, there’s this really great video by the YouTube channel “Veritasium” that talks about the difference between digital and analog computers.
What I came away with, from that video, is that it’s entirely possible that some of the problems we’re trying to solve with math may not even be math problems at all, and we’re using the wrong method.
Consider the following statement:
The Most Powerful Computers You’ve Never Heard Of
youtube [dot] com/watch?v=IgF3OX8nT0w
(@ 1:23) “Analog computers have a continuous range of inputs and outputs, whereas digital only works with discrete values.
“With analog computers, the quantities of interest are actually represented by something physical, like the amount a wheel has turned, whereas digital computers work on symbols, like zeros and ones.
“If the answer is, say, two, there is nothing in the computer that is ‘twice as much’ as a one. In analog computers, there is.”
The whole video is great, and there are examples of really complex analog computers that solved real-world problems.
Maybe the reason why we can’t find pi with absolute precision is because we’re using the wrong method and/or a number system with the wrong base.
I’m not sure where you found this guy, Bob, but he’s clearly a nutter.
The entire discussion was premised on the idea that infinity is nonsense because the observable real world is finite.
But when you bring up the perfectly reasonable argument about velocity, he talks about velocity not being real. Sorry, no. We *see* velocity. You don’t get to just deny it because it’s inconvenient to you.
When did he say velocity wasn’t real? Maybe you misheard?
I only heard him speak on velocity at 1:25:08
I am disappointed by this episode. I don’t want you to have Mr. Patterson on again, but I wish that when you interviewed him you had pressed him more about his unusual beliefs. Sure, the conclusions of the Banach-Tarski construction and the Riemann rearrangement theorem are startling to say the least, but to go from this to the claim that Pi is a rational number should require much more explanation. Patterson claims space is ultimately discrete: that there is a smallest unit of distance. The fact that we can move isotropically, and that the hypotenuse of a right triangle has a measure greater than any side and less than the sum of the sides: this seems a simple disproof of his claim.
Wildberger claims, and Patterson seems also to believe, that rationals, and even integers (larger than humanly computable), do not exist. Does this not mean there is a largest existing integer? What happens when you try to add one to it? Is Patterson really willing to bite the bullet and agree that the abstract idea of a prime number is nonsensical, even if certain small integers can be factored?
Bob, I loved this show. I’ve been railing against dark matter for 30 years, and it’s highly validating to see so many scientists giving up on it today.
You used to talk a lot about the epistemology of economics, relating it to geometry, and I firmly believed you were on the right track. However, I saw a debate you had years ago where your opponent retorted with a Neil deGrasse Tyson “We now know… space is not Euclidean,” and I didn’t hear you bring up that geometry analogy since. And you guys touched on it briefly here.
You can’t “discover” a new geometry any more than you can trip over it. Geometry is the rules we derive from logical proofs that demarcate whether one is reasoning about space rationally. The new geometries are simply a poor interpretation of experimental results, and the more you look under the covers of their “new geometry,” the more irrational it becomes.
For example, it was observed that light “travels” from point A to point B at a fixed rate regardless of A and B’s relative velocity to point C (or anything else.) However, a “photon travels between A and B” paradigm suggests that C should perceive the photon traveling faster if A and B move relative to C, but that is not observed. To resolve this, to make the photon’s velocity constant in all frames, they “discovered” that space and time bend (which is entirely irrational.)
Maxwell’s original equations, which preceded the “discovery” of photons, defined the phenomena as: A induces an electromagnetic response in B at a rate proportional to their distance (regardless of A and B’s relative velocity to anything else.) Nobody had to “discover” a new geometry to understand it, and few suggested a particle was “traveling between” because that didn’t fit the evidence. A more sound hypothesis is that some process is instantaneously occurring between A and B, and the process occurs at a rate dependent on their distance – no objects “traveling between.”
I hope you will consider this, and I might hear some geometric analogies to economic epistemology from you in the future:)
[Relativity] is “a mass of error and deceptive ideas violently opposed to the teachings of great men of science of the past and even to common sense.” – Nikola Tesla
I wish that Patterson’s thesis were true, in the sense that it would be a fun and surprising outcome that complements my political views. And I think you make some good points, like that we should verify our assumptions/preconditions, that game theory isn’t a good representation of actual human behavior, and that even if, say, the Banach-Tarski paradox is true, it doesn’t imply that it’s possible in the real world to transform the matter from one orange into two oranges that are the same as the original orange. But I’m afraid I’m not convinced of Patterson’s thesis.
In particular, I’d like to defend Riemann’s rearrangement theorem. Your argument against Riemann’s rearrangement theorem runs something like this:
– Addition is commutative, i.e. x + y = y + x.
– If you rearrange the terms of a finite sum, you will get the same result.
– If you rearrange the terms of an infinite sum, you will also get the same result.
But I object to the third claim. In the case of a finite sum, we can apply the commutative property a finite number of times to prove the second claim in any particular instance. But regarding the third claim, (a) there is no such thing as an actually realized infinite sum, and (b) even if we were to imagine such a thing, we could not actually realize an infinite number of applications of the commutative property in order to execute the rearrangement. Really, I think the third claim amounts to more of an intuition than an actual argument, and that intuition turns out to be wrong.
Carefully stated, Riemann’s rearrangement theorem is actually a theorem about finite things, not a theorem about infinite things. To use your example, we would say that given a sequence defined as a_i = if i mod 3 > 0 then 1 / (2 * floor(i / 3) + (i mod 3)) else -1 / (i / 3) (the first few terms of which are 1/1, 1/2, -1/1, 1/3, 1/4, and -1/2), then for any value epsilon > 0, there exists a value n_0 such that for all n > n_0, the sum of the first n values of the sequence is within epsilon of ln 2.
The concept of infinity appears nowhere in this statement. We are taking sums of finite numbers of terms. epsilon, n_0, and n are all finite. The sequence itself isn’t some kind of mystical infinite entity, but rather a sort of a function or algorithm that has a finite description, takes a finite positive integer as input, and outputs a fraction that is the ratio of two finite integers. It’s hard to see what specifically one would object to in this claim. (Unless you object to the use of ln 2 on the grounds that it is irrational, in which case substitute an example that converges to a rational number.)
For anyone who thinks this claim is false, I challenge you to demonstrate it. Give me a value of epsilon, then I’ll give you a value of n_0, then you can give me a value of n, and we can compute the sum of the first n terms and prove that I was wrong. If you object to ln 2 on the grounds that it’s irrational, then we can try and find a different example that converges to a rational value and work off of that. Or, if you’re okay with a version where we change “within epsilon of ln 2” to “between 0.6 and 0.8,” which would be sufficient to show that the limit either doesn’t exist or is greater than 0, I’ll kick off the challenge by offering n_0 = 50.
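And for anyone who’d rather just watch it happen, here’s a small Python sketch of the challenge (my own; it uses floats rather than exact fractions, which is fine at this precision):

    import math

    def a(i):
        # the sequence defined above: two positive terms, then one negative
        if i % 3 > 0:
            return 1.0 / (2 * (i // 3) + (i % 3))
        return -1.0 / (i // 3)

    sums, s = [], 0.0
    for i in range(1, 2001):
        s += a(i)
        sums.append(s)
    print(sums[-1], math.log(2))           # the partial sums hover near ln 2
    print(min(sums[50:]), max(sums[50:]))  # every n > 50 lands between 0.6 and 0.8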
I think it’s easier to see why your intuition about converging sums is wrong by using a different but related example. Forget about Riemann’s rearrangement theorem and converging sums for a moment. I want to talk about Bill’s rearrangement “theorem,” which I am just now inventing. Informally, it says that for every integer k, there exists some sequence consisting of 1’s and -1’s whose partial sums eventually alternate between k and k + 1. Formally, it states that for every integer k, there exists a sequence a_i (defined by some algorithm) where a_i is either 1 or -1 for all i, and a value n_0 such that for all n > n_0, the sum of the first n terms is between k and k + 1.
Once you understand what this is saying, I think you would agree that it has to be true. If a_i starts with ten 1’s and then alternates between 1 and -1, then the partial sums after the ninth will alternate between 10 and 11. If a_i starts with twenty 1’s and then alternates between 1 and -1, then any partial sum after the 19th will be between 20 and 21. And note that both sequences have infinitely many 1’s and infinitely many -1’s. So informally, you could say that we can rearrange the terms of the first sequence to yield the second sequence, thereby changing the series’ quasi-sum.
This could even be demonstrated or realized or illustrated (or whatever term you want to use) in a physical process, at least for say two particular values of k. For example, you could have a bag of coins, start by putting ten coins in it, and then alternately add a coin and remove a coin any number of times. After adding the first nine coins, you would never reach a point where the bag has anything other than 10 or 11 coins in it. Then you could do the same for 20/21 coins. So the “theorem” actually tells us something about the real world. Bill’s rearrangement “theorem” is a synthetic a priori, or rather, it yields synthetic a priori claims.
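Here’s the coin-bag version as a few lines of Python (my own simulation), just to show there’s nothing mystical going on:

    bag, seen = 0, set()
    moves = [1] * 10 + [1, -1] * 20      # ten coins in, then alternately add and remove one
    for step, delta in enumerate(moves, start=1):
        bag += delta
        if step > 9:                     # after adding the first nine coins, per the argument
            seen.add(bag)
    print(seen)                          # {10, 11} and nothing else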
The point being, I think the same intuition that suggests that Riemann’s rearrangement theorem is wrong would also have to suggest that Bill’s rearrangement “theorem” is wrong. But since I imagine you would agree that Bill’s rearrangement “theorem” is correct, you should be able to see that there must be something wrong with your intuition.
Nice! I heartily approve of Bill’s Rearrangement Theorem. I got a little lost in the epsilons and stuff, but when you started adding +1’s and -1’s I began to grasp it, and then the coins in the bag was a great clincher.
I would point out that just because we humans invented “partial sums” and then chose them as our analysis tool, that is not infinity’s fault. Partial sums, as (I think) you point out, have order dependence built right in (not to mention dynamical system behavior), so WE put it there! Silly to complain about it then. If only we had a better tool… but I wrote a much longer comment of my own about this, so I will wait for its moderation.
I hereby exonerate infinity from the “rearrangement” troubles of infinite series. The culprit is partial sums, the tool that HUMAN mathematicians CHOSE for analysis. As proof consider that
* Partial sums create the same “rearrangement” troubles when used on finite terms, which isn’t normally done, but we can easily do it to investigate.
I illustrate this in an article on my Substack:
“Unscrambling infinity and chronicling math’s forbidden numbers”
https://www.twadpocklereport.com/p/unscrambling-infinity-and-chronicling
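And here’s a minimal sketch of that finite-terms point (a toy example of my own, not taken from the article): the same terms in two orders give the same total but very different partial-sum trajectories.

    from itertools import accumulate

    terms = [1, -1, 1, -1, 1, -1]
    rearranged = [1, 1, 1, -1, -1, -1]       # same multiset, different order
    print(list(accumulate(terms)))           # [1, 0, 1, 0, 1, 0]
    print(list(accumulate(rearranged)))      # [1, 2, 3, 2, 1, 0]
    print(sum(terms), sum(rearranged))       # totals agree: 0 and 0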
* Infinite processes/sets/entities have been analyzed quite successfully when the tool of partial sums is not needed, e.g. the proof that there are more real numbers than integers, which is to say we have one “infinity” that is measurably larger than another. This classic proof can be understood by middle schoolers and makes no fanciful assumptions.
CHOOSING partial sums as the tool to analyze infinite series, by DEFINITION, leads to “order matters,” as we are literally unfolding a sum one term at a time from left to right, and it converts what started out as a static sum into a dynamical system, opening a supersized can of worms and dumping it right onto our table. Riemann’s definitions of “summing” on these series only scratch the surface, in a rather naive way, of dynamical system analysis, as many engineers and physicists can point out. We could have some lively debates about whether Riemann made the “right choices” here–and I would be one of the lively voices–but banishing “infinity” from math is not called for.
Do I have a better tool in mind for summing infinite series? No, I do not. But that does not impugn “infinity” any more than carpenters’ banged-up thumbs impugn carpentry. It’s the hammer’s fault, the HUMAN CHOICE to make and use that tool, not to be blamed on the grand concept of all fastening and joinery, or even the smaller idea of nails themselves. Despite all their modern tools, carpenters have yet to retire their hammers, and so they will have banged-up thumbs for the foreseeable future. Nevertheless, I can sit on wooden chairs whether or not they could theoretically be made without hammers, and I do not need to ask myself whether this chair was made with a hammer (as if I am some sort of hammer-vegan) or whether its carpenter’s thumbs came through the process unscathed. In a post-hammer and post-partial-sum Utopia, after some geniuses rid us of the need for these tools, we would have none of these problems–carpentry and infinity would be just fine–and we might even be better at summing infinite series and driving nails.
I realize this is but one point within the broad discussion, and I will not launch into all the others here (you’re welcome!)… but just a few more things:
* We should quit saying “infinity” context-free as if it is a thing. This word invokes a very broad category of different concepts, which I labeled “processes/sets/entities” above but probably still didn’t catch it all. When we approach any endless thing, our framing of it and our purpose in asking questions about it determine what tools are appropriate. There are many important distinctions and commonalities that are slurred together by calling them all “infinity” and expecting them all to share the same fate.
* I absolutely agree that SCIENCE has been horribly abused in many ways, and I would be a big fan of efforts to fix that, so I apologize for focusing on the negative here, the selective parts of this BMS episode I disagree with–I just don’t have much to add to the rest. I also am sympathetic to improving any mathematical tools/strategies in common use, such as partial sums, as is hopefully clear at this point. So, soldier on, Steve Patterson, but I advise more caution with the distinctions around infinity and human choice/purpose in analysis tools.
* Moving the goalposts is FUNDAMENTAL to math! This feels very wrong or at best paradoxical, since we like to say “math is objective” etc. It is, but fashioning our questions, framing them into symbols, and choosing our analytical tools for the job is not. In middle school, we were told that negative numbers do not have square roots, and then in high school they told us to use imaginary numbers. This is not a contradiction; it is an intentional broadening of purpose.
But I can make that even simpler. Let me close with this little story that we all must have endured in grade school, in one form or another, and yet we’ve all survived to this day:
—
TEACHER: multiplying is when you add a number to itself a given number of times. Now tell me, what is FOUR times ONE?
US: Okay, add FOUR to itself ONE time… that’s 4+4… EIGHT! FOUR times ONE is EIGHT.
TEACHER: Uhhh, I misspoke. I mean, add the given number of fours all together.
US: Oh, I see. To get EIGHT, we need to add up TWO fours, so FOUR times TWO would be EIGHT.
TEACHER: YES! And what is four times one?
US: Hah, trick question! Adding takes two numbers, and you said only one four, so I can’t actually add. I guess you can only multiply things by two or more–
TEACHER: Sheesh, just let the “other” number be zero. You do have a four, plus “nothing” else, right?
US: Okay, I guess… my tummy feels a little weird about this though. I thought we were doing MATH, not Constitutional law. It’s almost like you have an INTUITION of how you want multiplying to work, and you keep moving the goalposts to get it.
NARRATOR: We all know the punchline. Multiplication STARTS from this intuition of repeated adding, and then we turn it into a regular pattern of counting up and down by fours. From 4*2=8, we go down the ladder to 4*1=4, 4*0=0, 4*(-1)=-4, 4*(-2)=-8…
We ultimately discard the original intuition in favor of this abstract ladder that we climb easily up and down without kinks or ambiguities. But it all started from an intuition of adding something to itself repeatedly.
Now let’s do exponents:
TEACHER: Exponents are when you multiply a number by itself a given number of times.
US: Okay, four-to-the-one is 16, because we multiply 4 by itself once.
TEACHER: I’m getting deja vu.
US: Okay, I guess it’s actually multiplying a certain number of fours all together, so 4-to-the-two is 16, and four-to-the-one is… Wait, I can’t multiply just one number…
TEACHER: Deja vu again.
US: FINE! I’ll let myself multiply just one number… so FOUR times… NOTHING… is ZERO!
TEACHER: FOUR to the ONE is ZERO?? NO!!
US: B-b-but you said let the other number be “nothing”!
TEACHER: IT’S FOUR!! FOUR to the ONE is JUST FOUR!!!!
US: WHY??? *crying*
TEACHER: *goes on lunch break*
NARRATOR: (chuckling) Wait’ll they tackle FOUR-to-the-ZERO… equals ONE? Oh, boy!
Again, we all know the punchline, and we understand the intuition that led HUMAN BEINGS to CHOOSE how we define exponents. We could have really chosen 4-to-the-one to be ZERO for the precise reason above, but WHY would we do that? It’s not very useful. Instead, we made a little ladder that we climb up and down by multiplying and dividing by four. 16, 4, 1, 1/4, 1/16… easy, and it leads to nice formulas (e.g., combinatorics) without awkward exceptions. But not quite the intuition we started with.
It’s okay. Math does not have to be “objective” in the sense of our initial intuitions rigidly maintained through every level of every niche of every topic. Moving these goalposts is a feature, not a bug. In real life, we always reconfigure our tools to tackle different jobs, and the more refined niches require exotic tools unsuitable for any other purpose. Eventually we are climbing a specialized ladder into the clouds, peering through custom prismatic eyepieces into thickly gloved hands and declaring that all the positive integers add up to -1/12. Totally normal, from THAT vantage point and ONLY that one.
This goes way beyond math, though. Take the following English sentence:
“The baseball pitcher WALKED two batters, then WALKED to his car, and back at home WALKED his dogs.”
There is no crisis over the symbol “walked” meaning three different things, though I might confuse you if I were sloppy with the contexts, or if you had never heard of baseball.
Another simple example of CHOOSING our tools/definitions in the wild:
“Zero to the power of zero”
https://en.wikipedia.org/wiki/Zero_to_the_power_of_zero
The discussion explores advantages and disadvantages of choosing what zero-to-the-zero ought to equal in various contexts.
This is all very Austrian: subjective human choice and purpose leading to objective outcomes. Rejoice!
Bernie has a very interesting video lecture on math and epistemology. If he one day shares it here, I recommend taking the time to view it. It’s cool.
Thanks, Adam!
The video’s URL is included in the Substack article I linked to above, and I was trying not to be too link-spammy. But gee, since you mentioned it, the talk itself is here:
Forbidden Numbers Ate My Brain: The dark side of math and why we need it
https://fnamb.primetime.games/
It starts with the story of Hippasus of Metapontum, a member of the Pythagorean cult who was martyred for the square root of two, and it only gets weirder from there. Because so does math.