# Ep. 229 Bob Murphy Admits Steve Patterson Was Right About the Problems With Infinity

Bob has Steve Patterson back on the show, to concede that Steve’s skepticism of higher mathematics was right all along. Specifically, Bob explains how his recent discovery of a theorem from Riemann showed that something is indeed rotten in the way mathematicians typically handle infinite sets.

**Mentioned in the Episode and Other Links of Interest:**

- The YouTube version of this interview.
- Steve Patterson’s website and YouTube channel.
- Steve’s interview with NJ Wildberger.
- A great summary of Riemann’s Rearrangement Theorem.
- The BMS episodes on Gödel’s Incompleteness Theorems and Arrow’s Impossibility Theorem.
- Steve’s previous appearance on the Bob Murphy Show.
- Help support the Bob Murphy Show.

The audio production for this episode was provided by Podsworth Media.

This episode made me think of Gabriel’s Horn, a shape with finite volume and infinite surface area. If you filled the shape with liquid paint, you would not have enough paint to coat the inside of the shape.

I am a high school math teacher and am very interested in this. I think I might be able to save math from the weirdness Bob and Steve allege. But I must investigate this further. Thanks for doing this intriguing episode.

Check out Karma Peny’s channel on YouTube, as well as of course Wildberger and Patterson.

Ok, now that I’ve listened to the episode, as a high school calculus teacher, I still don’t feel the force of any of these objections. In Real Analysis in college, I was excited one day to ask the professor about the function (-2)^x. I had tried to envision or understand that function beforehand, and it did not really make sense to me. To my surprise, the professor simply said we don’t use negative bases with exponential functions. In other words, they are excluded because of the problematic nature of what they would create.
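A small sketch of the point (my illustration, not the professor’s actual reasoning): (-2)^x is only a real number for special values of x, which is why negative bases get excluded. Python’s `**` operator makes this visible by handing back a complex number for a fractional exponent.

```python
# (-2)^x with an integer exponent is an ordinary real (here, integer) value...
square = (-2) ** 2        # 4
# ...but with a fractional exponent there is no real answer, and Python
# returns a complex number instead.
root = (-2) ** 0.5
print(type(square), type(root))
```

So the "exclusion" isn’t arbitrary: there is simply no consistent real-valued function (-2)^x to define.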

Similarly, Bob, I think we could just discard conditionally convergent series as inconsistent concepts. That does nothing to show that absolutely convergent or divergent series create the same problems.

In my high school calculus class, I’ve actually shown pictures of Steve and explained some of his objections. To my mind, most of them rest on a too-literal understanding of the equal sign and other things in certain contexts. For instance, when we say 1 + 1 = 2 and then we also say 1 + 0.5 + 0.25 + . . . = 2, the equal sign is not used univocally. In other words, its meaning in the statement 1 + 1 = 2 is different than its meaning in 1 + 0.5 + 0.25 + … = 2. And the second statement by no means implies that we have a “completed infinity” or that anyone can “actually carry out” an infinite number of calculations. It simply means that 2 is the limit value i.e. the value to which we can get arbitrarily close by adding more and more terms.
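On the limit reading described above, the statement 1 + 0.5 + 0.25 + … = 2 asserts only that the partial sums get arbitrarily close to 2. A quick check (my sketch):

```python
# Partial sums of the geometric series 1 + 1/2 + 1/4 + ...
# No infinite number of calculations is "carried out"; we just watch
# finitely many partial sums approach the limit value 2.
partial_sums = []
total = 0.0
for n in range(50):
    total += 0.5 ** n
    partial_sums.append(total)

print(partial_sums[:4])   # 1.0, 1.5, 1.75, 1.875, ...
print(partial_sums[-1])   # within about 1e-15 of 2
```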

So, I don’t really see a problem with that. But I’d be happy to discuss this more with either of you on a future podcast (or just through comments/email). I’m open to being wrong. I’m only an amateur mathematician as a high school teacher, and I agree that the mathematicization of economics and other fields causes grave problems. I just don’t feel the force of any of Steve’s criticisms of mathematics yet. Though I do acknowledge that the “don’t ask questions!” attitude of the mathematicians with whom he has spoken is a big problem.

Peace,

John

Thanks John. I already lined up a different listener (who has a PhD in math) who is going to defend against our heresies…

Excited to listen!

Thank you. I majored in Math, and then went on to get a masters and PhD in Operations Research (applied math). I never was bothered by the sorts of results that you discussed. In fact, I found results like these to be quite beautiful. As an example, all of the strange results around the Mandelbrot set (which involves a healthy dose of infinities and infinitesimals) I found to be beautiful as well. I definitely don’t think that all infinities, or concepts related to infinities, should be removed from mathematics. As an aside, I love your episodes that deal with these sorts of topics. Please keep creating them.

Appreciate the thoughtful response John. Just wanted to respond to the idea that “the equal sign is not used univocally”. Two main points:

1) I would agree that one way to *rescue* the concepts of convergence/limits is to say the equals sign means different things in different contexts (in fact, I explain this in a video entitled “Why Calculus Does Not Solve Zeno’s Paradoxes”: https://www.youtube.com/watch?v=iU59S5JDpSU).

Unfortunately, that brings me to point #2:

2) The idea of treating equality unequally is anathema to the orthodoxy! For example, if you read about the “0.999… = 1” equality, you’ll see that the academics treat it as regular, absolute equality–that the value on the left hand side of the equation is the exact same value on the right.

In fact, this is such entrenched dogma, that there’s considerable academic research studying how long it takes undergrads to shake their pesky intuitions that the two values are different!

If you follow your (sound) intuitions here, then consult Official Theory, you will find yourself a heretic too.

Thanks, Steve, for the reply. Yes, I think you may be right that I’m “rescuing” the concepts. Here’s another way to “rescue” the absolute equality which treats the equal sign the same in both cases:

1) Let “=” relate the resulting quantities of the expressions that flank it. So, 1 + 1 = 2, since the resulting quantity of 1 + 1 has the value of 2. And 2 has the value of 2. Similarly, 1 + 0.5 + 0.25 + … = 2, since the LHS has a limit value of 2. In other words, the expression “1 + 0.5 + 0.25 + …” is taken to mean the limit of the infinite series (i.e. the value we get arbitrarily close to as we add more and more terms). So, if we allow infinite series (or other expressions with ellipses like 0.999…) to be shorthand for limit values, and treat the “resulting quantity” as the limit value, we obtain true statements of the form 1 + 0.5 + 0.25 + … = 2 while not going so far as to affirm “completed infinities” or any contradiction.

2) A more obvious case where the equal sign is not used univocally is when we write things like “the limit as x -> infinity of x^2 = infinity” — The “= infinity” phrase, sometimes used in calculus books, is shorthand for the notion that there is no limit value since the expression grows without bound (it keeps getting arbitrarily large). In this case, there are not two clear “resulting quantities” that flank the equal sign. It’s an extended usage of the same symbol (analogous yes, though not univocal). Nonetheless, this is different from my initial claim of non-univocal usage which I need to revise.

Also, FYI, this is the time of year when AP Calculus teachers across the nation are reviewing convergent and divergent series, so it’s a good time to get conversations going about those ideas! I look forward to you and Bob talking to the PhD mathematician.

Very interesting!

I wonder, Bob, how you think about “a priori” thinking in economics then? Mises is pretty big on it.

(1) I’ve written tons of stuff defending Misesian apriorism.

(2) I don’t have a problem with an a priori approach, so long as you say “Wait a minute” when the result is absurd. I’m not arguing we should be empirical in math, though Steve might.

Yea it sounded like he was pretty dismissive of a priori in general, but maybe he just meant in the context of math?

Bob mentioned imaginary numbers here – also ‘fundamentally wrong’. Via maxwell’s equations they enable us to manipulate electricity, transmit radio, build computers, have modern society.

But in some ways these seem now to be fruits of a poison tree. One wonders…

Are you disagreeing with something I said? Didn’t I say that complex numbers were used in equations for electromagnetism?

Interestingly, “but it works! magnets!” was the very first reply I got when I mentioned the content of this episode to someone.

The guy then explained to me why “of course the sum of the series changes because it’s a different series when you re-order it” but I honestly didn’t understand why it would be a different series. It sounded to me something like the commenter above, where the “=” sign just seems to mean something different when it comes to infinite series (or series in general?)
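For anyone who wants to see the rearrangement phenomenon directly, here is a small sketch of the greedy construction used in the standard proof of Riemann’s theorem: the alternating harmonic series 1 - 1/2 + 1/3 - … sums to ln 2, but reordering the *same* terms (take positives until you overshoot a target, then negatives until you undershoot) drives the sum toward any target you pick.

```python
import math

# Sum the alternating harmonic series in its given order: converges to ln 2.
original_order = sum((-1.0) ** (k + 1) / k for k in range(1, 200001))

def rearranged_sum(target, n_terms=200000):
    """Greedy rearrangement of the SAME terms: converges to `target`."""
    positives = (1.0 / k for k in range(1, 10**9, 2))    #  1, 1/3, 1/5, ...
    negatives = (-1.0 / k for k in range(2, 10**9, 2))   # -1/2, -1/4, ...
    total = 0.0
    for _ in range(n_terms):
        total += next(positives) if total <= target else next(negatives)
    return total

print(original_order, math.log(2))   # essentially equal
print(rearranged_sum(1.5))           # close to 1.5
print(rearranged_sum(0.0))           # close to 0
```

Whether you call the reordered sum "a different series" or "the same terms in a different order" is exactly the verbal dispute in the episode; the computation itself is uncontroversial.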

It’s easy enough to find equations which work that way. Probably the simplest one I can think of would be z = y / x.

At the point (x = 0, y = 0), what happens to the value of z?

The answer depends on how you approach that point. Thus if you start at (x = 1, y = 0) then you have z = 0 … suppose you halve the value of x with each step, thus approaching the desired limit point, and at every step z = 0. We can conclude that the final value of z must also be 0.

However, try it this way … start at (x = 1, y = 1) and you have z = 1. Now halve BOTH the x and the y with each step, you find that z = 1 at every step along the way. As you approach the desired limit point you conclude that z must be 1.

Using the same technique you can get any value you want for z, and that’s not weird, that’s the very well known danger of dividing by zero … you might end up crossing over a discontinuity. Note that the original equation never changed up above; I only changed the way I approach the limit point.
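The two approaches can be checked numerically. Here I take the function to be z = y / x, which reproduces the behavior described (z stays 0 along the axis, z stays 1 along the diagonal):

```python
# Path dependence at (0, 0): the limit you get depends on how you approach.
def z(x, y):
    return y / x

# Approach along y = 0, halving x each step: z is 0 at every step.
along_axis = [z(2.0 ** -n, 0.0) for n in range(1, 20)]
# Approach along y = x, halving both each step: z is 1 at every step.
along_diagonal = [z(2.0 ** -n, 2.0 ** -n) for n in range(1, 20)]

print(along_axis[-1], along_diagonal[-1])
```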

For what it’s worth, that’s part of Newtonian physics … the basic belief that the real physical world does NOT contain any discontinuity points and therefore when you calculate a derivative of a real world quantity (e.g. distance to velocity to acceleration, or energy to power) then you never bump into the problem described above. This is a belief, it’s not proven, nor would there be any way to conclusively prove the absence of a thing in a rather large universe.

Of course Newtonian physics has already been demonstrated to be wrong, in the sense that Einstein’s relativity is more accurate under certain situations, and once you allow for bending space/time you do (at least in principle) also weaken the belief that no discontinuity can ever exist. For example, very tiny black holes might come along, and the normal presumptions of how a physical derivative operates might not work if the space/time has a kink right at the point you are measuring. Thing is … no one has ever found a tiny black hole … and that’s probably a good thing!

I would agree that if something like imaginary numbers is that useful, even if it contradicts our intuitions, there must be something true about it. Same with quantum mechanics. The math seems to work. Maybe we just can’t come up with intuitive explanations…and that may be something lacking in us….even though we did manage to find the math (which is interesting in itself).

I had a good laugh at the part where the guest assumed a geocentric model is “completely wrong” and a “messier” sun-spiraling model is correct.

Why do people strawman that earth is the centre of “the solar system”, anyway? It’s an *Earth* System, within which the sun circuits. (What are we? Sun-worshipping pagans? 😉 Calling it a solar system is loaded.

The whole podcast was skepticism about mathematics and its unreality. Has anyone looked into how we ‘know’ the earth is a moving sphere and that space itself warps? Do we not just assume these? The revered Einstein himself said that he “[came] to believe that the motion of the Earth cannot be detected by any optical experiment, though the Earth is revolving around the Sun.” Really? Isn’t that more like he Believes the Earth is revolving around the Sun, but Knows that this cannot be detected?

This whole science business does seem more and more like inversion of the truth (“Satanism”, for lack of a secular term that properly conveys the extent and uniformity of the practice). I have personally had to eat crow from my zealous attachment to science, backed by education, “experts”, and a degenerate understanding of religions and scriptures.

Today, some finally know that germs do not cause disease, but diseases breed germs (just as maggots do not kill rodents, nor vultures deer), yet the correlation, however spurious, has been used as evidence for a failed theory and justification for all manner of drugs (“for by thy *pharmakeia* were all nations deceived”, Rev 18:23 😉

What of heliocentrism, then? Jim Carrey, who joked about the “Luminutty” and “that Neil Armstrong guy” on nighttime talk TV appearances, once reassured us, “Don’t worry folks as long as the Sun is revolving around the Earth, we’ll be fine!”… I wonder if Jim knew something!

I have not been the most faithful listener or reader, Bob, as life takes you on different paths. But I wanted to say that some of the odd “Sunday” blog posts of yours, which I read a long time ago as an atheist, were some of the most moving… And I now understand why.

Check out a BASIC computer program, like this:
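(The listing itself appears to have been dropped from the comment; a classic two-line BASIC loop of the kind being described would be:)

```basic
10 PRINT "HELLO"
20 GOTO 10
```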

Is that an infinite loop? Well it sure looks like it will run forever … but can we say that it has really been thoroughly tested and proven to run forever? Depends on what you accept as proof.

If you demand that it must run all the way to infinity, before you acknowledge that loop to be infinite, then your demands are not achievable so you need to say it is not really infinite, or at best speculatively infinite.

On the other hand, if you accept “seems obvious” as a proof, then … well, it’s obviously going to run forever, isn’t it? But you bump into this with all convergence problems … no one actually divides infinitesimal numbers to get a derivative; instead what you do is construct a limit, work out what would happen as you approach the limit, then convince yourself that it’s OK to jump ahead to dividing by zero. These types of problems all involve a little leap of faith, and the danger goes away for a problem that’s been well tested (e.g. Newtonian physics), not because there never was any danger to begin with, but because the path is well trodden.
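The limit procedure described here can be sketched numerically, say for the derivative of x² at x = 3: nobody ever divides zero by zero; each difference quotient uses a nonzero h, and the quotients approach the limit value 6.

```python
# Difference quotients (f(x + h) - f(x)) / h for shrinking nonzero h.
def difference_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

quotients = [difference_quotient(lambda x: x * x, 3.0, 10.0 ** -k)
             for k in range(1, 8)]
print(quotients)   # approaches 6 as h shrinks
```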

This type of thing gets much worse when you bog down into numerical methods, and then constantly deal with errors. You see games sometimes where the physics engine can generate free energy … turns out that solving Newtonian physics equations while keeping the energy balanced is a bit trickier than it looks.

By the way, I don’t accept that Mathematics is in a crisis … and the stuff about pompous gatekeepers who have risen above their genuine ability and want to discourage upstarts from challenging them … it’s as old as humanity. The big dog in the pack doesn’t want to fight every other dog, because he can’t afford the injury and sooner or later he inevitably makes a mistake and loses. Most fights are settled by posturing and threats alone, which is how you can have a stable leadership in the first place.

You know my low opinion of the Coase Theorem, which leads to a patently ridiculous conclusion … but most economists believe it. Any attempt to disbelieve will result in being told that no one is as smart as *insert famous guy here* and stop arguing. Personally I see the Coase Theorem as another of these situations where you have a hidden presumption of convergence in the system, but worse there’s a presumption that convergence will always be to a single point and never path dependent. This assumption has become so instinctive for economists they no longer examine it critically … but I would have thought there are plenty of examples (both theoretical and real) that can demonstrate counter-examples.

When Lorenz stumbled across the idea that equations can have a strange attractor, he annoyed a lot of people who believed they could calculate anything they wanted. I don’t see this as any sort of “crisis” … it merely made the subject richer and more interesting. By the way, Lorenz kept annoying people until he died, and you will notice that right after he died, his homepage vanished … there used to be a bunch of his best articles available for free download and all of those are gone, as far as I can discover. It’s doubly annoying because making a reference to the article with a URL, now results in a broken link. On that topic though, few people understand the deeper implications of what chaos is really about. Take the Lorenz Equations, since he is the topic of conversation:

https://en.wikipedia.org/wiki/Lorenz_system

Use the most standard parameter values: sigma = 10, beta = 8/3, rho = 28 and select an easy starting point for time zero: x = 1, y = 1, z = 1 (all ones makes it both precise and simple). What are the x, y, z values at time 100? Getting the answer to two significant figures would be quite sufficient.

That’s a well posed problem … in as much as the equations, parameters, and initial conditions are all exactly known … however to the best of my knowledge there is no currently known mathematical technique that gives the real answer. Numerical techniques will give you some kind of answer, but not accurate to two significant figures … actually not even accurate to one significant figure … you can get all sorts of answers to that question, if you go and try it. It’s a great exercise for students who are interested in learning about ODE solvers, because it’s simple enough to have a go at, and you can send them all out independently; then when they come back you compare results. It’s fascinating to look at where and why the convergence fails … and you can learn a lot by trying a range of approaches.

Failure to solve a simple ODE like that suggests there must exist a large family of intrinsically incalculable equations, based on the same chaos problem. Next year some clever young thing might figure out how to accurately calculate some or all of these, but then again, possibly not … maybe there’s a reason why it simply can’t be done. I have found difficulty convincing some Mathematicians that this ODE cannot be calculated; they generally will wave it away but not give me the actual answer.
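Anyone curious can try this at home. Here is a minimal pure-Python RK4 sketch using the standard parameters quoted above (sigma = 10, beta = 8/3, rho = 28, starting at (1, 1, 1)): two step sizes that would both be plenty accurate for a non-chaotic ODE give unrelated answers at t = 100.

```python
# Lorenz system integrated with classical 4th-order Runge-Kutta.
def lorenz(state, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4(state, dt, t_end):
    t = 0.0
    while t < t_end - 1e-12:
        k1 = lorenz(state)
        k2 = lorenz(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
        k3 = lorenz(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
        k4 = lorenz(tuple(s + dt * k for s, k in zip(state, k3)))
        state = tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
        t += dt
    return state

a = rk4((1.0, 1.0, 1.0), 0.001, 100.0)
b = rk4((1.0, 1.0, 1.0), 0.0005, 100.0)
print(a)
print(b)   # bounded, on the attractor, but bearing no resemblance to `a`
```

Both runs stay on the attractor, yet halving the step size changes every significant figure of the answer at t = 100, which is the point being made.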

Regarding Bob’s skepticism of complex numbers… math is all about taking an abstract concept and running with it to see how far you can get. Two very important ideas in math are 1) completeness and 2) closure.

You start with a set of natural numbers, 1, 2, 3 … and start applying arithmetic operations to them. A sum of two natural numbers is a natural number. However, a difference of two natural numbers is not always a natural number (for example, 1-3). In other words, the set of natural numbers is not “closed under the operation of subtraction”. If you “close” the set of natural numbers under the ‘subtract’ operation, you get to the set of integers (… -2, -1, 0, 1, 2, …). The set of integers is closed under the operations of addition and subtraction because the sum or difference of any two integers is also in the set of integers. Integers are also closed under the operation of multiplication, but not division (1/3 is not an integer.) Closing the set of integers under division is how you get to the set of rational numbers (which are a closed set under addition, subtraction, multiplication and division.)
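The closure ladder described here can be sketched with Python’s number types (my illustration): naturals aren’t closed under subtraction, integers aren’t closed under division, but the rationals are closed under all four operations.

```python
from fractions import Fraction

difference = 1 - 3            # subtraction leaves the naturals (-2)...
q = Fraction(1, 3)            # ...and division leaves the integers (1/3)
# The rationals are closed: every one of these results is again rational.
closed_results = [q + q, q - q, q * q, q / q]
print(difference, closed_results)
```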

So far so good. Now we get to explore the “completeness” property of a set. The set of rational numbers (Q) is closed, but it’s not “complete” (in the sense that not every Cauchy sequence of rational numbers converges to a rational number.) This is how you get to real numbers (R) – by taking rational numbers and adding limits of converging sequences. The result is a complete set.
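A concrete instance of that incompleteness (my sketch): the Newton iterates for √2 are all rational numbers, yet the value they converge to is not.

```python
from fractions import Fraction

# Newton's method for x^2 = 2, done in exact rational arithmetic.
x = Fraction(2)
iterates = []
for _ in range(6):
    x = (x + 2 / x) / 2       # each iterate is still an exact Fraction
    iterates.append(x)

print(float(iterates[-1]))    # extremely close to sqrt(2)
```

Every iterate lives in Q, but the sequence’s limit, √2, does not, which is exactly what "Q is not complete" means.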

We could probably stop at real numbers since that’s all we use in everyday life. In fact, we could probably stop at rational numbers since they approximate any number we want to any finite precision.

But math goes further by exploring the concept of “algebraic closure”, i.e. making sure all polynomials factor completely into linear polynomials (i.e. have roots in the same field.) Once you go there, you quickly realize that the field of real numbers is not algebraically closed. Indeed, complex numbers (C) are the algebraic closure of the field of real numbers. Complex numbers have the rare property of being both complete *and* algebraically closed.
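Algebraic closure in miniature (my sketch): x² + 1 = 0 has no real solution, but over C it has the roots ±i, so the polynomial factors completely as (x - i)(x + i).

```python
import cmath

i = cmath.sqrt(-1)            # the imaginary unit, 1j
roots = [i, -i]
for r in roots:
    print(r, r * r + 1)       # each residual is (numerically) zero
```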

This should not be taken for granted – in fact, it is very rare for a field to be both complete and algebraically closed. If you extend rational numbers in the direction of p-adic numbers (where a “small distance” between two numbers is defined as divisibility by a high power of a prime p), you’ll find that the algebraic closure of that field is not complete. It takes another completion and algebraic closure to arrive at a complete and algebraically closed field. But p-adic numbers simply redefine what the “distance” or “difference” between two numbers is… and yet we arrive at a completely different result. p-adic numbers are used in cryptography and other applications and are a useful concept that simply follows the same completeness/closure path in a different direction.

In math, you never know what you’ll find until you go looking, no matter how “absurd” or “useless” something appears to be.

“However, a difference of two natural numbers is not always a natural number (for example, 1-3)”

I think this depends on what kinds of things you’re trying to subtract and whether or not the operation is appropriate for that type of problem.

For example, if you have 1 apple, you cannot subtract 3 apples in the real world – you’re not left with -2 apples.

So, I think it would be more appropriate to say that 1-3 is a nonsense problem in this context, and that the only reason a negative number can make sense is if it’s considered a debit of some kind – something that’s owed.

Something that’s owed, though, can be stated in natural numbers – 1-3 equals 2 units owed.

And that, I think, is the proper way to think of negative numbers: they are simply positive debits.

There are other math concepts I have problems with, like, I think one was finding the square root of a negative number, or something, where you had to cheat and do the math inside the radicand first cuz if you tried to find it by multiplying one number against itself, it could never result in a negative number. I’m sure I’m getting the problem wrong – I know it had something to do with negative numbers and radicands, and possibly “e”.

Reminds me a bit of William Lane Craig’s defense of the Kalam Cosmological argument, in which he asserts that there are no actual infinites (and therefore the universe had a beginning), and he uses Hilbert’s Hotel to illustrate the absurdities that arise when we assume that there are.

I see math as a language, yes a subset of logic and thus a subset of the mind of God. I’ve programmed Mandelbrot Set explorers in multiple languages, and now Steve has me wondering, what was I actually computing? Because the computer doesn’t know that I’m pretending there’s a square root of -1, all it’s doing is following the algorithm I gave it. In fact, the computer doesn’t even know what negative numbers are (which reminds me, early mathematicians didn’t believe in them either).
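That point can be made concrete (my sketch): a Mandelbrot iteration never needs "i" at all. The computer just updates two real numbers (a, b) by the rule (a, b) → (a² - b², 2ab) + (ca, cb), which is all that z² + c means.

```python
# Mandelbrot escape test using only pairs of real numbers, no complex type.
def escapes(ca, cb, max_iter=100):
    a = b = 0.0
    for n in range(max_iter):
        a, b = a * a - b * b + ca, 2 * a * b + cb   # "z = z^2 + c"
        if a * a + b * b > 4.0:
            return n          # escaped: the point is outside the set
    return None               # never escaped: (probably) inside the set

print(escapes(0.0, 0.0))      # c = 0 is in the Mandelbrot set
print(escapes(2.0, 0.0))      # c = 2 escapes almost immediately
```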

What’s giving me pause though, is that the language of math we use now is interestingly beautiful (and useful, as you’ve pointed out) in some ways… see the youtube channel “3blue1brown” for some really insightful visual explanations. Would a more discrete expression of some of these abstract concepts be easier to understand, even if they are somehow logically equivalent? Maybe lead to math & physics breakthroughs? Now you’ve got me wondering. Great conversation.

Question, why are imaginary numbers fishy because they don’t conform to the rules of multiplication? Does all of multiplication conform to the rules of addition?

Why is a negative number times a negative number a positive number? We can show, algebraically why a negative times a negative must be a positive for it to fit into the previous rules about multiplication, but we’re sort of saying “a negative times a negative equals a positive because it has to.” It doesn’t really make intuitive sense as a short-hand for addition which motivated multiplication to begin with (ie: 5 * 2 is 5 + 5 or 2 + 2 + 2 + 2 + 2).
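For what it’s worth, the algebraic argument gestured at here can be made explicit in a few lines: assuming only distributivity and that anything times zero is zero, a negative times a negative is forced to be positive.

```latex
\begin{aligned}
(-1)\cdot\bigl(1 + (-1)\bigr) &= (-1)\cdot 0 = 0 \\
(-1)\cdot 1 + (-1)\cdot(-1) &= 0 \quad\text{(distributivity)} \\
-1 + (-1)\cdot(-1) &= 0 \\
(-1)\cdot(-1) &= 1 \quad\text{(add 1 to both sides)}
\end{aligned}
```

Whether that derivation counts as "intuitive" is of course the commenter’s real question; it only shows the rule is forced by the others, not that it pictures anything.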

The same argument can be made for introducing complex numbers. They sort of just have to exist in order for us to resolve things like quadratics. Here is a weird little thing that we can just get around by saying, “well, just define it as such and move on.”

“Why is a negative number times a negative number a positive number?”

The reason is because you’re saying there’s a debit of negatives. (See above where I make the case that negative numbers only make sense where the concept of debits are applicable.)

You can work backward to see this: if I’m subtracting from a negative number, what I’m doing is increasing the amount of a debit. “Negative 3 minus 1” increases the debit.

But if I say that the amount of debits are decreasing by X amount, then the result has to be positive, or, in other words, a lack of debits.

Take “-25 * -1”: What we’re saying is that there’s a debit of one set of negative-twenty-fives – which is positive 25.

As an aside, there’s this really great video by the YouTube channel “Veritasium” that talks about the difference between digital and analog computers.

What I came away with, from that video, is that it’s entirely possible that some of the problems we’re trying to solve with math may not even be math problems at all, and we’re using the wrong method.

Consider the following statement:

The Most Powerful Computers You’ve Never Heard Of

youtube [dot] com/watch?v=IgF3OX8nT0w

(@ 1:23) “Analog computers have a continuous range of inputs and outputs, whereas digital only works with discrete values.

“With analog computers, the quantities of interest are actually represented by something physical, like the amount a wheel has turned, whereas digital computers work on symbols, like zeros and ones.

“If the answer is, say, two, there is nothing in the computer that is ‘twice as much’ as a one. In analog computers, there is.”

The whole video is great, and there are examples of really complex analog computers that solved real-world problems.

Maybe the reason why we can’t find pi with absolute precision is because we’re using the wrong method and/or a number system with the wrong base.

I’m not sure where you found this guy, Bob, but he’s clearly a nutter.

The entire discussion was premised on the idea that infinity is nonsense because the observable real world is finite.

But when you bring up the perfectly reasonable argument about velocity, he talks about velocity not being real. Sorry, no. We *see* velocity. You don’t get to just deny it because it’s inconvenient to you.

When did he say velocity wasn’t real? Maybe you misheard?

I only heard him speak on velocity at 1:25:08

I am disappointed by this episode. I don’t want you to have Mr. Patterson on again, but I wish that when you interviewed him you had pressed him more about his unusual beliefs. Sure, the conclusions of the Banach-Tarski construction and the Riemann rearrangement theorem are startling to say the least, but to go from this to the claim that Pi is a rational number should require much more explanation. Patterson claims space is ultimately discrete: that there is a smallest unit of distance. The fact that we can move isotropically, and that the hypotenuse of a right triangle has a measure greater than any side and less than the sum of the sides, seems a simple disproof of his claim.

Wildberger claims, and Patterson seems also to believe, that rationals, and even integers (larger than humanly computable), do not exist. Does this not mean there is a largest existing integer? What happens when you try to add one to it? Is Patterson really willing to bite the bullet and agree that the abstract idea of a prime number is nonsensical, even if certain small integers can be factored?

Bob, I loved this show. I’ve been railing against dark matter for 30 years, and it’s highly validating to see so many scientists giving up on it today.

You used to talk a lot about the epistemology of economics relating it to geometry, and I firmly believed you were on the right track. However, I saw a debate you had years ago where your opponent retorted with a Neil deGrasse Tyson, “We now know… space is not euclidian,” and I didn’t hear you bring up that geometry analogy since. And you guys touched on it briefly here.

You can’t “discover” a new geometry any more than you can trip over it. Geometry is the rules we derive from logical proofs that demarcate whether one is reasoning about space rationally. The new geometries are simply a poor interpretation of experimental results, and the more you look under the covers of their “new geometry,” the more irrational it becomes.

For example, it was observed that light “travels” from point A to point B at a fixed rate regardless of A and B’s relative velocity to point C (or anything else.) However, a “photon travels between A and B” paradigm suggests that C should perceive the photon traveling faster if A and B move relative to C, but that is not observed. To resolve this, to make the photon’s velocity constant to all frames, they “discovered” that space and time bend (which is entirely irrational.)

Maxwell’s original equations, which preceded the “discovery” of photons, defined the phenomenon as: A induces an electromagnetic response in B at a rate proportional to their distance (regardless of A and B’s relative velocity to anything else.) Nobody had to “discover” a new geometry to understand it, and few suggested a particle was “traveling between” because that didn’t fit the evidence. A more sound hypothesis is that some process is instantaneously occurring between A and B, and the process occurs at a rate dependent on their distance – no objects “traveling between.”

I hope you will consider this, and I might hear some geometric analogies to economic epistemology from you in the future:)

[Relativity] is “a mass of error and deceptive ideas violently opposed to the teachings of great men of science of the past and even to common sense.” – Nikola Tesla