Ep. 107 Max Sklar Interviews Bob Murphy on Mises’ Class vs. Case Probability
On his podcast The Local Maximum, software engineer (and developer at Foursquare) Max Sklar has Bob explain Ludwig von Mises’ distinction between class and case probability. They apply Mises’ framework to other economic theory, and discuss Bayesian inference and machine learning.
Mentioned in the Episode and Other Links of Interest:
- Max Sklar’s podcast, the Local Maximum.
- Mises’ Human Action and Bob’s Study Guide to Human Action; the class vs. case probability distinction is handled in Chapter VI on uncertainty.
- Max Sklar’s episode dealing with his own views on probability.
- Help support the Bob Murphy Show.
The audio production for this episode was provided by Podsworth Media.
Interesting. It seems to me like it’s something to do with the definition of class vs. case probabilities being subjective.
Isn’t there really a gradient between the two? One extreme is the singular case. The other extreme is something that is 100% repeatable in a controlled environment, e.g. a die roll. Even a regular die roll could be turned into a unique case if an earthquake occurred just as we rolled, but we could make it even more repeatable by performing the roll in an earthquake-proof bunker with an asteroid-safe ceiling.
It seems to me that we can only talk about probabilities for things that we can actually repeat. “If we did this election 1 million times” – does this include path dependency, i.e. do people remember the last 999,999 elections? Does time go forward as we do this? Or are we re-setting and re-playing the entire universe, atom by atom, every time? In which case, the answer would probably depend on determinism being true or false. So I’d agree with Mises that it makes sense to speak of the probability of an event only in connection with its repeatability. “Presidential election” as a class happens every 4 years, so it’s somewhat of a class, but the circumstances are all different.
But there definitely seems to be more to how we think about risk or likelihood of events, even if putting numbers on it doesn’t make sense. E.g. Max’s example with the airplane crash – maybe saying 1/1,000,000 vs. 1/10 doesn’t make sense, but there is a sense in which we’d much rather fly in a new Boeing 757 than a Boeing 737 MAX – and we’d probably take either over a broken Cessna with a defective engine. It’s not just ordinal, I think. There is a huge “distance” between some of the points. Or maybe I have 90 units of 757 on my ordinal scale and then 10 of 737 MAX.
Say I have a choice of taxis to take, and the only difference is the car. One is a 2019 model, one is a 2018 model, and one is a 1945 model. I can easily order them ordinally, but the 2019 and 2018 are so close together that I might not care which I take. The 1945 model, on the other hand, is likely a very distant 3rd choice, unless I am a classic-car enthusiast (in which case the modern cars would be the distant choices).
Yet I agree that it doesn’t make much sense to put numbers on this. I could give you a range of percentages, or of how much more “likely” I’d be to take one over the other, but those are probably just projections onto a (numerical) scale we’re familiar with. Like when people say they’re 80% sure – they’d probably also be ok with 75% or 90%; 80% is just one number in the comfortable range, while 30% might seem too low to them.
So how do we express an ordinal ordering with different distances between the choices? Doesn’t that turn it cardinal?
It struck me in the shower this morning that I could easily express the “distance” in my above example by incorporating other options. Here could be my ordinal scale of preferences:
1. 2019 car model
2. 2018 car model
7. Stay overnight instead of traveling
99. 1945 car model
Maybe artificially excluding other options because they don’t seem to fit is the issue?
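The rank-with-gaps idea above can be sketched in a few lines of code. This is just an illustration of the commenter’s hypothetical taxi example, not anything from the episode; the option names and rank numbers are the made-up ones from the list:

```python
# An ordinal ranking where the *gaps* between ranks are created by
# slotting in other options (hypothetical ranks from the list above).
preferences = {
    "2019 car model": 1,
    "2018 car model": 2,
    "stay overnight instead of traveling": 7,
    "1945 car model": 99,
}

# The ordering itself is purely ordinal: only comparisons matter.
ranked = sorted(preferences, key=preferences.get)

# ...but the rank gaps hint at "distance": the 2018 and 2019 models are
# adjacent, while the 1945 model sits far below even not traveling at all.
gap_2018_2019 = preferences["2018 car model"] - preferences["2019 car model"]  # 1
gap_2018_1945 = preferences["1945 car model"] - preferences["2018 car model"]  # 97
```

The scale is still only ordinal in the strict sense, but leaving room between ranks lets it carry a rough notion of distance without claiming cardinal precision.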
Hi Not Bob! This is Max, the interviewer in the episode. Thanks for the response – I’ll give my take on it, but I’ll let someone else fill in the gaps for the Misesian/Austrian view.
From talking about this topic over the last several weeks of The Local Maximum, I’ve gotten to dive into the rich and subtle philosophy of probability, which has several competing definitions (subjective, objective, logical, etc.). Sometimes the choice among them is a matter of personal taste, and sometimes people pick one based on the types of problems they are solving.
In my view, probability is always a cardinal value. Specifically, saying that A is twice as likely to occur as B is meaningfully different from saying A is 10 times as likely to occur as B.
In terms of actual human decision-making, I think that all decisions are made under some uncertainty, and people have a sense of the magnitude there (although our intuitions about probability can fool us at times). Most decisions are made following our intuition rather than being data-driven.
Now, if you can actually quantify probability and uncertainty, and do it in a competent way, then you can make better decisions. That’s what actuaries do, what candidates running for office do, and what I do in my work in machine learning.
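To make the point above concrete, here is a toy expected-cost comparison of the actuarial flavor. All the numbers are hypothetical, chosen only to show how quantifying uncertainty changes a decision:

```python
# Toy decision under uncertainty: insure against a loss or self-insure?
# Every number here is made up for illustration.
p_loss = 0.01          # assumed probability of the loss event
loss_cost = 50_000     # cost incurred if the loss occurs
premium = 700          # cost of insuring against it

expected_cost_uninsured = p_loss * loss_cost   # 500.0
expected_cost_insured = premium                # 700

# With these numbers, self-insuring is cheaper in expectation;
# raise p_loss to 0.02 and the comparison flips in favor of insuring.
```

The decision is only as good as the estimate of `p_loss`, which is exactly where the class-vs.-case question bites: the calculation assumes the event belongs to a class with a stable frequency.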
I agree with you that there has to be a gradient between case and class. Everything that happens in the world is a one-off unique event. We can only group several events together to form a class if we think they are somehow analogous. For example, I think all coin tosses are analogous, while elections are perhaps only loosely related to each other.
In terms of repeated experiments – consider that nothing can be repeated in exactly the same way, and also that we never have an infinite amount of data. Even so, we are able to draw conclusions. While our conclusion-drawing power might increase with more repeatability, there’s no magic number of repetitions (say, 100) at which I suddenly have a probability. This supports the idea that there’s no perfect class probability.
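The “no magic number” point can be sketched with the Bayesian machinery the episode touches on. Under a uniform prior, the posterior over a coin’s bias is a Beta distribution whose spread shrinks smoothly as flips accumulate; there is no sample size at which uncertainty suddenly becomes a probability. (The 50/50 flip counts below are just an illustrative assumption.)

```python
# Standard deviation of the Beta(1 + heads, 1 + tails) posterior over a
# coin's bias, starting from a uniform Beta(1, 1) prior.
def beta_posterior_std(heads: int, tails: int) -> float:
    a, b = 1 + heads, 1 + tails
    variance = a * b / ((a + b) ** 2 * (a + b + 1))
    return variance ** 0.5

# Uncertainty shrinks gradually with data -- no threshold is special.
for n in (10, 100, 1000):
    print(n, round(beta_posterior_std(n // 2, n // 2), 4))
```

Each extra batch of flips narrows the posterior a little; repeatability buys confidence by degrees, never a sudden jump from “case” to “class.”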