Bayesianism is (some would say) a radical alternative philosophy and practice for both understanding probability and performing statistical analysis. So, like all young contrarian students of science, I was intrigued when I first encountered Bayesian probability. But where is the fun in blaming my conversion on my own faults? I’m going to blame someone else—I’m going to blame it on Chris Fuchs.

On two occasions the Perimeter Institute for Theoretical Physics (PI) hosted lectures on the Foundations of Quantum Mechanics. I was lucky enough to be a graduate student in Waterloo at the time. The first, in 2007, was my first taste of the field. It was exciting to hear the experts at the forefront speaking about deep implications for physics and—indeed—even the philosophy of science itself. I knew then that this was the area I wanted to work in.

However, I quickly became disillusioned. The literature was plagued by lazy physicists posing as armchair philosophers. There was no interest in real problems—only the pandering of borderline pseudoscience. It’s no wonder—why bother doing hard work and difficult mathematics when peddling quantum mysticism is what gets you press?

I stayed, though, because there were several researchers at PI who seemed interested in solving real, technical problems—and, they were doing so using techniques from another field I had already worked in: Quantum Information Theory. I learned an immense amount from Robin Blume-Kohout, Rob Spekkens and Lucien Hardy while there, but the one who left a lasting impression was Chris Fuchs.

Before we get to Fuchs, though, let’s back up for a moment—just what is this Quantum Foundations thing, and what has it got to do with Bayesianism? As you know, quantum theory dictates that the world is uncertain. That is, as a scientific theory, it makes only probabilistic predictions. Many of the philosophical problems and misunderstandings of quantum theory can be traced back to this fact. Thus, if one really wants to understand quantum theory, one ought to understand probability first. Easy, right?

Nope. As it turns out, more people argue about how to interpret the seemingly simple and everyday concept of probability than argue about our most sophisticated and complex physical theories. Generally speaking, there are two camps in the interpretation of probability: frequentists and Bayesians. As noted, every student begins as one of the former. It was in 2010 that my conversion to the latter was completed.

In 2010, PI hosted its second course on the foundations of quantum theory. This time around I had a few years of experience under my belt and my bullshit detectors were on high alert. My final assignment was to summarize the course, as shown above. The only lectures that didn’t leave me disappointed were Chris Fuchs’. Because I had been reading up on Bayesian probability anyway, his “Quantum Bayesian” interpretation of quantum theory just clicked.

And it wasn’t just about philosophy. Concurrently, I was taking a great course on Stochastic Processes from Matt Scott. Most of this field takes an objective (frequentist) view of probability. Matt was patient with my constant questions on how to phrase the concepts in terms of the subjective Bayesian view. I was starting to feel a bit overwhelmed with the burden of translating everything to the new framework… then it happened.

The assignment question was as follows: Show that

$$p(x_3, t_3 \mid x_1, t_1) = \int p(x_3, t_3 \mid x_2, t_2)\, p(x_2, t_2 \mid x_1, t_1)\, dx_2,$$

the simplest case of the Chapman-Kolmogorov equation. What you were *supposed* to do is use the Markov property, $p(x_3, t_3 \mid x_2, t_2; x_1, t_1) = p(x_3, t_3 \mid x_2, t_2)$, and integrate. It was a straightforward, but tedious, calculation. Here is what I wrote: first

$$p(x_3, t_3 \mid x_2, t_2)\, p(x_2, t_2 \mid x_1, t_1) = p(x_2, t_2 \mid x_3, t_3; x_1, t_1)\, p(x_3, t_3 \mid x_1, t_1).$$

Since $p(x_2, t_2 \mid x_3, t_3; x_1, t_1)$ is a probability, it integrates to 1. Done.
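The Chapman-Kolmogorov identity is easy to sanity-check numerically for a concrete Markov process. Here is a minimal sketch (my own example, not part of the assignment) using the Gaussian transition density of a standard Wiener process:

```python
import numpy as np

# Transition density of a standard Wiener process: Gaussian with
# mean x0 and variance t - t0. (The choice of process is mine; the
# assignment did not specify one.)
def p(x, t, x0, t0):
    var = t - t0
    return np.exp(-((x - x0) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)

x1, t1 = 0.0, 0.0   # initial point
x3, t3 = 1.3, 2.0   # final point
t2 = 0.7            # intermediate time, t1 < t2 < t3

# Left-hand side: direct transition from (x1, t1) to (x3, t3).
lhs = p(x3, t3, x1, t1)

# Right-hand side: marginalize over the intermediate position x2
# with a simple Riemann sum on a wide, fine grid.
x2 = np.linspace(-20.0, 20.0, 20001)
dx = x2[1] - x2[0]
rhs = np.sum(p(x3, t3, x2, t2) * p(x2, t2, x1, t1)) * dx

print(lhs, rhs)  # the two sides agree to numerical precision
```

The grid is wide enough that the Gaussian tails are negligible at the endpoints, so the sum reproduces the integral essentially to machine precision.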

This was seen as *unphysical* because $t_2 - t_3$ is a negative time. So what?—I thought—probabilities are not *physical*, they are subjective inferences. If I want to consider negative time to help me do my calculation, so be it. After all, I considered negative money to get me into university. But what I couldn’t believe was how difficult it was to convince others the solution was correct. It was at that moment I realized how powerful a slight change of view can be. I was a Bayesian.

I understand that your table is a piece of juvenilia and you don’t want it taken too seriously. But in case there are those coming across these ideas for the first time, here are some criticisms:

a) I’ve never understood what the “statistical interpretation” is supposed to be. It sounds like the “epistemic interpretation” of Spekkens, except you say it requires frequentism. Why do you say that? And where is the epistemic interpretation on your list? Would it make ontological claims? If so, why the difference with frequentism?

b) Either you are wrong to say that Bohr, Statistical, and QBism have no measurement problem, or you are missing their “worst weaknesses.” That is, if they have no measurement problem, it is only because they radically oppose scientific realism, denying statements like “all ordinary matter is made up of atoms”.
