New papers dance!

Two new papers were recently posted on the arXiv with my first two official PhD students since becoming a faculty member! The earlier paper is titled Efficient online quantum state estimation using a matrix-exponentiated gradient method by Akram Youssry and the more recent paper is Minimax quantum state estimation under Bregman divergence by Maria Quadeer. Both papers are co-authored by Marco Tomamichel and are on the topic of quantum tomography. If you want an expert’s summary of each, look no further than the abstracts. Here, I want to give a slightly more popular summary of the work.

Efficient online quantum state estimation using a matrix-exponentiated gradient method

This work is about a practical algorithm for online quantum tomography. Let’s unpack that. First, the term work. Akram did most of that. Algorithm can be understood to be synonymous with method or approach. It’s just a way, among many possibilities, to do a thing. The thing is called quantum tomography. It’s online because it works on-the-fly as opposed to after-the-fact.

Quantum tomography refers to the problem of assigning a description to a physical system that is consistent with the laws of quantum physics. The context of the problem is one of data analysis. It is assumed that experiments on this to-be-determined physical system will be made and the results of measurements are all that will be available. From those measurement results, one needs to assign a mathematical object to the physical system, called the quantum state. So, another phrase for quantum tomography is quantum state estimation.

The laws of quantum physics are painfully abstract and tricky to deal with. Usually, then, quantum state estimation proceeds in two steps: first, get a crude idea of what’s going on, and then find something nearby which satisfies the quantum constraints. The new method we propose automatically satisfies the quantum constraints and is thus more efficient. Akram proved this and performed many simulations of the algorithm doing its thing.
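To give a flavour of the idea, here is a minimal matrix-exponentiated gradient step in Python. This is my own illustrative sketch, not code from the paper; the function name, the step size, and the choice of likelihood gradient are all my assumptions. The key point it demonstrates is that the update exp(log ρ − η·grad), renormalised, is always a valid quantum state (positive semidefinite with unit trace), so no separate step is needed to enforce the quantum constraints:

```python
import numpy as np

def meg_update(rho, grad, eta=0.1):
    """One matrix-exponentiated gradient step (illustrative sketch).

    Because exp(log rho - eta*grad) is Hermitian and positive definite,
    renormalising its trace to one always yields a valid density matrix,
    so the quantum constraints are satisfied automatically.
    """
    # Matrix log via eigendecomposition (rho is Hermitian, full rank).
    w, V = np.linalg.eigh(rho)
    log_rho = V @ np.diag(np.log(w)) @ V.conj().T
    # Gradient step in "log space", then matrix exponential back.
    w2, V2 = np.linalg.eigh(log_rho - eta * grad)
    M = V2 @ np.diag(np.exp(w2)) @ V2.conj().T
    return M / np.trace(M).real

# Example: start from the maximally mixed qubit state and update after
# measuring the projector E = |0><0|; the gradient here is that of the
# negative log-likelihood, -E / Tr(E rho).
rho = np.eye(2) / 2
E = np.array([[1, 0], [0, 0]], dtype=complex)
grad = -E / np.trace(E @ rho).real
rho_new = meg_update(rho, grad)
```

In the online setting, a step like this is applied after each batch of measurement outcomes, which is what makes the method work on-the-fly.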

Minimax quantum state estimation under Bregman divergence

This work is more theoretical. You might call it mathematical quantum statistics… quantum mathematical statistics? It doesn’t yet have a name. Anyway, it definitely has those three things in it. The topic is quantum tomography again, but the focus is different. Whereas for the above paper the problem was to devise an algorithm that works fast, the goal here was to understand what the best algorithm can achieve (independent of how fast it might be).

Work along these lines in the past considered a single figure of merit, the thing that defines what “best” means. In this work Maria looked at a general family of figures of merit called Bregman divergences. She proved several theorems about the optimal algorithm and the optimal measurement strategy. For the smallest quantum system, a qubit, a complete answer was worked out in concrete detail.
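For the mathematically inclined, a Bregman divergence is generated by a strictly convex function f, and “minimax” means minimising the worst-case expected divergence over all possible true states. In standard notation (my own, not necessarily the paper’s):

```latex
% Bregman divergence between density matrices \sigma and \rho,
% generated by a strictly convex function f:
D_f(\sigma \,\|\, \rho) = f(\sigma) - f(\rho)
    - \operatorname{Tr}\!\left[ \nabla f(\rho)\,(\sigma - \rho) \right]

% Choosing f(\rho) = \operatorname{Tr}[\rho \log \rho] recovers the
% quantum relative entropy as a special case.

% Minimax risk: the best achievable worst-case expected divergence,
% over all estimators \hat{\rho} built from the measurement data:
R^{*} = \inf_{\hat{\rho}} \sup_{\rho} \,
    \mathbb{E}\left[ D_f(\rho \,\|\, \hat{\rho}) \right]
```

Different choices of f recover familiar figures of merit, which is why proving theorems at this level of generality covers many previously separate cases at once.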

Both Maria and Akram are presenting their work next week at AQIS 2018 in Nagoya, Japan.

Quantum computing worst case scenario: we are Lovelace and Babbage

As we approach the peak of the second hype cycle of quantum computing, I thought it might be useful to consider the possible analogies to other technological timelines of the past. Here are three.

Considered Realism

We look most like Lovelace and Babbage, historical figures before their time. That is, many conceptual, technological, and societal shifts need to happen before—hundreds of years from now—future scientists say “hey, they were on to something”.

Charles Babbage is often described as the “father of the computer” and he is credited with inventing the digital computer. You might be forgiven, then, if you thought he actually built one. The Analytical Engine, Babbage’s proposed general purpose computer, was never built. Ada Lovelace is credited with creating the first computer program. But, again, the computer didn’t exist. So the program is not what you are currently imagining—probably, like, Microsoft Excel, but with parchment?

By the time computing began in earnest, Lovelace and Babbage were essentially forgotten. Eventually, historians restored them to their former glory—and rightfully so as they were indeed visionaries. Lovelace anticipated many ideas in theoretical computer science. However, the academic atmosphere at the time lacked the language and understanding to appreciate it.

Perhaps the same is true of quantum computation? After all, we love to tout the mystery of it all. Does this point to a lack of understanding comparable to that in computing 200 years ago?

This I see as the worst case scenario for quantum computation. We are missing several conceptual—and possibly societal—ideas to articulate this thing which obviously has merit. Eventually, humanity will have a quantum computer. But, will that future civilisation look at us as their contemporaries or a bunch of idiots mostly interested in killing each other while a few of our social elite played with ideas of quantum information?

Cautious Optimism

We are on the cusp of a quantum AI winter. We’re in for a long calm before any storm.

This is probably where most academic quantum scientists sit. We’ve seen 10-year roadmaps, 20-year roadmaps, even 50-year roadmaps. The truth is that every “scalable” proposal for quantum technology contains a little magic. We really don’t know what secret sauce is going to suddenly scale us up to a quantum computer.

On the other hand, very very few scientists believe quantum computing to be impossible—it’s going to happen eventually. At the same time, most would also not bet their own money on it happening any time soon. And, if most scientists are correct, the hype doesn’t match reality and we’re headed for a crash—a crash in funding, a crash in interest, and—worst of all—a crash in trust.

Some would argue that there are too many basic science questions unanswered before we harness the full potential of this theory that even its practitioners continue to call strange, weird, and counterintuitive. The science will march on anyway, though. Memes with truth and merit have a habit of slow and steady longevity. The ideas will evolve and—much like AI—eventually become mainstream, probably in our lifetime.

Unabated Opportunism

We will follow the same steady forward march that digital computers did the past 50 years.

If you are involved with a start-up company with an awkwardly placed “Q” in its name, this is where you sit. You believe our current devices are “quantum ENIAC machines”. Following the historical trajectory of classical computers, we just need some competitive players making a steady stream of breakthroughs and—voila!—quantum iPads for your alchemy simulations in no time. Along the way, we will reap continuing benefits from the spin-offs of quantum tech.

This is the quantum tech party line: quantum supremacy (yep, that’s a term of art now) is near. We are on the precipice of a technological—no, societal—revolution. It’s a new space race with equally high stakes. Get your Series A while the gettin’s good.

Like it or not, this is the best case scenario for the field. Scientists like to argue about what the “true” resource for quantum computation is. Turns out, it was money all along. Perhaps the hype will create a self-fulfilling prophecy that draws the hobbyists and tinkerers that fueled much of the digital revolution. Can we engineer such a situation? I think we better find that out sooner rather than later.