When will we have a quantum computer? Never, with that attitude

We are quantum drunks under the lamp post—we are only looking at stuff that we can shine photons on.

In a recently posted paper, M.I. Dyakonov outlines a simplistic argument for why quantum computing is impossible. It’s so far off the mark that it’s hard to believe that he’s even thought about math and physics before. I’ll explain why.


Find a coin. I know. Where, right? I actually had to steal one from my kid’s piggy bank. Flip it. I got heads. Flip it again. Heads. Again. Tails. Again, again, again… HHTHHTTTHHTHHTHHTTHT. Did you get the same thing? No, of course you didn’t. That feels obvious. But why?

Let’s do some math. Wait! Where are you going? Stay. It will be fun. Actually, it probably won’t. I’ll just tell you the answer then. There are about 1 million different combinations of heads and tails in a sequence of 20 coin flips. The chance that we would get the same string of H’s and T’s is about 1 in a million. You might as well play the lottery if you feel that lucky. (You’re not that lucky, by the way, so don’t waste your money.)

Now imagine 100 coin flips, or maybe a nice round number like 266. With just 266 coin flips, the number of possible sequences of heads and tails is just larger than the number of atoms in the entire universe. Written in plain English the number is 118 quinvigintillion 571 quattuorvigintillion 99 trevigintillion 379 duovigintillion 11 unvigintillion 784 vigintillion 113 novemdecillion 736 octodecillion 688 septendecillion 648 sexdecillion 896 quindecillion 417 quattuordecillion 641 tredecillion 748 duodecillion 464 undecillion 297 decillion 615 nonillion 937 octillion 576 septillion 404 sextillion 566 quintillion 24 quadrillion 103 trillion 44 billion 751 million 294 thousand 464. Holy fuck!

So obviously we can’t write them all down. What if we just tried to count them one by one, one each second? We couldn’t do it alone, but what if all people on Earth helped us? Let’s round up and say there are 10 billion of us. That wouldn’t do it. What if each of those 10 billion people had a computer that could count 10 billion sequences per second instead? Still no. OK, let’s say, for the sake of argument, that there were 10 billion other planets like Earth in the Milky Way and we got all 10 billion people on each of the 10 billion planets to count 10 billion sequences per second. What? Still no? Alright, fine. What if there were 10 billion galaxies each with these 10 billion planets? Not yet? Oh, fuck off.

Even if there were 10 billion universes, each of which had 10 billion galaxies, which in turn had 10 billion habitable planets, which happened to have 10 billion people, all of which had 10 billion computers, which could count 10 billion sequences per second, it would still take more than 100 times the age of all those universes to count all the possible sequences of just 266 coin flips. Mind. Fucking. Blown.
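If you don’t believe me, here is the back-of-the-envelope arithmetic in Python, using the same round numbers as above and roughly 13.8 billion years for the age of a universe:

```python
# Counting 2**266 sequences with 10 billion of everything, 10 billion per second.
sequences = 2**266                               # possible outcomes of 266 coin flips
rate = (10**10)**6                               # universes x galaxies x planets
                                                 #   x people x computers x counts/sec
age_of_universe = 13.8e9 * 365.25 * 24 * 3600    # about 4.4e17 seconds

print(f"{sequences:.3e} sequences")                               # ~1.186e+80
print(f"{sequences / rate / age_of_universe:.0f} universe-ages")  # a few hundred
```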

Why am I telling you all this? The point I want to get across is that humanity’s knack for pattern finding has given us the false impression that life, nature, the universe, or whatever, is simple. It’s not. It’s really fucking complicated. But like a drunk looking for their keys under the lamp post, we only see the simple things because that’s all we can process. The simple things, however, are the exception, not the rule.

Suppose I give you a problem: simulate the outcome of 266 coin tosses. Do you think you could solve it? Maybe you are thinking, well you just told me that I couldn’t even hope to write down all the possibilities—how the hell could I hope to choose one of them? Fair. But, then again, you have the coin and 10 minutes to spare. As you solve the problem, you might realize that you are in fact a computer. You took an input, you are performing the steps of an algorithm, and you will soon produce an output. You’ve solved the problem.
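Or, if your ten minutes are precious, here is the same algorithm outsourced to silicon (assuming you trust Python’s pseudorandom coin as much as the one from the piggy bank):

```python
# Simulate 266 independent fair coin flips.
import random

print("".join(random.choice("HT") for _ in range(266)))
```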

A problem you definitely could not solve is to simulate 266 coin tosses if the outcome of each toss depended on the outcomes of the previous tosses in an arbitrary way, as if the coin had a memory. Now you have to keep track of the possibilities, which we just decided was impossible. Well, not impossible, just really really really time consuming. And all the ways that one toss could depend on previous tosses are even harder to count—in fact, they are uncountable. One situation where it is not difficult is the one most familiar to us—when each coin toss is completely independent of all previous and future tosses. This seems like the only obvious situation because it is the only one we are familiar with. But we are only familiar with it because it is one we know how to solve.
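For contrast, here is a toy coin with a memory. The rule below is my own invention and only looks back one flip, which is exactly why it stays easy; a coin whose bias depended arbitrarily on its entire history would need a lookup table with one entry per possible history, which is the monster we just gave up trying to count.

```python
# A coin with a one-flip memory: it repeats the previous outcome 80% of the time.
import random

def flip(history, stickiness=0.8):
    """Return the next outcome given the string of previous outcomes."""
    if not history:
        return random.choice("HT")
    if random.random() < stickiness:
        return history[-1]                      # repeat the last outcome
    return "T" if history[-1] == "H" else "H"   # otherwise switch

history = ""
for _ in range(266):
    history += flip(history)
print(history)
```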

Life’s complicated in general, but not so if we stay on the narrow paths of simplicity. Computers, deep down in their guts, are making sequences that look like those of coin flips. Computers work by flipping transistors on and off. But your computer will never produce every possible sequence of bits. It stays on the simple path, or crashes. There is nothing innately special about your computer which forces it to do this. We never would have built computers that couldn’t solve problems quickly. So computers only work at solving the problems we have found can be solved, because we are at the steering wheel, forcing them toward the problems which appear effortless.

In quantum computing it is no different. It can be in general very complicated. But we look for problems that are solvable, like flipping quantum coins. We are quantum drunks under the lamp post—we are only looking at stuff that we can shine photons on. A quantum computer will not be an all-powerful device that solves all possible problems by controlling more parameters than there are particles in the universe. It will only solve the problems we design it to solve, because those are the problems that can be solved with limited resources.

We don’t have to track (and “keep under control”) all the possibilities, as Dyakonov suggests we must, just as your digital computer does not need to track all its possible configurations. So next time someone tells you that quantum computing is complicated because there are so many possibilities involved, remind them that all of nature is complicated—the success of science is finding the patches of simplicity. In quantum computing, we know which path to take. It’s still full of debris and we are smelling flowers and picking strawberries along the way, so it will take some time—but we’ll get there.

 

Estimation… with quantum technology… using machine learning… on the blockchain

A snarky academic joke which might actually be interesting (but still a snarky joke).

Abstract

A device verification protocol using quantum technology, machine learning, and blockchain is outlined. The self-learning protocol, SKYNET, uses quantum resources to adaptively come to know itself. Data integrity is guaranteed with blockchain technology, using FelixBlochChain.

Introduction

You may have a problem. Maybe you’re interested in leveraging the new economy to maximize your B2B ROI in the mission-critical logistics sector. Maybe, like some of the administration at an unnamed university, you like to annoy your faculty with bullshit about innovation mindshare in the enterprise market. Or, maybe like me, you’d like to solve the problem of verifying the operation of a physical device. Whatever your problem, you know about the new tech hype: quantum, machine learning, and blockchain. Could one of these solve your problem? Could you really impress your boss by suggesting the use of one of these buzzwords? Yes. Yes, you can.

Here I will solve my problem using all the hype. This is the ultimate evolution of disruptive tech. Synergy of quantum and machine learning is already a hot topic1. But this is all in-the-box. Now maybe you thought I was going outside-the-box to quantum agent-based learning or quantum artificial intelligence—but, no! We go even deeper, looking into the box that was outside the box—the meta-box, as it were. This is where quantum self-learning sits. Self-learning is a protocol wherein the quantum device itself comes to learn its own description. The protocol is called Self Knowing Yielding Nearly Extremal Targets (SKYNET). If that was hard to follow, it is depicted below.

hypebox
Inside the box is where the low hanging fruit lies—pip install tensorflow type stuff. Outside the box is true quantum learning, where a “quantum agent” lives. But even further outside-the-meta-box is this work, quantum self-learning—SKYNET.

Blockchain is the technology behind bitcoin2 and many internet scams. The core protocol was quickly realised to be applicable beyond digital currency and has been suggested to solve problems in health, logistics, bananas, and more. Here I introduce FelixBlochChain—a data ledger which stores runs of experimental outcomes (transactions) in blocks. The data chain is an immutable database and can easily be delocalised. As a way to solve the data integrity problem, this could be one of the few legitimate, non-scammy uses of blockchain. So, if you want to give me money for that, consider this the whitepaper.

Problem

 

99probs
Above: the conceptual problem. Below: the problem cast in its purest form using the formalism of quantum mechanics.

The problem is succinctly described above. Naively, it seems we desire a description of an unknown process. A complete description of such a process using traditional means is known as quantum process tomography in the physics community3. However, by applying some higher-order thinking, the envelope can be pushed and a quantum solution can be sought. Quantum process tomography is, after all, data-intensive and not scalable.

The solution proposed is shown below. The paradigm shift is a reverse-datafication which breaks through the clutter of the data-overloaded quantum process tomography.

fuckyeahquantum
The proposed quantum-centric approach, called self-learning, wherein the device itself learns to know itself. Whoa. 

It might seem like performing a measurement of \{|\psi\rangle\!\langle \psi|, \mathbb I - |\psi\rangle\!\langle \psi|\} is the correct choice since this would certainly produce a deterministic outcome when V = U. However, there are many other unitaries which would do the same for a fixed choice of |\psi\rangle. One solution is to repeat the experiment many times with a complete set of input states. But this gets us nearly back to quantum process tomography—killing any advantage that might have been had with our quantum resource.

Solution

quantumintensifies
Schematic of the self-learning protocol, SKYNET. Notice me, Senpai!

This is addressed by drawing inspiration from ancilla-assisted quantum process tomography4. This is depicted above. Now the naive looking measurement, \{|\mathbb I\rangle\!\langle\mathbb I |, \mathbb I - |\mathbb I\rangle\!\langle \mathbb I|\}, is a viable choice as

|\langle\mathbb I |V^\dagger U \otimes \mathbb I |\mathbb I\rangle|^2 = |\langle V | U\rangle|^2,

where |U\rangle = U\otimes \mathbb I |\mathbb I\rangle. This is exactly the entanglement fidelity, or channel fidelity5. Now, we have |\langle V | U\rangle| = 1 \Leftrightarrow U = V (up to an irrelevant global phase), and we’re in business.
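If you don’t trust the algebra, here is a quick numerical sanity check of that identity for a single qubit. This is my own check, not part of the protocol; |\mathbb I\rangle is taken to be the normalised maximally entangled state.

```python
# Check that |<I| kron(V^dag U, I) |I>|^2 = |Tr(V^dag U)|^2 / d^2 for random unitaries.
import numpy as np

rng = np.random.default_rng(42)
d = 2

def random_unitary(dim):
    """A random unitary from the QR decomposition of a Gaussian matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix the phases of the columns

U, V = random_unitary(d), random_unitary(d)
phi = np.eye(d).reshape(d * d) / np.sqrt(d)        # |I> = sum_i |ii> / sqrt(d)

lhs = abs(phi.conj() @ np.kron(V.conj().T @ U, np.eye(d)) @ phi)**2
rhs = abs(np.trace(V.conj().T @ U))**2 / d**2
print(np.isclose(lhs, rhs))                        # True
```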

Though |\langle V | U\rangle| is not accessible directly, it can be approximated with the estimator P(V) = \frac{n}{N}, where N is the number of trials and n is the number of successes. Clearly, \mathbb E[P(V)] = |\langle V | U\rangle|^2.

Thus, we are left with the following optimisation problem:
\max_{V} \mathbb E[P(V)],

subject to V^\dagger V = \mathbb I. This is exactly the type of problem suited to the gradient-free cousin of stochastic gradient ascent (of deep learning fame), called simultaneous perturbation stochastic approximation6. I’ll skip to the conclusion and give you the protocol. Each epoch consists of two experiments and an update rule:

V_{k+1} = V_{k} + \frac12\alpha_k \beta_k^{-1} (P(V_k+\beta_k \triangle_k) - P(V_k-\beta_k \triangle_k))\triangle_k.

Here V_0 is some arbitrary starting unitary (I chose \mathbb I). The gain sequences \alpha_k, \beta_k are chosen as prescribed by Spall6. The main advantage of this protocol is \triangle_k, which is a random direction in unitary-space. Each epoch, a random direction is chosen, which guarantees an unbiased estimate of the gradient and avoids all the measurements necessary to estimate the exact gradient. As applied to the estimation of quantum gates, this can be seen as a generalisation of Self-guided quantum tomography7 beyond pure quantum states.
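To make the moving parts concrete, here is a minimal single-qubit simulation of the SPSA loop in Python. This is my own sketch, not the code of reference 8: to keep V unitary I perturb a Hermitian generator H with V = exp(iH) rather than V itself, and the gain sequences are generic Spall-style choices.

```python
# Minimal single-qubit SPSA sketch of the self-learning loop (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
d = 2          # one qubit
N = 100        # shots per fidelity estimate (the "bits per epoch")

def unitary(h):
    """V = exp(iH) for a Hermitian matrix H, so V is unitary by construction."""
    vals, vecs = np.linalg.eigh(h)
    return vecs @ np.diag(np.exp(1j * vals)) @ vecs.conj().T

def random_hermitian():
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

def P(h, u):
    """Shot-noise estimate of the channel fidelity |Tr(V^dag U)|^2 / d^2."""
    f = abs(np.trace(unitary(h).conj().T @ u))**2 / d**2
    return rng.binomial(N, min(f, 1.0)) / N     # n successes out of N trials

U = unitary(random_hermitian())                 # the unknown "true" gate
h = np.zeros((d, d), dtype=complex)             # V_0 = I

for k in range(1, 2001):                        # each epoch: two experiments + update
    alpha_k = 0.5 / k**0.602                    # generic Spall-style gain sequences
    beta_k = 0.1 / k**0.101
    delta_k = random_hermitian()                # random direction each epoch
    grad = (P(h + beta_k * delta_k, U) - P(h - beta_k * delta_k, U)) / (2 * beta_k)
    h = h + alpha_k * grad * delta_k            # ascend the estimated fidelity

print("final fidelity:", abs(np.trace(unitary(h).conj().T @ U))**2 / d**2)
```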

To ensure integrity of the data—to make sure I’m not lying, fudging the data, p-hacking, or post-selecting—a blochchain-based solution is implemented. In analogy with the original bitcoin proposal, each experimental datum is a transaction. After a set number of epochs, a block is added to the datachain. Since this is not implemented in a peer-to-peer network, I have the datachain—called FelixBlochChain—tweet the block hashes at @FelixBlochChain. This provides a timestamp and validates that the data used to produce the final result is the data that was actually taken.
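For the curious, a toy version of the ledger looks something like this. It is my own sketch, not the actual FelixBlochChain code: the block fields are invented for illustration and the tweeting step is left out.

```python
# Toy hash-chained data ledger in the spirit of FelixBlochChain (illustrative only).
import hashlib
import json

class DataChain:
    def __init__(self):
        genesis = {"index": 0, "prev": "0" * 64, "data": []}
        genesis["hash"] = self._hash(genesis)
        self.blocks = [genesis]
        self.pending = []                       # experimental outcomes ("transactions")

    @staticmethod
    def _hash(block):
        payload = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

    def record(self, outcome):
        """Log one experimental datum."""
        self.pending.append(outcome)

    def close_block(self):
        """Seal the pending data into a block chained to the previous block's hash."""
        block = {"index": len(self.blocks),
                 "prev": self.blocks[-1]["hash"],
                 "data": self.pending}
        block["hash"] = self._hash(block)
        self.blocks.append(block)
        self.pending = []
        return block["hash"]                    # what gets published (tweeted) as a timestamp

chain = DataChain()
chain.record({"epoch": 1, "outcome": 1})
print(chain.close_block())                      # a 64-character hex block hash
```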

Results

results
SKYNET finds a description of its own process. Each N is a different number of bits per epoch. The shaded region is the interquartile range over 100 trials using a randomly selected “true” gate. The solid black lines are fits which suggest the expected 1/\sqrt{N} performance.

Speaking of the final result, it seems SKYNET works quite well, as shown above. There is still much to do—but now that SKYNET is online, maybe that’s the least of our worries. In any case, go download the source8 and have fun!

Acknowledgements

The author thanks the quantum technology start-up community for inspiring this work. I probably shouldn’t say this was financially supported by ARC DE170100421.


  1. V. Dunjko and H. J. Briegel, Machine learning and artificial intelligence in the quantum domain, arXiv:1709.02779 (2017)
  2. S. Nakamoto, Bitcoin: A peer-to-peer electronic cash system, (2008), bitcoin.org
  3. I. L. Chuang and M. A. Nielsen, Prescription for experimental determination of the dynamics of a quantum black box, Journal of Modern Optics 44, 2455 (1997)
  4. J. B. Altepeter, D. Branning, E. Jeffrey, T. C. Wei, P. G. Kwiat, R. T. Thew, J. L. O’Brien, M. A. Nielsen, and A. G. White, Ancilla-assisted quantum process tomography, Physical Review Letters 90, 193601 (2003)
  5. B. Schumacher, Sending entanglement through noisy quantum channels, arXiv:quant-ph/9604023 (1996)
  6. J. C. Spall, Multivariate stochastic approximation using a simultaneous perturbation gradient approximation, IEEE Transactions on Automatic Control 37, 332 (1992)
  7. C. Ferrie, Self-guided quantum tomography, Physical Review Letters 113, 190404 (2014)
  8. The source code for this work is available at https://gist.github.com/csferrie/1414515793de359744712c07584c6990

Milking a new theory of physics

For the first time, physicists have found a new fundamental state of cow, challenging the current standard model. Dubbed the cubic cow, the ground-breaking new discovery is already re-writing the rules of physics.

A team of physicists at Stanford and Harvard University have nothing to do with this, but you are probably already impressed by the name drop. Dr. Chris Ferrie, who is currently between jobs, together with a team of his own children, stumbled upon the discovery, which was recently published in Nature Communications*.

sphericalcow2
Image credit: Ingrid Kallick

The spherical theory of cow had stood unchallenged for over 50 years, and even longer if a Russian physicist is reading this. The spherical cow theory led to many other discoveries based on O(3) symmetry. However, spherical cows have not proven practically useful from a technological perspective. “Spherical cows are prone to natural environmental errors, whereas our discovery digitizes the symmetry of cow,” Ferrie said.

Just as the digital computer has revolutionized computing technology, this new digital cow model could revolutionize innovation disrupting cross-industry ecosystems, or something.

Lead author Maxwell Ferrie already has far-reaching applications in mind for the result. “I like dinosaurs,” he said. Notwithstanding these future aspirations, the team is sure to be milking this new theory for all it’s worth.

* Not really, but this dumping ground for failed hypesearch has a bar so low you might as well believe it.

No one is going to take you seriously

I make jokes. I do science. I make jokes while doing science.

Recently, at the Australian Institute of Physics Congress I presented this poster:

I think it was generally well received. Of course it got lots of double takes and laughs, but was it a good scientific poster? One of my senior colleagues was of two minds, eventually concluding with some familiar life advice:

Yes, I admit it is funny. But, eventually it will catch up with you. No one is going to take you seriously. You will not be seen as a serious scientist.

Good—because I am not a serious scientist. I am a (hopefully) humorous scientist, but a scientist nonetheless.

I’m going to get straight to the point with my own advice: avoid serious scientists at all costs. They are either psychopaths or sycophants. I can’t find it in me to be either. So I’ll continue doing science, and having a bit of fun while I’m at it. You only science once, right?