I gave up social media for a month. This happened next.

Nothing. Nothing, and it was glorious. If you haven’t tried giving up social media, I highly recommend it. But now I’m back and — as you can see from the awesome clickbait title — I haven’t lost it. Why am I back and — for that matter — why did I leave? Read on.

First, a little back story for context. I joined social media in earnest about 5 years ago after I published my first book. I thought that I needed to be out there promoting my books. Around the same time, a growing number of academics were also adopting social media. I thought then that I could use social media to promote my academic work as well. Certainly, the number of eyes seeing my work increased with my presence on social media. But the big question was always left unanswered — was it worth the time spent?

This is a very difficult question to answer. I still don’t have the answer and I don’t think I ever will. In part, this is because not all time spent on social media has equal value. As my children get ever closer to the age when all of their peers have a social-media-connected phone, I’ve become more and more interested in social media, who uses it, and what they use it for. This has been by no means a controlled — or even exhaustive — study, but I learned enough that I scared myself right off the platforms. I paid close attention as I used (mostly) Twitter, Instagram, and Facebook. I talked to colleagues at the university, other authors and parents, and observed people in public. Here is what I learned.

The uses of social media form a multidimensional spectrum, but there are some easy-to-identify extreme behaviors:

  • Use it as a megaphone to broadcast your message or brand without any further engagement.
  • Use it to pass time, staring zombie-like at your phone as you scroll endlessly through your feed, which is curated by an algorithm maximizing the number of advertisements you see.
  • Use it to troll by intentionally offending people.
  • Use it to communicate with friends, family, or colleagues.
  • Use it to engage your audience.

In an ideal social network, there would be mutually beneficial interaction between creators and consumers of media. In reality, though, it’s just a vicious cycle of memes, with the most controversial or sensational going viral. It’s like 24-hour news, but a million times worse. It’s not a nice place to be. So, I left.

But what was the first thought to enter my mind after making this decision? Hey, I should tweet about this. Oops. I had become addicted to social media. Luckily, I foresaw this, deleted the apps from my phone, and had my browser forget my password. This was enough of a barrier to keep me away, and I stayed away for a month.

It was a great month, too. I was much happier and I got heaps done. It wasn’t just that I got back all the time spent on social media, but that social media was a huge distraction. Every time I had a break in my train of thought, or felt a little bored, or wanted a little dopamine hit from some likes, I’d pick up my phone or open a new tab. Even if I spent only a minute there, it felt like hours were lost, because my train of thought was gone entirely.

So, given all that, clearly I made the correct decision in leaving social media, right? Well, no. The real lesson I have learned is that I wasn’t using social media optimally. There is value in being on social media, but you must be vigilant. And so, I’m back — ready to make the best of this mess called social media.

When will we have a quantum computer? Never, with that attitude

We are quantum drunks under the lamp post—we are only looking at stuff that we can shine photons on.

In a recently posted paper, M.I. Dyakonov outlines a simplistic argument for why quantum computing is impossible. It’s so far off the mark that it’s hard to believe that he’s even thought about math and physics before. I’ll explain why.


Find a coin. I know. Where, right? I actually had to steal one from my kid’s piggy bank. Flip it. I got heads. Flip it again. Heads. Again. Tails. Again, again, again… HHTHHTTTHHTHHTHHTTHT. Did you get the same thing? No, of course you didn’t. That feels obvious. But why?

Let’s do some math. Wait! Where are you going? Stay. It will be fun. Actually, it probably won’t. I’ll just tell you the answer then. There are about 1 million different combinations of heads and tails in a sequence of 20 coin flips. The chances that we would get the same string of H’s and T’s are about 1 in a million. You might as well play the lottery if you feel that lucky. (You’re not that lucky, by the way, so don’t waste your money.)

Now imagine 100 coin flips, or maybe a nice round number like 266. With just 266 coin flips, the number of possible sequences of heads and tails is just larger than the number of atoms in the entire universe. Written in plain English the number is 118 quinvigintillion 571 quattuorvigintillion 99 trevigintillion 379 duovigintillion 11 unvigintillion 784 vigintillion 113 novemdecillion 736 octodecillion 688 septendecillion 648 sexdecillion 896 quindecillion 417 quattuordecillion 641 tredecillion 748 duodecillion 464 undecillion 297 decillion 615 nonillion 937 octillion 576 septillion 404 sextillion 566 quintillion 24 quadrillion 103 trillion 44 billion 751 million 294 thousand 464. Holy fuck!

So obviously we can’t write them all down. What if we just tried to count them one by one, one each second? We couldn’t do it alone, but what if all the people on Earth helped us? Let’s round up and say there are 10 billion of us. That wouldn’t do it. What if each of those 10 billion people had a computer that could count 10 billion sequences per second instead? Still no. OK, let’s say, for the sake of argument, that there were 10 billion other planets like Earth in the Milky Way and we got all 10 billion people on each of the 10 billion planets to count 10 billion sequences per second. What? Still no? Alright, fine. What if there were 10 billion galaxies each with these 10 billion planets? Not yet? Oh, fuck off.

Even if there were 10 billion universes, each of which had 10 billion galaxies, which in turn had 10 billion habitable planets, which happened to have 10 billion people, all of which had 10 billion computers, which could count 10 billion sequences per second, it would still take over 100 times the age of all those universes to count the number of possible sequences in just 266 coin flips. Mind. Fucking. Blown.
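
Don’t take my word for it. Python’s integers don’t overflow, so you can check all of this absurd arithmetic yourself, using the rough figures of 10^80 atoms in the universe and 4.35 × 10^17 seconds for its age:

    # Sanity-check the coin-flip counting claims with big integers.
    seqs = 2 ** 266             # distinct outcomes of 266 coin flips
    print(2 ** 20)              # ~1 million sequences for 20 flips
    print(seqs > 10 ** 80)      # True: more than atoms in the universe

    # 10 billion universes x galaxies x planets x people x computers,
    # each counting 10 billion sequences per second:
    rate = (10 ** 10) ** 6
    age = 4.35e17               # rough age of a universe, in seconds
    print(seqs / rate / age)    # over 100 universe-ages (about 270, in fact)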

Why am I telling you all this? The point I want to get across is that humanity’s knack for pattern finding has given us the false impression that life, nature, the universe, or whatever, is simple. It’s not. It’s really fucking complicated. But like a drunk looking for their keys under the lamp post, we only see the simple things because that’s all we can process. The simple things, however, are the exception, not the rule.

Suppose I give you a problem: simulate the outcome of 266 coin tosses. Do you think you could solve it? Maybe you are thinking, well, you just told me that I couldn’t even hope to write down all the possibilities—how the hell could I hope to choose one of them? Fair. But, then again, you have the coin and 10 minutes to spare. As you solve the problem, you might realize that you are in fact a computer. You took an input, you are performing the steps of an algorithm, and you will soon produce an output. You’ve solved the problem.
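
And if you’d rather delegate, here is the same algorithm with a computer standing in for you and the coin:

    import random

    # Simulate 266 independent fair coin tosses: input, algorithm, output.
    flips = "".join(random.choice("HT") for _ in range(266))
    print(flips)  # one of ~1.2 x 10^80 possibilities, chosen without listing them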

A problem you definitely could not solve is to simulate 266 coin tosses if the outcome of each toss depended on the outcomes of the previous tosses in an arbitrary way, as if the coin had a memory. Now you have to keep track of the possibilities, which we just decided was impossible. Well, not impossible, just really, really, really time consuming. But all the ways that one toss could depend on previous tosses are yet more difficult to count—in fact, they are uncountable. One situation where it is not difficult is the one most familiar to us—when each coin toss is completely independent of all previous and future tosses. This seems like the only obvious situation because it is the only one we are familiar with. But we are only familiar with it because it is one we know how to solve.
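
To make the contrast concrete, here is a toy coin with memory. The rule is made up (heads gets more likely the more heads you have already seen), and it is easy to simulate precisely because it is compactly specified. An arbitrary dependence would instead need a table with one probability for every possible history, and that table is exactly the thing we just agreed you cannot write down:

    import random

    # A coin with memory: the chance of heads depends on the history so far.
    # This rule is one cheap example; an *arbitrary* rule would need a table
    # with one probability for every possible history -- exponentially many.
    history = []
    for _ in range(20):
        p_heads = (1 + history.count("H")) / (2 + len(history))
        history.append("H" if random.random() < p_heads else "T")
    print("".join(history))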

Life’s complicated in general, but not so if we stay on the narrow paths of simplicity. Computers, deep down in their guts, are making sequences that look like those of coin flips. Computers work by flipping transistors on and off. But your computer will never produce every possible sequence of bits. It stays on the simple path, or crashes. There is nothing innately special about your computer which forces it to do this. We never would have built computers that couldn’t solve problems quickly. So computers only work at solving the problems we have found can be solved, because we are at the steering wheel, steering them toward the problems that appear effortless.

In quantum computing it is no different. It can be in general very complicated. But we look for problems that are solvable, like flipping quantum coins. We are quantum drunks under the lamp post—we are only looking at stuff that we can shine photons on. A quantum computer will not be an all-powerful device that solves all possible problems by controlling more parameters than there are particles in the universe. It will only solve the problems we design it to solve, because those are the problems that can be solved with limited resources.

We don’t have to track (and “keep under control”) all the possibilities, as Dyakonov suggests, just as your digital computer does not need to track all its possible configurations. So next time someone tells you that quantum computing is complicated because there are so many possibilities involved, remind them that all of nature is complicated—the success of science is finding the patches of simplicity. In quantum computing, we know which path to take. It’s still full of debris and we are smelling flowers and picking the strawberries along the way, so it will take some time—but we’ll get there.

 

The point of physics

Something I lost sight of for a long time is the reason I study physics, or the reason I started studying it anyway. I got into it for no reason other than it was an exciting application of mathematics. I was in awe, not of science, but of the power of mathematics.

Now there are competing pressures. Sometimes I find myself “doing physics” for reasons that can best be described as practical. Fine—I’m a pragmatic person after all. But practicality here is often relative to a set of arbitrarily imposed constraints, such as requiring a CV full of publications in the highest-ranked journals in order to be a good academic boi.

You may say that’s life. We all start with naive enthusiasm and end up doing monotonous things we don’t enjoy. But then we tell ourselves, and each other, lies about it being in service of some higher purpose. Scientists see it stated so often that they start to repeat it, and even start to believe it. I know I’ve written and repeated thoughtless platitudes about science many times. It’s almost necessary to convince yourself of these myths as you struggle through your school or your job. Why am I doing this, you wonder, because it certainly doesn’t feel rewarding in those moments.

On the other hand, many people are comfortable decoupling their passion from their job. Do the job to earn money which funds your true passions. Not all passions provide the immediate monetary returns one needs to live a comfortable life after all. So you can study science to learn the skills that someone will pay you to employ. There are many purely practical reasons to study physics, for example, which have nothing to do with answering to some higher calling. This certainly seems more honest than having to lie to yourself when expectations fail.

(I should point out that if you are one of those people currently struggling through graduate school, academia is not the only way—maybe not even the best way—to sate your hunger for knowledge, or just solve cool maths problems.)

A lot of scientists, teachers, and university recruiters get this wrong. There is a huge difference between being curious about nature and reality and suggesting it is morally good to devote one’s life to playing a small part in answering specific questions about them.

Einstein did not develop general relativity to usher in a new era of gravitational wave astronomy, as cool as that is. He did it because he was obsessed with answering his own questions, driven by his insatiable imagination. Even the roots of the now-enormous collaboration of scientists which detected gravitational waves lie in a water-cooler conversation among a few physicists.

In other words, we don’t actually do things through consensus agreement about their potential value to a higher power called science. We think about doing certain things because we are curious, because we want to see what will happen, or because we can.

Like all other myths scientists and their adoring followers like to deride, science as a moral imperative is just that—a myth. Might we not get further with honesty, by telling ourselves and others that we are just people—people trying to do cool shit? The great things will come as they always have, emerging from complex interactions—not by everyone collectively following a blinding light at the end of the tunnel, but by lighting the tunnel itself with millions of unique candles.

The minimal effort explanation of quantum computing

Quantum computing is really complicated, right? Far more complicated than conventional computing, surely. But, wait. Do I even understand how my laptop works? Probably not. I don’t even understand how a doorknob works. I mean, I can use a doorknob. But don’t ask me to design one, or even draw a picture of the inner mechanism.

We have this illusion (it has a technical name: the illusion of explanatory depth) that we understand things we know how to use. We don’t. Think about it. Do you know how a toilet works? A freezer? A goddamn doorknob? If you think you do, try to explain it. Try to explain how you would build it. Use pictures if you like. Change your mind about understanding it yet?

We don’t use quantum computers so we don’t have the illusion we understand how they work. This has two side effects: (1) we think conventional computing is generally well-understood or needs no explanation, and (2) we accept the idea that quantum computing is hard to explain. This, in turn, causes us to try way too hard at explaining it.

Perhaps by now you are thinking maybe I don’t know how my own computer works. Don’t worry, I googled it for you. This was the first hit.

Imagine if a computer were a person. Suppose you have a friend who’s really good at math. She is so good that everyone she knows posts their math problems to her. Each morning, she goes to her letterbox and finds a pile of new math problems waiting for her attention. She piles them up on her desk until she gets around to looking at them. Each afternoon, she takes a letter off the top of the pile, studies the problem, works out the solution, and scribbles the answer on the back. She puts this in an envelope addressed to the person who sent her the original problem and sticks it in her out tray, ready to post. Then she moves to the next letter in the pile. You can see that your friend is working just like a computer. Her letterbox is her input; the pile on her desk is her memory; her brain is the processor that works out the solutions to the problems; and the out tray on her desk is her output.

That’s all. That’s the basic first-layer understanding of how this device you use every day works. Now google “how does a quantum computer work” and you are met right out of the gate with an explanation of theoretical computer science, Moore’s law, the physical limits of simulation, and so on. And we haven’t even gotten to the quantum part yet. There we find qubits and parallel universes, spooky action at a distance, exponential growth, and, wow, holy shit, no wonder people are confused.

What is going on here? Why do we try so hard to explain every detail of quantum physics as if it is the only path to understanding quantum computation? I don’t know the answer to that question. Maybe we should ask a sociologist. But let me try something else. Let’s answer the question “how does a quantum computer work?” at the same level as the answer above to “how does a computer work?” Here we go.

How does a quantum computer work?

Imagine if a quantum computer were a person. Suppose you have a friend who’s really good at developing film. She is so good that everyone she knows posts their undeveloped photos to her. Each morning, she goes to her letterbox and finds a pile of new film waiting for her attention. She piles them up on her desk until she gets around to looking at them. Each afternoon, she takes a photo off the top of the pile, enters a dark room where she works at her perfected craft of film development. She returns with the developed photo and puts this in an envelope addressed to the person who sent her the original film and sticks it in her out tray, ready to post. Then she moves to the next photo in the pile. You can’t watch your friend developing the photos because the light would spoil the process. Your friend is working just like a quantum computer. Her letterbox is her input; the pile on her desk is her classical memory; while the film is with her in the dark room it is her quantum memory; her brain and hands are the quantum processor that develops the film; and the out tray on her desk is her output.

The real magic of quantum computing

By now you have read many articles on quantum computing. Congratulations. You know nothing about quantum computing.

There is a magician on stage. It’s tense. Maybe it’s a primetime TV show and the production value is super high. The celebrity judges look nervous. There is epic build up music as the magician calls their assistant on stage. The assistant climbs into a box that is covered with a velvet blanket. Why a blanket? I mean, isn’t the box good enough? What a pretentious as… forget it, I’m ruining this for myself. OK, so the assistant is in the box with their head and legs sticking out. What the fuck? Who made this box, anyway? Damn it, I’m doing it again. Then—oh shit—is that a saw? What’s going to happen with that? Fuck! No! The assistant’s been cut in half! And then the quantum computer outputs the answer. Wait, what? Where did the quantum computer come from? I don’t know—quantum computing is magic like that.

By now you have read many articles on quantum computing. Congratulations. You know nothing about quantum computing. I know what you are thinking: Whoa, Chris, I wasn’t ready for these truth bombs. Take it easy on us. But I see a problem and I just need to fix it. Or, more likely, call the rental agent to fix it.

You probably think that a qubit can represent a 0 and a 1 at the same time. Or, that quantum computing takes advantage of the strange ability of subatomic particles to exist in more than one state at any time. I can hardly fault you for that. After all, we expect Scientific American and WIRED to be fairly reputable sources. And, I’m not cherry picking here—these were the first two hits after the Wikipedia entry on a Google search of “What is quantum computing?” Nearly every popular account of quantum computing has this “0 and 1 at the same time” metaphor.

I say metaphor because it is certainly not literally true that the things involved in quantum computing—those qubits mentioned above—are 0 and 1 at the same time. Why? Well, for starters, 0 and 1 are defined to be mutually exclusive (that means it’s either one OR the other). Logically, 0 is defined as [NOT 1]. Then 0 AND 1 is equal to [NOT 1] AND 1, which is a false statement. “0 and 1 at the same time” just doesn’t make sense, and it’s false anyway. Next.

OK, so what’s the big deal? We all play fast and loose with words. Surely this little… let me stop you right there, because it gets worse. Much worse.

The Scientific American article linked above then deduces that, “This lets qubits conduct vast numbers of calculations at once, massively increasing computing speed and capacity.” That’s a pretty big logical leap, though you can see how one might make it. Let’s break it down. First, if a qubit can be 0 and 1 at the same time, then two qubits can be 00 and 01 and 10 and 11 at the same time. And three qubits can be 000 and 001 and 010 and 011 and 100 and 101 and 110 and 111 at the same time. And… well, you get the picture. Like mold on that organic bread you bought, exponential growth!

The number of possible ways to set some number of bits, say n of them, is 2^n—a big number. If n = 300, 2^300 is more than the number of atoms in the universe! Think about that. Flip a coin just 300 times and the number of possible ways they could land is unfathomable. And 300 qubits could be all of them at the same time. If you believe that, then it is easy to believe that quantum computers will just calculate every possible solution to your problem at once and pick the right answer. That would be magic. Alas, this is not how quantum computers work.

Lesson 1: don’t take a bad metaphor and draw your own simplistic conclusions from it.

Try this one out from Forbes: “A bit can be at either of the two poles of the sphere, but a qubit can exist at any point on the sphere.” Spot on. This is 100% accurate. But, wait! “So, this means that a computer using qubits can store an enormous amount of information and uses less energy doing so than a classical computer.” The fuck? No. In fact, a qubit cannot be used to store and retrieve more than 1 bit of data. Again, magic, but not how quantum computers work.
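
That last claim is a theorem, not an opinion. Loosely stated, Holevo’s bound says that if you encode data in an ensemble of n-qubit states \rho_x with probabilities p_x, then the information any measurement can retrieve is at most

\chi = S(\rho) - \sum_x p_x S(\rho_x) \le n, \qquad \rho = \sum_x p_x \rho_x,

where S is the von Neumann entropy. So n qubits can never be made to cough up more than n bits.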

Lesson 2: don’t reduce an entire field to one idea and draw your own simplistic conclusions from it.

I can just imagine what you are thinking right now. OK hotshot, how would you explain quantum computing? I’m glad you asked. After bashing a bad analogy, I’m going to use another, better analogy. I like analogies—they are my favorite method of learning. Teaching by analogy is kind of like being in two places at the same time.

Alright, I’m going to tell you the correct analogy between quantum physics and magic. Let’s think about what a magic trick looks like abstractly. The magician, who is highly trained, spends a huge amount of time choreographing a mechanism which is then hidden from the audience. The show begins, the “magic” happens, and we are returned to reality with bafflement. If you are under 20, then you also take a selfie for the Insta #fuckyeahmagic.

Now here is what happens in a quantum computation. A quantum engineer, who is highly trained, spends a huge amount of time choreographing a mechanism which is then hidden from the audience. The show begins, quantum computation happens, and we are returned the answer to our problem. Tada! Quantum computation is magic. Selfie, Insta, #fuckyeahquantum.

Let’s dig into this a bit deeper, though. Why not uncover the quantum computer—open the box—to reveal the mechanism? Well, we can’t. If we “watch” the computation happen, we expose the quantum computer to an environment and this will break the computation. The kinds of things a quantum computer needs to do require complete isolation from the environment. Just like a magician’s trick, if we reveal the mechanism, the magic doesn’t happen.

OK, fine. The “magic” will be lost, but at least I could understand the mechanism, right? Sure, that’s right. But here’s the catch: a magician spends countless hours training and preparing for the trick. Knowing the mechanism doesn’t help you understand how to actually perform the trick. Nor does seeing that the mechanism of quantum computing is some complicated math actually help you understand how it works. And don’t oversimplify it—we already know that doesn’t work.

Let’s look at the example of a sword swallowing illusionist. If you don’t know what I’m talking about, it’s exactly how it sounds—a person puts a sword the length of their torso in their mouth down to the handle. How one figures out they have a proclivity for this talent, I don’t want to know. But what’s the explanation? Don’t worry, I already googled it for you, and it’s simple: “the illusionist positions their head up so that his throat and stomach make a straight line.” Oh, is that it? I’m suddenly unimpressed. So now that you too know how to swallow a sword, are you going to go and do it? I fucking doubt it. That would be stupid—about as stupid as reading a few-sentence description of some “explanation” of quantum computing and then declaring you understand it.

Lesson 3: don’t place your analogy at the level of explanation—place it at the level of the phenomenon. Let your analogy do the work of explanation for you.

If you like figures, I have prepared a lovely summary for you.

Well there you go. Quantum computing isn’t magic, but it can put on a good show. You can learn about how to do the tricks yourself and even perform a few with a little more effort. I suggest starting with the IBM Quantum Experience. Or, start where the real magicians do with Quantum Computing for Babies 😂

One is the loneliest prime number

You can’t prove 1 is, or is not, prime. You have the freedom to choose whether to include 1 as a prime or not and this choice is either guided by convenience or credulity.

I occasionally get some cruel and bitter criticism from an odd source. I’m putting my response here for two reasons: (1) so that I can simply refer them to it and not have to repeat myself or engage in the equally impersonal displeasure of internet arguments, and (2) I think there is something interesting to be learned about mathematics, logic, and knowledge more generally.

It all started when I wrote a very controversial book about an extremely taboo topic: mathematics. In my book ABCs of Mathematics, “P is for Prime”. The short, child-friendly description I gave for this was:

A prime number is only divisible by 1 and itself.

I thought I did a pretty good job of reducing the concept, and the syllable count, down to a level palatable to a young reader. Oh, boy, was I wrong. Enter: the angriest group of people I have met on the internet.

You see, by the given definition, I had to include 1 as a prime number since, as we should all agree, it is divisible only by 1 and itself.
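
Don’t believe me? Transcribe the definition into Python and ask it yourself (the transcription is mine, but it is faithful):

    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    def prime_by_the_book(n):
        # "A prime number is only divisible by 1 and itself."
        return set(divisors(n)) <= {1, n}

    print(prime_by_the_book(7))  # True
    print(prime_by_the_book(6))  # False
    print(prime_by_the_book(1))  # True, and hence the angry mail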

Big mistake. Because, apparently, it has been drilled into people’s heads that this is a grave error, a misconception that can eventually lead young impressionable minds to a life of crime and possibly even death! It might even end up on a list of banned books!

By a vast majority, people love the book. I am generally happy with the response. The baby books I write are not for everyone—I get that. And I do try to take advice from all the feedback I receive on my books. There is always room for improvement. But the intense emotions some people have about the idea of 1 being a prime number are truly perplexing. Here are some examples:

I actually love the book, but there is a big mistake. The number 1 is not a prime number! The book should not be sold like this and needs to be reprinted.

and

1 IS NOT PRIME! How could a supposed math book have an error like this in it? I am disgusted!

Yikes. So what gives? Is 1 prime, or not? The answer is: that’s not a valid question.

Let me explain.

First, let’s look at a typical definition. Compare to, for example, Wikipedia’s entry on prime numbers:

A prime number (or a prime) is a natural number greater than 1 that cannot be formed by multiplying two smaller natural numbers.

Much more precise—no denying that. It’s grammatically correct, but probably hard to parse. I wanted to avoid negative definitions as much as I could in my books. But that’s beside the point. The reason 1 is not a prime is that the definition of prime itself is contorted to exclude it!

OK, so why is that? Well, the answer is probably not as satisfying as you might like: convenience. By excluding 1 as prime, one can state other theorems more concisely. Take the Fundamental Theorem of Arithmetic, for example:

Every integer greater than 1 either is a prime number itself or can be represented as the product of prime numbers; moreover, this representation is unique, up to (except for) the order of the factors.

Now, this statement would not be true if 1 were a prime since, for example, 6 = 2 × 3 but also 6 = 2 × 3 × 1 and also 6 = 2 × 3 × 1 × 1, etc. That is, if 1 were prime, the representation would not be unique and the theorem would be false.

However, if we do choose to include 1 as a prime number, all is not lost. The Fundamental Theorem of Arithmetic would still be true if it were stated as:

Every positive integer either is a prime number itself or can be represented as the product of prime numbers; moreover, this representation is unique, up to (except for) the order of the factors and the number of 1’s.

Which version do you prefer? In either case, both the definition and theorem treat 1 as a special number. I’d argue that in this context, the number 1 is more of an annoyance that gets in the way of the deeper concept behind the theorem. But in mathematics you must be precise with your language. And so 1 must be dealt with as an awkward special case no matter which way you slice it.

So, is 1 prime, or not? Well, it depends on how you define it. But in the end it doesn’t really matter, so long as you are consistent. And understanding that is a much bigger lesson than memorizing some fact you were told in grade school.

The definition given in ABCs of Mathematics is not “wrong” any more than all of the other simplifications and analogies I have made are “wrong”. But, in case you were wondering, the second printing will be modified with the hope that everyone can enjoy the book. Even the angry people on the internet deserve to be happy.

Estimation… with quantum technology… using machine learning… on the blockchain

A snarky academic joke which might actually be interesting (but still a snarky joke).

Abstract

A device verification protocol using quantum technology, machine learning, and blockchain is outlined. The self-learning protocol, SKYNET, uses quantum resources to adaptively come to know itself. Data integrity is guaranteed with blockchain technology using the FelixBlochChain.

Introduction

You may have a problem. Maybe you’re interested in leveraging the new economy to maximize your B2B ROI in the mission-critical logistics sector. Maybe, like some of the administration at an unnamed university, you like to annoy your faculty with bullshit about innovation mindshare in the enterprise market. Or, maybe like me, you’d like to solve the problem of verifying the operation of a physical device. Whatever your problem, you know about the new tech hype: quantum, machine learning, and blockchain. Could one of these solve your problem? Could you really impress your boss by suggesting the use of one of these buzzwords? Yes. Yes, you can.

Here I will solve my problem using all the hype. This is the ultimate evolution of disruptive tech. Synergy of quantum and machine learning is already a hot topic [1]. But this is all in-the-box. Now maybe you thought I was going outside-the-box to quantum agent-based learning or quantum artificial intelligence—but, no! We go even deeper, looking into the box that was outside the box—the meta-box, as it were. This is where quantum self-learning sits. Self-learning is a protocol wherein the quantum device itself comes to learn its own description. The protocol is called Self Knowing Yielding Nearly Extremal Targets (SKYNET). If that was hard to follow, it is depicted below.

[Figure: the hype box. Inside the box is where the low-hanging fruit lies—pip install tensorflow type stuff. Outside the box is true quantum learning, where a “quantum agent” lives. But even further outside the meta-box is this work, quantum self-learning—SKYNET.]

Blockchain is the technology behind bitcoin [2] and many internet scams. The core protocol was quickly realised to be applicable beyond digital currency and has been suggested to solve problems in health, logistics, bananas, and more. Here I introduce FelixBlochChain—a data ledger which stores runs of experimental outcomes (transactions) in blocks. The data chain is an immutable database and can easily be delocalised. As a way to solve the data integrity problem, this could be one of the few legitimate, non-scammy uses of blockchain. So, if you want to give me money for that, consider this the whitepaper.

Problem

 

[Figure: above, the conceptual problem; below, the problem cast in its purest form using the formalism of quantum mechanics.]

The problem is succinctly described above. Naively, it seems we desire a description of an unknown process. A complete description of such a process using traditional means is known as quantum process tomography in the physics community [3]. However, by applying some higher-order thinking, the envelope can be pushed and a quantum solution can be sought. Quantum process tomography is data-intensive and not scalable, after all.

The solution proposed is shown below. The paradigm shift is a reverse-datafication which breaks through the clutter of the data-overloaded quantum process tomography.

[Figure: the proposed quantum-centric approach, called self-learning, wherein the device itself learns to know itself. Whoa.]

It might seem like performing a measurement of \{|\psi\rangle\!\langle \psi|, \mathbb I - |\psi\rangle\!\langle \psi|\} (where U is the unknown gate and V is our candidate description of it) is the correct choice, since this would certainly produce a deterministic outcome when V = U. However, there are many other unitaries which would do the same for a fixed choice of |\psi\rangle. One solution is to turn to repeating the experiment many times with a complete set of input states. However, this gets us nearly back to quantum process tomography—killing any advantage that might have been had with our quantum resource.

Solution

[Figure: schematic of the self-learning protocol, SKYNET. Notice me, Senpai!]

This is addressed by drawing inspiration from ancilla-assisted quantum process tomography [4]. This is depicted above. Now the naive-looking measurement, \{|\mathbb I\rangle\!\langle\mathbb I |, \mathbb I - |\mathbb I\rangle\!\langle \mathbb I|\}, is a viable choice as

|\langle\mathbb I |V^\dagger U \otimes \mathbb I |\mathbb I\rangle|^2 = |\langle V | U\rangle|^2,

where |U\rangle = U\otimes \mathbb I |\mathbb I\rangle. This is exactly the entanglement fidelity or channel fidelity [5]. Now, we have |\langle V | U\rangle| = 1 \Leftrightarrow U = V, and we’re in business.
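
To see why, write |\mathbb I\rangle = \frac{1}{\sqrt d}\sum_i |ii\rangle for the normalised maximally entangled state and expand:

\langle\mathbb I | V^\dagger U \otimes \mathbb I |\mathbb I\rangle = \frac{1}{d}\sum_{i,j} \langle i |V^\dagger U| j\rangle\langle i | j\rangle = \frac{\mathrm{Tr}(V^\dagger U)}{d} = \langle V | U\rangle.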

Though |\langle V | U\rangle| is not accessible directly, it can be approximated with the estimator P(V) = \frac{n}{N}, where N is the number of trials and n is the number of successes. Clearly, \mathbb E[P(V)] = |\langle V | U\rangle|^2.

Thus, we are left with the following optimisation problem:
\max_{V} \mathbb E[P(V)],

subject to V^\dagger V= \mathbb I. This is exactly the type of problem suitable for the gradient-free cousin of stochastic gradient ascent (of deep learning fame), called simultaneous perturbation stochastic approximation [6]. I’ll skip to the conclusion and give you the protocol. Each epoch consists of two experiments and an update rule:

V_{k+1} = V_{k} + \frac12\alpha_k \beta_k^{-1} (P(V_k+\beta_k \triangle_k) - P(V_k-\beta_k \triangle_k))\triangle_k.

Here V_0 is some arbitrary starting unitary (I chose \mathbb I). The gain sequences \alpha_k, \beta_k are chosen as prescribed by Spall [6]. The main advantage of this protocol is \triangle_k, which is a random direction in unitary-space. Each epoch, a random direction is chosen, which guarantees an unbiased estimate of the gradient and avoids all the measurements necessary to estimate the exact gradient. As applied to the estimation of quantum gates, this can be seen as a generalisation of Self-guided quantum tomography [7] beyond pure quantum states.
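
If you would like to see the loop in action, below is a minimal single-qubit sketch in Python. The Hermitian parameterisation (which keeps V unitary by construction), the gain constants, and the shot count are arbitrary choices of mine for illustration, not a prescription from the references:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    d = 2  # one qubit

    # The hidden "true" gate U that the device will come to know.
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    U = expm(1j * (A + A.conj().T) / 2)

    def fidelity(V):
        # Channel fidelity |<V|U>|^2 = |Tr(V^dag U) / d|^2.
        return abs(np.trace(V.conj().T @ U) / d) ** 2

    def P(V, N=100):
        # Estimate the success probability from N simulated runs.
        return rng.binomial(N, min(1.0, fidelity(V))) / N

    def gate(t):
        # Hermitian generator -> unitary, so V^dag V = I by construction.
        H = np.array([[t[0], t[1] + 1j * t[2]],
                      [t[1] - 1j * t[2], t[3]]])
        return expm(1j * H)

    theta = np.zeros(4)  # V_0 = I
    for k in range(1, 2001):
        a_k = 0.3 / k ** 0.602                   # Spall-style gain decay
        b_k = 0.1 / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=4)  # random direction
        g = (P(gate(theta + b_k * delta)) - P(gate(theta - b_k * delta))) / (2 * b_k)
        theta += a_k * g * delta                 # ascend the estimated gradient

    print(f"final fidelity: {fidelity(gate(theta)):.3f}")  # should approach 1

Run it and you can watch the fidelity climb toward 1 as SKYNET comes to know itself.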

To ensure integrity of the data—to make sure I’m not lying, fudging the data, p-hacking, or post-selecting—a blochchain-based solution is implemented. In analogy with the original bitcoin proposal, each experimental datum is a transaction. After a set number of epochs, a block is added to the datachain. Since this is not implemented in a peer-to-peer network, I have the datachain—called FelixBlochChain—tweet the block hashes at @FelixBlochChain. This provides a timestamp and validation that the data taken was that used to produce the final result.
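
For the curious, the idea reduces to a hash chain. Here is a toy sketch (emphatically not the real implementation; that is in the source [8]):

    import hashlib, json, time

    class FelixBlochChainToy:
        # Each block commits a batch of outcomes to the hash of all prior data.
        def __init__(self):
            self.blocks = [{"prev": "0" * 64, "data": "genesis", "time": 0.0}]

        def _hash(self, block):
            return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

        def add_block(self, outcomes):
            block = {"prev": self._hash(self.blocks[-1]),
                     "data": outcomes, "time": time.time()}
            self.blocks.append(block)
            return self._hash(block)  # publish this hash (tweet it, say)

    chain = FelixBlochChainToy()
    print(chain.add_block([0, 1, 1, 0, 1]))  # timestamped commitment to the data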

Results

[Figure: SKYNET finds a description of its own process. Each N is a different number of bits per epoch. The shaded region is the interquartile range over 100 trials using a randomly selected “true” gate. The solid black lines are fits which suggest the expected 1/\sqrt{N} performance.]

Speaking of the final result, it seems SKYNET works quite well, as shown above. There is still much to do—but now that SKYNET is online, maybe that’s the least of our worries. In any case, go download the source [8] and have fun!

Acknowledgements

The author thanks the quantum technology start-up community for inspiring this work. I probably shouldn’t say this was financially supported by ARC DE170100421.


  1. V. Dunjko and H. J. Briegel, Machine learning and artificial intelligence in the quantum domain, arXiv:1709.02779 (2017)
  2. S. Nakamoto, Bitcoin: A peer-to-peer electronic cash system, (2008), bitcoin.org.
  3. I. L. Chuang and M. A. Nielsen, Prescription for experimental determination of the dynamics of a quantum black box, Journal of Modern Optics 44, 2455 (1997)
  4. J. B. Altepeter, D. Branning, E. Jeffrey, T. C. Wei, P. G. Kwiat, R. T. Thew, J. L. O’Brien, M. A. Nielsen, and A. G. White, Ancilla-assisted quantum process tomography, Phys. Rev. Lett. 90, 193601 (2003)
  5. B. Schumacher, Sending quantum entanglement through noisy channels, arXiv:quant-ph/9604023 (1996)
  6. J. C. Spall, Multivariate stochastic approximation using a simultaneous perturbation gradient approximation, IEEE Transactions on Automatic Control 37, 332 (1992)
  7. C. Ferrie, Self-guided quantum tomography, Physical Review Letters 113, 190404 (2014)
  8. The source code for this work is available at https://gist.github.com/csferrie/1414515793de359744712c07584c6990