Bitcoin

One of the reasons I write blog posts is that I find it interesting to take snapshots of my thinking as a way of documenting how my viewpoints evolve over time. Sometimes they stay relatively stable, and at others they shift quite dramatically. For example, in July 2020 I wrote my thoughts on coronavirus, at a time when vaccines still hadn’t been rolled out, and many countries (including my own) were in some form of hard lockdown. Questions of waves, variants, masks, and so on were going through their first oscillations with public, scientific, and official sentiment wavering back and forth. The debates were just starting to take on political hues. Arguments about censorship, misinformation, disinformation, and so on, were only just beginning to warm up.

When I look back at that post now I find it all to be pretty reasonable, given what we knew at the time (which was generally less than we would have liked). The tone is one of prudence and caution, of provisionality born out of ignorance. If I were to summarize my feelings about COVID and the pandemic now, I’d tell you how I’m quite a bit more relaxed about the virus these days, and how my most significant concerns are actually about what we have learned (or failed to learn) about how to respond to extreme incidents. As a society, I think we’ve come out of this traumatic collective experience with our sensibilities, and our institutions, somewhat deranged. I think we now face the twin risks of having primed ourselves to overreact when something happens in the future, while paradoxically also having attained such levels of fatigue that we may not react enough the next time we face a threat.

But this isn’t a post about COVID. It’s about Bitcoin and the topic of cryptocurrency[1] in general. This feels like another one of those highly contentious topics where interested parties, whether they be "crypto maximalists" or "crypto sceptics", are debating an incredibly complex subject with an essentially unknowable future — only time will tell. As I have only engaged with this whole domain in the most superficial manner over the years, I haven’t felt like I had anything noteworthy to write down until recently, and even now, I’ve only just dipped my toes in the water. But of late, Elon Musk has been in the news a lot, and where Elon treads, discussion of crypto usually follows (and precedes) him. Likewise, Lex Fridman has had a couple of prominent Bitcoin intellectuals on his show for multi-hour discussions (examples: Michael Saylor, Saifedean Ammous) that were pretty interesting, and so I’ve been thinking about crypto in a deeper way than I had until now.

Bitcoin first came onto my radar around 2011. Back then, a single Bitcoin cost about $1. I actually tried to buy a couple, just because I found the idea of a recently invented, purely digital, cryptographically-based currency to be nerdily amusing. Tried, I say, because I couldn’t muster the will to overcome the various sources of friction and inconvenience that purchasing Bitcoin entailed back then; I can’t even remember what the concrete obstacle was that stopped me from pursuing it all the way to the end. I wasn’t going to buy these coins to spend, or as an investment, but simply to have them, because I thought it was cool.

Since then, a lot has changed. Bitcoin’s price shot through the roof, increasing by thousands (or tens of thousands) of percent from its early days, depending on where you start measuring from. It became an object of speculation. The idea was ripped off, copied, and modified by literally thousands of hucksters, would-be innovators, and crypto nerds. Many a fortune was made or lost; individual and institutional investors alike made profits or got fleeced as coins and financial instruments flashed in and out of existence. Heavy hitters bought huge amounts of digital currency — the most prominent example I’m aware of being Tesla, which currently holds over $1B in Bitcoin — and governments and regulators started to show more interest in monitoring and taxing cryptocurrencies and securities. In a way that is mostly invisible to the average citizen, crypto mining started consuming an "interesting" proportion of energy production, and in a way that was very definitely visible to anybody trying to build a computer during the last few years, it dramatically impacted the availability of GPUs.

As I watched all this play out, my evolving stance was basically that of a crypto sceptic. As an investor, I’m a "buy-and-hold" type: I believe in holding productive assets (like shares in valuable companies), and I’ve never had any interest in speculation. I don’t like complicated financial vehicles; anything more complex than an index fund isn’t really my cup of tea. I couldn’t care less about puts and calls, synthetic instruments, leverage, and so on. To somebody who’s not that interested in the world of cryptocurrencies, Bitcoin itself seemed unpalatably volatile and not that useful for any of the Big Three reasons that are generally cited as "what money is" (that is, a medium of exchange, a store of value, and a unit of account). All of the other coins just seemed like absurd Ponzi schemes, mostly developed by people with far too recently acquired knowledge of programming, cryptography, and finance. I was utterly unimpressed by all the companies small and large scrambling to find a way to cram "blockchain" into their product offerings; never did the phrase "a solution looking for a problem" apply so readily. And obviously, the fact that human activity is driving the planet into a climate crisis sure makes the energy consumption of crypto activity look rather dubious. To my eyes, it seems at best frivolous, and at worst downright deleterious.

Anyway, I’ve done a bit of reading of late trying to understand if there are any intelligent arguments out there for why Bitcoin actually is a good idea for humanity, as opposed to just being an elegant technological construct. There sure as heck are intelligent arguments for it not being a good idea. And when you start talking about stuff like NFTs, which to me look patently absurd, you don’t have to look all that far to find some absolute demolitions penned by smarter and more informed people than I. Nevertheless, I’ve been wrong about lots of things in my life — and I’m adding to that list all the time — so it makes sense that I should at least try to find some of the best arguments in favor of crypto, so that I can develop a more nuanced point of view.

I’m not going to list all the things I’ve picked up here in this blog post. If you want to hear some articulate and well-informed advocacy for Bitcoin, just start with the two podcast episodes I linked to earlier and go from there. But I do want to describe a couple of interesting points that they made which I don’t think I would have arrived at on my own. These points aren’t necessarily facts as such, because this whole terrain is contested territory: economists, intellectuals, and other "experts" don’t even necessarily agree on how economies work and how they are best run, even in the present moment. And the area of dispute here is even harder than that — it’s not just about what is but what will be. The future is clouded with uncertainty, and while we’re talking about "crypto" here, all of this is embedded in complex social and economic systems whose future is yet to be seen. We make predictions, and we can even try to glimpse the future by modeling systems in more or less rigorous ways, but this business of predicting the future is fundamentally speculative by nature.

So, with that proviso out of the way, these are the things that I’ve found interesting in my recent explorations.

The first is about Bitcoin as money and not as an investment. Saylor would phrase this as the distinction between "property" (a thing you buy, like a house) and a "security" (like a stock). Ammous would make an argument about the Big Three: it’s not yet true that Bitcoin indisputably checks all the boxes, but he would say that, of all the cryptocurrencies that have existed so far, it is the one that will. In fact, he seems so certain of this that, if your time horizon is long enough (ie. you’re planning to hold the coins on the scale of years), you may as well consider it to be true already. For both of them, the fact that the quantity of Bitcoins in existence is only growing slowly and, ultimately, is capped, is key here, because it makes Bitcoin extremely resilient against inflationary forces. In the short term, the market is still small and volatility is high, but the thinking is that in the long term, Bitcoins will be much better at holding their value than any fiat-based currency (or even a gold-backed one or similar) can ever be, because the supply is categorically constrained.

An additional point they make, and which I am inclined to agree with, is that describing inflation in terms of a single "scalar" value that is regularly redefined to suit the whims of the agencies responsible for measuring it is woefully inadequate, and that the real inflation that we all experience is quite a bit higher than what this number tells us. As an investor in the stock market, I’ve long felt like I’m barely treading water with respect to inflation, despite the reportedly low figures quoted by official entities; there is indeed the sense that one’s money is evaporating before one’s eyes, and it’s only by exposing it to significant risk that one can have any hope of actually storing that value for later consumption. I haven’t held any bonds for a while, and in the current panorama I don’t see anything which would make it compelling to do so. People most often hold bonds in their portfolios as a way of dampening volatility. Right now, crypto swings up and down so violently that one couldn’t really expect it to serve a dampening role like that in a portfolio; it behaves much more like a risky tech stock than a reserve currency. If folks like Ammous are right, though, that may not remain the case: Bitcoin may one day make sense as the "cash" component in a portfolio. That would be cool, because existing forms of cash kind of, well, suck when it comes to being a stable store of value.

The second argument that I found to be rather surprising is the one about proof of work (ie. the thing which drives the energy usage in the Bitcoin algorithm) actually being a good thing. When asked what he thought about Bitcoin using more energy than some countries, Ammous said, "It’s worth it". That one floored me; until then, all those kilowatt-hours had struck me as egregiously profligate. For Ammous, Bitcoin mining is intrinsically valuable, because it is what makes Bitcoin work (ie. it is what makes it a truly distributed system without any privileged actors able to impose arbitrary modifications or do nefarious things like change the rules of the game or abscond with the funds of the beguiled), and Bitcoin working is a good thing in the world because it is the most perfect form of money (again, referencing the Big Three) yet invented, standing to free us all from the maladies of inflation and crude Keynesian economics. Since the advent of electricity we’ve been using it for all sorts of things that aren’t strictly necessary for survival, but which make our lives better. Ammous claims that Bitcoin is just one more life-bettering technology whose benefits justify its costs; you just have to know a bit about economics and monetary theory to connect the dots and see why.
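
For the unfamiliar, proof of work boils down to a brute-force search: miners race to find a nonce that makes the hash of a block fall below a target value, and the only way to find one is to burn compute cycles trying candidates. Here’s a toy sketch of the idea in JavaScript; it simplifies away Bitcoin’s actual block header format, double SHA-256, and difficulty encoding, so treat it as an illustration rather than an implementation:

```javascript
const {createHash} = require('crypto');

// Toy proof of work: find a nonce such that the SHA-256 hash of
// the block data starts with `difficulty` zero hex digits. Each
// additional digit makes the search roughly 16 times more
// expensive, which is where the energy consumption comes from.
function mine(blockData, difficulty) {
  const target = '0'.repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    const hash = createHash('sha256')
      .update(`${blockData}:${nonce}`)
      .digest('hex');
    if (hash.startsWith(target)) {
      return {nonce, hash};
    }
  }
}

console.log(mine('example block', 4));
```

The asymmetry is the point: finding a valid nonce is expensive, but anybody can verify it with a single hash, which is what lets the network agree on history without a privileged referee.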

I wasn’t expecting that. I was expecting something along the lines of proof of stake being the solution; like, just as John the Baptist paved the way for the coming of Jesus H. Christ, Bitcoin with its proof of work is a forerunner of some improved cryptocurrency with proof of stake at its heart instead. Maybe a layer on top of Bitcoin, or maybe some kind of successor. But no, that’s not the argument at all. In fact, Ammous thinks proof of stake is totally bogus. It might solve the energy problem, but it does so at the cost of jettisoning the single most important property of Bitcoin and proof of work; namely, that nobody controls Bitcoin — there is no way for any single entity to fuck things up because it really is distributed (ie. no central authority) and there’s no way of changing the rules. You can hard-fork, but that’s not changing the rules, just changing a copy of them, and network effects make Bitcoin’s preeminent status effectively unassailable. As thousands of failed coins demonstrate, none of these copies has gotten critical mass behind them.

The other defense he proffered was that most of the energy powering Bitcoin mining is cheap (ie. plentiful, abundant) energy, precisely because competitive forces will naturally drive miners towards the cheapest energy. As such, a lot of it ends up being green. Now, I’m aware that this is a contested point, kind of like "vaccines are worth the downside risks", but it is a reasonable point. I couldn’t help thinking of the Neal Stephenson novel Fall; or, Dodge in Hell where, spoiler alert, humans end up building massive solar panel arrays in orbit (with the eventual trajectory leading humankind in the direction of building a Dyson sphere, completely enclosing the sun and capturing most of its energy output). In the novel, this energy ends up being used to power a computationally expensive simulated reality in which the scanned-in souls of the dead end up running in the cloud, enjoying a kind of "life" after physical death. Now, that irreal existence seems rather pointless on some level — not unlike endlessly computing block hashes to keep a virtual currency running — but the dividing line between biological and computational processes is rather blurry in this book. If you figure it’s worth burning all that energy on eternal digital life, then maybe it’s not unreasonable to also consider it worth burning a huge quantity to enable "eternal" digital currency. Fiat currencies are already pretty divorced from actual circulating specie, as most of the monetary supply is created out of nothing via credit; it’s not clear that that is any more "real" than Bitcoin is. At least Bitcoin is the product of mathematics, a well-defined algorithm, and an element of randomness. Fiat currencies, on the other hand, are driven by utterly arbitrary and unpredictable dynamic systems comprised of incomprehensibly complicated configurations of individual agents, institutions, governments, and markets.

In conclusion, I find these arguments to be quite reasonable, but not compelling, because they’re contingent on the future unfolding in a particular way in which more and more participants in the system converge on Bitcoin as being the ideal hard currency and store of value in the long term. All the frothy shitcoins, Ponzi schemes, and "smart" contracts[2] need to stay where they belong, on the fringes, such that they don’t destabilize the rest of the system. There can be no crisis of confidence, or at least, no fatal one. This is a system that requires solid psychological support as much as it does utterly robust technical foundations. Bitcoin is already a reasonably good store of value, and even a not-bad medium of exchange; for any more complex demands, other structures can be layered on top — the bit that really matters is that it needs to be a durable store of value. If we can really liberate ourselves from the ever-present menace of wealth-destroying inflation, that would be pretty darn neat. Companies need to stop trying to ram blockchain-shaped pegs into round holes where they don’t belong; they’re just creating a distraction. More and more agents need to start treating Bitcoin as a long-term, inflation-resistant alternative to fiat cash, enough of them that Bitcoin starts to take over from bonds as a means to dampen volatility; note that to do this, participants in the system need to treat Bitcoin less like a high-risk speculative investment and more like ballast — unglamorous, uninteresting, predictable. We need to create more and cheaper sources of clean energy, and while we’re at it, find ways to meet consumer demand without crypto hardware purchases drastically distorting the market. Lots of things can and may go wrong on the way.

Overall, if everything turns out the way the proponents proclaim, I think that would be great. Right now I’d put the odds at 50% (plus or minus 25%) that things do head in this direction over the next couple of decades. But those aren’t very certain odds, and not the kind that would tempt me to sink my life savings into this stuff. I’m pretty conservative when it comes to my investment choices, so I wouldn’t put anything but "play money" into Bitcoin, and nothing into any other crypto. In any case, I’ve never had any play money in my investing life, so this is all hypothetical. And even if things do go according to the rosiest of predictions, there remains the fact that securing crypto assets feels like a step backwards into some kind of dark age prior to modern banking and federally insured deposits; keeping your private keys safe from theft, damage, or loss feels not unlike securing physical gold. I don’t relish the idea of having many thousands of dollars’ worth of digital assets under my physical custodianship any more than I like the idea of trying to somehow safely stash a bunch of gold coins under my bed. There is something comforting about the various checks and balances implemented by old banks to make sure that the numbers tallied against your name in their ledgers stay securely and exactly where you put them, without tampering. I couldn’t care less about crypto’s touted benefits of anonymity and its illusory freedom from government vigilance and expropriation. Those risks are real in the current fiat-based system, but seem so utterly distant and unreal to me in my cozy European life in the bosom of the EU. In short, the idea of having personally to safeguard any significant amount of crypto assets scares the crap out of me, and I can’t begin to imagine how elderly or non-tech-savvy folks might navigate these waters if crypto assets end up expanding to occupy a significant chunk of the monetary and financial pie in the future.

So there you have it — my mid-2022 snapshot of what I think about Bitcoin. I’m still very early on in my research of both the for and against arguments in this arena, but I intend to keep studying. It beats reading about COVID, or Ukraine, or the Depp-Heard trial, and I’ll be interested to revisit this post some years from now and see if the world, or I, have moved on at all.


  1. And out of economy, I’ll be using the shorthand "crypto" a bunch of times in this post, even though it always irked me that it’s an appropriation of a word that was previously used to designate the broader field of "cryptography" itself. Sometimes you’ve just got to accept when these linguistic battles are won and lost, and go with the flow. I still remember how I used to call blogs "weblogs" for years after everyone else stopped doing so, simply because I stubbornly hated the laziness of the contraction… ↩︎

  2. In my experience, few things that people tack the prefix of "smart" onto end up being so. ↩︎

Epistemology

We come into this world as babies awash in a flood of sensory input, aware of sensations but not of ourselves. At first, we don’t even perceive something as simple as the boundary between our selves (our physical bodies) and the rest of the universe. There is no theory of mind, no object permanence, no ability to form durable memories. With time, we start to "know" things. We accumulate knowledge based on the immediate physical properties of objects (eg. "sharp things cut", "dropped objects fall"). As we grow, we become capable of communication, symbolic manipulation, and abstract thought. We come to know things that we may not be able to directly see with our senses or reliably verify (eg. "the world is round", "inflation is 8%"). Some of the things we think we know aren’t actually true (eg. "Santa is an immortal omnipotent being who delivers presents to basically every child in the world on Christmas Eve") and we later learn them to have been false. But as we progress through our education there is a sense of gradual illumination, of coming out into the light, acquiring an ever more accurate and extensive knowledge of the world we live in as we rid ourselves of successively more and more falsehoods and misconceptions.

I’m no philosopher, but like most people, I’ve had plenty of time since attaining adulthood to reflect on the nature and limits of knowledge. My formal education is mostly behind me now, but I continue to learn things, mostly by "self-teaching" or simply doing things that require me to develop my skills. The sum of human knowledge accessible to me is effectively boundless because it is accumulating at a rate which far outstrips my ability to assimilate it. So, paradoxically, even as the human race collectively creates an ever greater body of knowledge, the predominant sensation I experience as an individual is that of relative ignorance. I come to share in an ever smaller percentage of humanity’s achievements, even if I commit myself to lifelong learning. And that is to say nothing of the difficulty of distinguishing "good" information from "bad", or correctly identifying the bits of knowledge that are actually worth having.

That’s the baseline situation: once you get beyond the trivially verifiable results (eg. "sharp things cut"), you find yourself dealing with an unmanageably large repository of claims in the realms of science, technology, history, and countless other fields. The way we deal with this is to work in terms of abstractions, symbols, and generalizations. We delegate the work of sorting information according to varying degrees of "true" and "false", "useful" and "useless", "interesting" and "unremarkable" to other people. We simplify things (or more accurately, we rely on other people to simplify things), reduce them, distill them to their essence. This always involves leaving something out, but the apparent loss of fidelity seems to be a pretty good trade-off in practical terms. Rather than getting stuck trying to deal with irreconcilable complexity, incomplete data, and crippling uncertainty, we come up with an imperfect approximation and use that instead. Based on what we observe in society around us, a lot of this simplifying, delegating behavior seems to have a pretty clear evolutionary basis; at least, it sure looks like it comes naturally to us. And trying to look at things as objectively as possible, it’s hard to see any other way that we could do it, given the limitations imposed on us in the forms of cognitive resources, memory, and lifespan. An individual can figure some things out from first principles, but far from all things.

With all that in mind, there are a number of prompts that might lead you to think about how you know what you know, what you know, and indeed, what you even mean when you speak of knowing something at all.

Technology

In my field[1], there are a couple of tropes that circulate involving seniority, experience, and knowledge, but they both basically boil down to the same thing.

The first is the Dunning-Kruger effect, commonly described as a cognitive bias in which beginners tend to overestimate their ability, while more experienced folk know enough to "know what they don’t know" (and may in fact underestimate their ability).

The second is the notion that, the more senior an engineer is, the more likely they are to answer "it depends" in response to any question. One imagines an enthusiastic junior developer confidently stating that the solution to all our problems will be found in a microservice architecture built using a hyper-modern technology stack that just came out six months before, while a wizened, gray-haired one won’t even commit to answering a simple question like "do we have enough unit tests?".

Having written my first computer program about 38 years ago now, I can definitely relate to both of these tropes. From an epistemological perspective, I feel like the amount of "actually useful" stuff that I know only built up slowly over time, painstakingly accumulated with great and long effort. At least for me, gains have only compounded slowly; breaking my career up into five-year chunks, I probably got twice as good every chunk for the first few, but after that progress slowed to an at-best linear climb. I’ve forgotten at least as many things as I now remember, and I can only hope that there aren’t too many "actually useful" things among the lost bits. Much of the most valuable "knowledge" I now wield has less to do with my corpus of concrete "facts" that I can dredge up out of my memory (or find quickly with the aid of a search), and more to do with my general faculties for pattern recognition, and for identifying when problems and solutions viewed in different contexts are close enough to the same thing that comparing them can provide a way forward. But, if this is "knowledge", it is a very intangible and almost mystical form of it. I hesitate to call it "wisdom" because that implies a value judgment that it is invariably good, when I think it is actually closer to instinct or intuition: sometimes a remarkable and helpful shortcut, and at other times woefully inadequate.

Information bubbles

By 2016 when Trump got elected, it seemed dismayingly clear that society was divided into two barely-interacting bubbles that didn’t seem to occupy the same reality any more. Fed by contradictory news sources (whether they be cable TV channels, media conglomerates, independent journalists, social media influencers, or bots), the world seemed to be notably more polarized. The two main political camps no longer seemed to differ merely in how they thought things ought best be done, but rather, they no longer agreed upon even the simplest set of common facts that only years before had appeared to be undisputed.

I actually think the level of polarization here may be a bit overstated. Fifty years ago, you would also get a different view of the world depending on which newspaper you bought. The difference is that today, the internet provides immediate access for anybody to see what "the other side" is saying. And there’s a meta-current going on that means that people can’t stop talking about what "the other side" is saying. As such, I don’t think we’re more divided now than we were, or at least, not much more divided. Rather, we’re more aware of how divided we are. Unsurprisingly, one of the reasons why we’re so keenly aware of how divided we are is that people can’t stop talking about how divided we are… Evidently, panic sells. Algorithms that foster engagement only exacerbate the underlying tendencies.

Apparently, we’re supposed to be scared of how strongly the internet amplifies these effects. That may be true, but I’m not convinced. As much as the internet might sow division, it also provides people with an endless stream of distracting entertainment that keeps most of them totally absorbed, ever looking at their devices’ screens, to the exclusion of everything else in the outside world around them. Not all that inclined to go outside and start a violent uprising. And here’s another thing: where I live (Spain), there actually was a civil war not all that long ago where people were literally killing each other for political reasons without any help from the internet. And the country I lived in immediately prior to coming here (the USA) also had a civil war without the internet. Remarkably, in both cases, these civil wars happened even though both sides had the same color skin and believed in the same god.

To partly inoculate myself from the effects of these bubbles, I started building a habit a few years ago of drawing my news from multiple sources: authors spanning the political spectrum (ie. I read coverage of major events in a "left" newspaper and then a "right" one), working in multiple forms (newspapers, podcasts, blogs, tweets etc), operating at different scales (independents, corporations etc). The idea isn’t to take multiple samples and somehow average them, but rather, to maintain an awareness that knowledge is contested, and to not lose sight of the fact that everything is provisional. So, I might read a left-wing newspaper that reports on an event and interprets it as a warning of in-progress climate catastrophe, and a right-wing one reporting on the same event as nothing to worry about. The "truth" in all this is probably not to be found by constructing a mid-point between the two poles ("this event is evidence of climate change that will be 50% catastrophic"). I find the practice useful even if the work of settling upon "the actual truth" is left as an "exercise for the reader". Sometimes it is a reminder to go to primary sources (and if you do, to subject those to the same level of critical reading that you subject the newspaper to). Sometimes it is just a reminder that beliefs tend to cluster in ways that have more to do with ideology than evidence. And evidence remains the fundamental bedrock where all these discussions must eventually find a basis.

Science

Climate change is a good example because it’s one of those things that seemed to be "just science" back in the 1990s, but since then has become a matter of dispute in a way that caught me off guard. I mean, I get why the religious right and the feminist left will argue about abortion. I get why liberals and conservatives will differ on how to tax (and who, and how much), what to spend money on, and how big government should be. But I didn’t expect to see a political divide spring up around climate change, given that its reality appears to be so overwhelmingly obvious. You know, I can imagine an alternate reality where the left says, "Humans are warming the climate and the consequences are going to be catastrophic unless we take - and perhaps even if we do take - drastic action immediately", and where the right responds, "Yeah, it’s true that climate change is concerning, but we’ve got to keep the economy running because if we don’t then that will be even worse; in any case, we believe that the free market will naturally converge on solutions to climate change before the problem gets really bad". We could then have a debate about how bad the problem really is, and what should be done about it.

Instead, what we have is the left behaving as predicted, and a highly visible segment of the right (I have no idea as to their actual numerical weight) going far beyond "climate change is bad, but…" all the way to "climate change related to human activity isn’t even a thing". Seeing that was a real double-take moment for me, as somebody who came of age soon after the fall of the Berlin Wall and who had unwittingly internalized a narrative of human progress as an unstoppable upwards march to ever greater heights of unity, justice, technology, and so on.[2]

So climate change is curiously and particularly fraught when it comes to establishing truth. But even when we keep politics mostly out of it, there are lots of areas in science where it is remarkably hard to sort out wrong from right. Take nutrition for instance. For pretty much any claim you might wish to evaluate (eg. "eating meat is bad for you") you can find ample evidence both in favor and against. Real evidence, too, not just random Instagram influencer posts but supposedly vetted scientific studies in peer-reviewed academic journals. Politics, too, comes into the science of nutrition, but not so obviously along party lines. That is, large industries pump money into political, educational, and research systems to clear the way for their profit-making activities, but you tend not to find that much of a red-blue correlation when it comes to questions such as "what causes diabetes?"; I’m not saying there is no correlation, but rather that it’s not significant enough to matter, because these industries shower both sides of the political aisle with funds in order to ensure their interests are adequately protected.

Overall, surprisingly large fields within science are battlegrounds in which agreement is less broad than you might think. I expect there’s probably overwhelming consensus about, say, orbital mechanics at the scale of our solar system, but as you venture farther afield into the realms of genetics, biology, medicine, and so on, things get complicated real fast.

The pandemic

At the intersection of epistemology, politics, the role of the internet, science, and information bubbles, we find fascinating material for study in the form of the pandemic. What was known about the virus, about the efficacy of masking, new vaccines, and other countermeasures, has been hotly contested and rapidly changing, sometimes on the scale of weeks or even days, in ways that have played out differently from country to country, and fascinatingly mediated by political factors and immensely powerful pharmaceutical sectors, lobby groups, and organizations.

Despite the way the pandemic has pervaded, even dominated, numerous aspects of my life in very palpable ways (for example, via home confinement, being forced into remote work, home schooling, mask mandates, obligatory vaccines, and so on), it’s remarkable to me how indirect my experience of and knowledge about the actual virus has been through all this. The virus itself is invisible to the naked eye, so I’m basically taking it on trust that the thing actually exists, having bought into the germ theory of disease (something which I have never personally verified, but which is effectively an article of faith that I profess along with a host of other post-Enlightenment scientific discoveries). Nobody in my family had COVID, at least knowingly. I never went to a hospital nor saw any patients. While the news media and the government reported the death toll, nobody I personally knew passed away. To me COVID was a set of numbers published on a website, a collection of photos and videos, a stream of talking heads, and tweets. Of course, I’m not actually questioning that any of this happened; I’m just pointing out that, like so many other things I claim to "know" in this life, all the evidence I have for it is indirect, a copy of a copy of a copy, passed along a chain of trust that spans individuals, institutions, businesses, politicians, and media organizations.

War

An interesting parallel is to be had with the war in Ukraine. Just like the virus, I don’t have any direct sensory confirmation that the war is happening; almost my entire "experience" of it is mediated by evidence transmitted from journalists and government sources. But one might be forgiven for asking: might this war have anything in common with the perpetual war portrayed in Orwell’s 1984? That is, somehow exaggerated, manufactured, or perhaps even totally fabricated for some nefarious manipulative ends? Like lots of things you see online nowadays, some of the footage has been shown not to be of Ukraine at all (having been posted or re-posted by well-intentioned but mistaken social media users, or maliciously by mischief makers), although the overwhelming majority of it does appear to be accurate and repeatedly confirmed. Again, needless to say, I am not seriously questioning the war’s reality; I’m merely pointing out the limits of certainty we can have about anything that isn’t the result of direct first-hand experience. Just because we haven’t seen something ourselves doesn’t mean it isn’t true; we construct beliefs on this basis all the time. Sometimes we’re wrong about them, sometimes right.

It is my local interactions with refugees that have provided me with the closest thing to direct confirmation of the war obtainable without jumping on an aircraft and seeing it with my own eyes. Otherwise, even the most committed tin-foil hat conspiracy theorist would find it hard to believe in the quality of the child actors that would have to have been recruited to play the role of refugees, unceasingly over many weeks, never falling out of character, feigning limited knowledge of Spanish and sporting thick eastern European accents in the schools and playgrounds of Madrid of late.

While I think that seeing conspiracy lurking in every shadow is clearly a maladjustment, it does seem that maintaining an open mind and a willingness to question the basis for our beliefs from time to time is something healthy. As an example, prior to the Snowden revelations I would have dismissed a lot of the speculation about NSA data collection as at best wildly optimistic about the capacity and ambitions of the organization, and at worst downright infeasible crackpot conspiracy theorizing. What do you mean, I scoffed, that the NSA would compromise entire hardware supply chains to further their eavesdropping capability at a global scale? The NSA are just a bunch of crypto nerds! The documents, repeatedly authenticated since then, showed my initial position to be rather naive. They did all that and much more.

History

If one can’t be sure of what’s happening right now, what, then, can we say about things that happened decades, centuries, or millennia ago? If you look back far enough, we don’t have many options beyond digging things out of the ground, or looking at things above ground that are big enough and enduring enough to tell us something about the distant past. By these means, we know that humans built pyramids in Egypt thousands of years ago, and that others made flint arrowheads or clay pots in other parts of the world. That kind of knowledge is probably more interesting than "useful" as such.

But looking much more recently, even in the last century, it can be surprisingly difficult to get a handle on what happened. There is no one "history" waiting in the library for any reader whose interest is piqued to go and consult. Rather, multiple histories compete to establish both the factual record and the interpretation of that record in so many dimensions (in terms of morality, rights, power, economics, and grand narratives of "progress", "struggle", "meaning", and so on). Consider how a history of Europe written in the Soviet Union might differ from one penned in England, in Italy, or across the pond in the USA. Our own internal biases might lead us to feel that there is one "real" history that we can pick out from among the rest just by looking at them. What might an alien visitor make of it all when offered a bunch of contradictory explanations of the same time period? I guess aliens have historiographers too.

In between the earliest civilizations just out of prehistory and our most recent past, there are some rather significant historical periods somewhere in the middle, such as the time when Jesus, or Muhammad, or the Buddha supposedly walked the Earth. We have ancient sacred texts that purport to tell us what happened concerning these figures, and a lot of blood has been spilled over the years (and continues to be spilled today) in the name of carrying out the divine purposes spelled out in those tomes. It fascinates me that a human might bend their life, or even sacrifice it, based on something they believe to be true from these books. You know, 2,000 years ago there was apparently this guy and there was this one time he turned water into wine (etc… I’m leaving some details out here) and you should therefore literally structure your entire life around that narrative despite the absence of any confirmatory evidence in the intervening millennia, and the much more plausible explanation for all of this that these texts were cooked up by pre-scientific societies desperately trying to construct a philosophical edifice that would imbue their brief existences with something approaching meaning. But I digress…

Fundamental physics

From religion, let’s jump straight to physics, an arena in which religious apologists don’t hesitate to point out that our modern beliefs often come to resemble articles of faith. And yes, I can see the point they’re making there. There are a lot of experiments that I actually can perform using high-school mathematics and materials available in my household (or neighborhood), but beyond those, I am indeed taking it on faith that our best and most advanced models have mostly been worked out by a bunch of smart people who have dedicated their lives to studying this stuff, using mathematics that is presently far beyond my reach, and experimental equipment obtainable only through staggeringly large investments that only the most affluent governments can afford.[3]

So, I "know", or at least believe, that we’re all made out of atoms, although I’ve never seen one with my naked eye, and I haven’t even seen much larger assemblies of atoms (atoms into molecules, and molecules into cellular building blocks, and then cells). My most fundamental, concrete knowledge of the world only begins with the larger structures that I actually can see, like a human hair, or skin and so on. I’m basically trusting that a lot of science that I haven’t personally verified has been carried out more or less correctly, in good faith, to give us a reasonable understanding of how a lot of things work in the universe, from the tiniest scales right up to the largest (cosmological) ones. So, I think the earth is round because scientists tell me so, and what I see when I look out a plane window when flying at high altitude seems more compatible with that than with a flat-earth explanation. I could try and confirm this further via experiment, but I’m already satisfied by the weight of the informal evidence. Likewise, I think the earth spins on its axis and goes around the sun because — wait for it — scientists tell me so, and when I look at the way objects move through the sky, the way the seasons turn, and so on, all of that seems to be quite neatly explained by that model of the solar system.

The funny part is, no technological innovation awaiting us in the future will ever allow us to pierce the veil that separates us from the smallest subatomic scales. We can see "pictures" of lattice-like crystalline structures in which you purportedly can "see" individual atoms, but they’re really just tricks of visualization. The atoms surely are there, but they’re not made of little round billiard balls of hard stuff constituting their nuclei, nor of zippy little electrons neatly orbiting around them in orderly shells. At that scale, stuff starts to get really weird. Our best quantum theories tell us that reality at that scale is fundamentally different; it’s fruitful to dwell on the particle-like aspects of matter and energy for some purposes, and for others it’s useful to focus on the wave-like aspects. But what is really going on down there? The answer, it seems, is that it is somehow both.

And as such, we’ll never have a "photo" of an electron. Not only won’t you ever see such a thing with your naked eye, but it’s meaningless to even think of what it would look like if you could actually "zoom in" on one. I see macroscopic objects because my sensory apparatus is sensitive to the aggregate effect of quadrillions upon quadrillions of photons bouncing off the objects (or being otherwise emitted from them) and impinging on my sight organs. But what would I hope to see by sending a bunch of photons to interact with an electron? It would be like trying to divine the structure of an egg by "touching" it with a vast host of bullets fired out of a machine gun[4].

Likewise for things like quarks: the theories that describe them have immense predictive power, and their predictions have been experimentally confirmed over and over again, but we can only "observe" quarks indirectly by way of their side effects because they can’t exist in isolation. For these subatomic particles (again, thinking of them as "particles" is at best a kind of analogy that provides us with a label for a vibration in a quantum field) it doesn’t even make sense to produce an image by bouncing visible light off them. At this level, a layperson like me claiming "knowledge" of what is happening requires me to operate at such a low precision, with such an enormous loss of fidelity and accuracy compared to that wielded by physicists who actually deeply understand the theories, the maths behind them, the nature of the experimental mechanisms, and so forth, that it’s all really quite ridiculous. And even for them, their understanding often rests on such towering pillars of abstractions, deductions, and inferences that it’s difficult to really assign the label of "knowing" to the thing that they do with their subject matter.

"Meta" stuff

Grant me the hypothetical in which I somehow found the time and had the ability to actually arrive at a much greater degree of certainty about these questions of fundamental physics. I study for years and years. I repeat experiments. I immerse myself in the material until it is as intuitive and natural to me as is the parabolic arc[5] of a baseball sweeping through the air towards the glove of a seasoned player.

Even after all this, there would remain an inaccessible realm of unknowable but possibly true claims about existence and the universe, forever beyond all possibility of verification by me or anybody else inhabiting this universe. For example, do we live in a multiverse? Eliding some of the nuances here, let’s take the "many worlds" interpretation of quantum mechanics as a version of the multiverse idea. In this interpretation, measurement effects are explained by positing that the apparent collapse of the wave function in response to measurement events is actually just the universe splitting into multiple universes that are distinguished by different measurement outcomes. This sounds weird, right? ("Measurements" happen with mind-boggling frequency, and before you know it you have "10 to the power of some number with a ridiculous number of digits" universes.) The math, however, checks out: energy is conserved because each new universe is not a full "copy" of the original universe created out of "nothing"; rather, the average energy across all the "copies" ends up being the same as it was prior to the split.[6]

This sounds a bit "counter-intuitive", to say the least, because by definition, any such splitting would have no discernible effect on us; the universe splits, "we" continue on in our universe without any access to the universes cleaved away from us (the "we" in the other universes is distinct from and unconnected to us, other than having shared a common origin with us). There are no experimental predictions that a "many worlds" interpretation makes that we could use to affirm the existence of these other universes; about the only thing we can say is that the interpretation is at least falsifiable, because if the quantum bases that the interpretation is founded on are ever falsified experimentally, then the interpretation of the bases will have to go away with them.

Likewise, there’s that fun old question of whether all of this might just be a big simulation. Again, this is effectively unknowable, because all we can do is make intrinsic measurements inside the universe that we live in, looking to see whether they are consistent with our understanding of the laws that describe that universe. What we can’t do is step outside of the universe to observe it in operation, extrinsically, because the universe is, you know, "everything that exists" (and which we have access to).

Conclusion

So after all this, the conclusion looks pretty grim. We can’t trust what we’re told about reality as it exists today, nor any state in which it has existed in the past. The human race produces "knowledge" at a rate that exceeds our ability to absorb and assimilate it, and even if it stopped producing new knowledge right now, all but the purest mathematical constructs are embedded in social systems that make establishing their actual truth a difficult prospect, characterized as these systems are by political, financial, and ideological interests. Oh, and let’s not forget that even in a hypothetical system utterly sterilized of all such conflicts and concerns, there’s the baseline fact that humans aren’t perfect, our cognition is limited, and information gets lost or corrupted; even in a system populated only by the sincerest actors, working together with homogeneous intent and in good faith, obtaining certain knowledge would be a fraught process.

Our scientific theories are only ever provisional, a good chunk of the world is still in disagreement with other chunks of the world about which monotheistic religion is The One True Religion, and a lot of what seems like it should be a matter of fact actually concerns things that we can’t perceive with our sense organs and which we wouldn’t personally verify even if we could.

Finally, we might actually be computer programs, or somehow even less impressively, we might be higher-order emergent phenomena growing out of an absurdly simple set of automata[7].

The weird part is, none of that actually bothers me. I feel the ground beneath my feet and the taste of coffee on my tongue, and the bass notes coming out of my headphones somehow please me in a way that makes me suspect some signalling hormone is being secreted somewhere in my body to say "this feels good". This is enough.


  1. I work as a software engineer at a large technology company called GitHub. ↩︎

  2. In the interests of balance, you can find examples of the opposite kind of thing happening in other areas. That is, instances where the right has stayed more or less where it was, and the left has run way off into the fringes. A recent example that comes to mind is the right making perfectly reasonable claims in defense of free speech, which is one of those rare areas where we actually had rather broad agreement across the political spectrum, and the left pivoting hard towards censorship and compelled speech. ↩︎

  3. The standard rejoinder to the argument that science is, in fact, no less reliant on faith than religion, is that people who actually understand the scientific method make no such demands for faith at all. Religion demands faith on the basis of divine authority: "this is so and true because I, the ultimate power in the universe beyond all knowing and human ken, have decreed it to be so". Science asks for no faith; it seeks only to develop an increasingly accurate explanation for observable phenomena based on mathematical formalisms that can make predictions. Even the most ardent defender of something like Quantum Field Theory would abandon it — and will abandon it — as soon as something comes along that can explain things more accurately (ie. more in line with experimental observations) or more completely. ↩︎

  4. This analogy isn’t perfect, I know, because the bullets evidently obliterate most of the structure of the egg, and what makes an egg an egg. The point is, though, that firing photons at an electron doesn’t give you a "picture" of the electron any more than the bullets do of the egg. Based on what I remember from high-school physics, the electron might absorb energy from the photon (jumping to a higher energy level), or it might scatter it, but bombarding the system with photons clearly has more to do with modifying the system than with capturing a meaningful visual representation of it. "Seeing" simply doesn’t make a lot of sense at this scale. ↩︎

  5. Neglecting friction and other aerodynamic and local meteorological effects. ↩︎

  6. Naturally, I haven’t verified the calculations implied by this claim. But I believe that people who are better at math than me have done so. ↩︎

  7. Shout-out to my homie, S. Wolfram! ↩︎

Why I don't like mocking (much)

This comes from something I wrote internally at FB some years ago, back when Jest used to default to auto-mocking (ie. providing automatically generated stubs for every imported module). Seeing as it doesn’t contain any sensitive internal information, I’m turning it into a blog post. I still feel pretty much the same way about mocking, but the good news is that Jest changed its defaults, so I no longer have to persuade people about the downsides of auto-mocking.


A lot of the problems I have with auto-mocking are really just extensions of the problems I have with mocking. There are a couple of really obvious, compelling use cases for mocks:

  • Verification of interaction behavior of the system at a well-defined, stable service boundary.
  • Isolation from unwanted or extraneous side-effects (often because they are expensive and so make things slow, or because they are hard to roll back).

Note that the definition of "service boundary" here permits some flexibility. It could be a service on a network, but it could also be a central module in the system that serves some kind of coordinating role. The important part is actually the "stable" qualifier. If the interaction is well-defined and stable over time, then using a mock probably won’t hurt you, even for a small module. In these cases you can build a robust, well-vetted fake, and sub it in for the real service. The fact that you do this for stable abstractions means that the "well-vetted" part won’t stop being true.
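
To make that concrete, here’s a minimal sketch of a hand-rolled fake at a stable boundary. All the names here (createFakeHttpClient, fetchProfile) are hypothetical, not from any real codebase:

```javascript
// Hypothetical system under test: formats a profile fetched
// through an injected HTTP client.
async function fetchProfile(http, id) {
  const user = await http.get(`/users/${id}`);
  return {displayName: user.name.toUpperCase()};
}

// A hand-rolled fake for the HTTP boundary. Because the interface
// (`get(url)` returning a Promise) is stable, the fake can be
// vetted once and reused across many tests without churn.
function createFakeHttpClient(responses) {
  return {
    get(url) {
      return url in responses
        ? Promise.resolve(responses[url])
        : Promise.reject(new Error(`unexpected request: ${url}`));
    },
  };
}

it('fetches and formats a user profile', async () => {
  const http = createFakeHttpClient({'/users/1': {name: 'Ada'}});
  expect(await fetchProfile(http, 1)).toEqual({displayName: 'ADA'});
});
```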

Once you stray from that well-trodden path, mocks can start to cost more than they’re really worth. It’s very easy to fall into one of two complementary pits of failure (the second of which is sketched after this list):

  • The mock itself is brittle and dependent on the (often irrelevant) implementation details of the system, requiring it to be constantly updated. It’s basically like a duplicate implementation expressed in a less convenient form.
  • The mock isolates too well, allowing tests to continue passing when they should fail.
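
Here’s what the second pit can look like in practice, using modern Jest syntax and hypothetical module names (a sketch, not code from any real project):

```javascript
// Suppose the real formatPrice() was changed to take cents
// instead of dollars, breaking every real caller. This test
// keeps passing anyway, because the stub never runs the real
// implementation: it verifies the mock, not the code.
jest.mock('../formatPrice', () => jest.fn(() => '$100.00'));

const formatPrice = require('../formatPrice');

it('formats the cart total', () => {
  expect(formatPrice(100)).toBe('$100.00'); // green, but meaningless
});
```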

Of course, by virtue of the fact that auto-mocking is, er, automatic, you end up having to deal with these deleterious effects all over the place. Or, you dontMock everywhere to get control back and apply mocking deliberately in specific cases. In the absence of deep white-listing, auto-mocking is a particularly heavy-handed hammer.
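
For those who never lived through the auto-mocking era, the opt-out dance looked something like this (module names hypothetical):

```javascript
// Under auto-mocking, every require() returns an auto-generated
// stub, so each test file opens with a litany of opt-outs for
// the modules (and transitive dependencies) that you want to run
// for real:
jest.dontMock('../ShoppingCart');
jest.dontMock('../money/Currency');
jest.dontMock('../money/formatPrice');

const ShoppingCart = require('../ShoppingCart');
```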

Before coming to FB, I spent a lot of time exploring different testing strategies, particularly varying along the axes of test-before/test-after and mock (verify interactions) vs black box (verify state). I really tried to give each one a fair shot, but I came down strongly preferring tests that bias towards integration, minimize the use of mocks, and do a lot of state-based verification. Ultimately the only thing that matters about a system is its externally visible behavior (side effects); so asking it to do something and then inspecting the result is more likely to deliver you a test that actually verifies the property that you’re interested in (ie. the behavior) and is less likely to break because of an unrelated change in implementation details (which by definition should not matter). Note that this is a "lesser of two evils" thing, where black-box/state-based testing ends up sucking less than mock-centric/interaction-based testing, but it’s still not a panacea. The way to elevate your code above this axis entirely is to frame as much of it as possible in purely functional code (not always possible of course, but still desirable).
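
A small sketch of the difference, again with a hypothetical ShoppingCart module:

```javascript
const ShoppingCart = require('../ShoppingCart'); // hypothetical

// State-based: ask the system to do something, then inspect the
// result. This survives refactoring as long as the observable
// behavior is preserved.
it('applies a discount to the total', () => {
  const cart = new ShoppingCart();
  cart.add({price: 100});
  cart.applyDiscount(0.1);
  expect(cart.total()).toBe(90);
});

// Interaction-based: assert that a collaborator was called in a
// particular way. Rename or inline calculateDiscount() and this
// test breaks, even though the behavior is unchanged.
it('delegates to the discount calculator', () => {
  const calculator = {calculateDiscount: jest.fn().mockReturnValue(90)};
  const cart = new ShoppingCart({calculator});
  cart.add({price: 100});
  cart.applyDiscount(0.1);
  expect(calculator.calculateDiscount).toHaveBeenCalledWith(100, 0.1);
});
```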

We noticed on Relay that we spent a lot of time dontMock-ing in our tests, because for better or for worse, stuff is interrelated in complex ways. Maintaining accurate mocks would have us dancing near those two pits of failure that I mentioned above. On the other hand, we have a well-defined external API boundary. Most of the things that Relay itself depends on are low-level enough not to need mocking, or to have good fakes available. This means that we can effectively live in a no-mock bubble, where we treat everything in Relay as an integration test with every other part of Relay. The system is small enough that there’s no significant speed penalty, and the absence of indirection and magic makes test failures easier to troubleshoot.

We do use mocks where it makes sense (ie. where verifying an interaction between components is more important to us, or easier, than verifying a stateful side effect). But these are generally manual mocks that we explicitly set up. When we create new modules, our default is to create a "passthrough" mock using require.requireActual and stick it under __mocks__, which gives us a place to later embellish with special behavior if we need to (we don’t often need to). This is definitely boilerplate, but it keeps the mocks out of the way and allows us to work everywhere the code is synced to without special configuration.
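
The passthrough boilerplate itself is a one-liner. For a hypothetical OrderValidator module it would look like this:

```javascript
// __mocks__/OrderValidator.js
// Passthrough mock: under auto-mocking, re-export the real
// implementation so the module behaves normally by default.
// Special-case behavior can be layered on here later if needed.
module.exports = require.requireActual('../OrderValidator');
```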