Notes on Vitalik Buterin's techno-optimism

Michael Nielsen
Astera Institute
November 29, 2023

Vitalik Buterin has recently published a thoughtful essay describing some of his general stance toward technology and techno-optimism. The following are a few quick notes on Buterin's essay, written to help me process it more deeply. They really are quick notes: if I'd taken more time, my notes would be briefer. If you're interested in the impact of technology on humanity then I strongly suggest reading his essay carefully.

Buterin's essay articulates a broad stance toward technology that synthesizes many ideas often thought to be in tension. It is not solely one of: "stop", "slow down", "go faster", "maximize profit", "maximize equity and fairness", "maximize safety", "reduce growth", "eliminate inequality", or any of the other usual messages. It attempts to harness market forces, without lionizing them; it takes both safety and equity seriously, without regarding them as the sole goals worth fighting for; it understands the enormous power technology has to make humanity better, but does not embrace a foolish technological utopianism1. Put another way: Buterin's synthesis combines many elements often taken to be in opposition. The result is a thoughtful articulation of what it means to take differential technology development seriously. One can reasonably disagree with much in the essay, but it's much deeper and better thought through than many other visions in a similar vein. While I have many points of considerable sympathy, naturally my notes focus in part on points of difference.

Onto the detailed notes!

I believe in a future that is vastly brighter than the present thanks to radically transformative technology, and I believe in humans and humanity. I reject the mentality that the best we should try to do is to keep the world roughly the same as today but with less greed and more public healthcare. However, I think that not just magnitude but also direction matters. There are certain types of technology that much more reliably make the world better than other types of technology. There are certain types of technology that could, if developed, mitigate the negative impacts of other types of technology. The world over-indexes on some directions of tech development, and under-indexes on others. We need active human intention to choose the directions that we want, as the formula of "maximize profit" will not arrive at them automatically… Our rapid advances in technology are likely going to be the most important social issue in the twenty first century, and so it's important to think about them carefully.

  1. Something I really enjoy about Buterin's essay: it's not a totalizing vision. This is already visible in this opening. Totalizing visions are often attractive (and even useful) as simple models of principle, but they tend to work horribly in practice. This is something I find myself disliking about a range of ideologies from Marxism-Leninism to e/acc's techno-capitalism. By contrast, Buterin's essay manages tensions. This doesn't have the simplicity of either e/acc or AI doomerism, but it seems to me a much more sensible frame for optimism. I particularly note his comment that "the formula of 'maximize profit' will not" necessarily result in a good allocation of investment in tech development. What, then, are the market alterations we need? Buterin doesn't really specify, either now or later in the essay, leaving it as a key problem. But cf. things like the price of carbon, and the general problem of supplying public goods – indeed, safety and security are often examples of such undersupplied public goods2.
  2. A question that I really enjoy is why technology does so much to help us live better lives. It's a very simple question, but all the answers I've heard (or have thought of) tend to be rather shallow and question-begging. Technology is a way of gradually doing more with the same resources, an actual free lunch3: it really, genuinely, creates wealth from nothing! Where is that free lunch coming from? What are its upper limits? We know a few very basic things – things like the Bekenstein bound and Einstein's mass-energy relation. But we don't have very good theories on the ultimate limits of technology. People sometimes tend toward doctrinaire answers – consider Julian Simon's book "The Ultimate Resource", which is sometimes trotted out to mean that innovation is unbounded – but while this point of view has its attractions, it's doctrine, not a well thought through answer. That said, while there really are limits to growth, they're far, far beyond what pessimists like Malthus, Ehrlich, and the Club of Rome ever thought.
  3. Much progress is produced not by technology, but by social ideas – new norms, new forms of organization, and so on. Voting, civil rights, suffrage, and so on – these were some of the biggest increments ever in human civilization. Technology sometimes helps (and sometimes hinders) the development of such norms – often it creates abundance that helps enable such social advances. Buterin certainly understands this, but I wanted to make it more explicit.
  4. Some broad background framing about how I think about all the issues under discussion: I believe the transition to posthumanity is likely-but-not-certain to be the most important occurrence of the 21st century. It won't just be a single posthumanity, but a plurality of posthumanities. And so the big question is how to avoid messing it up. Note that this framing does not discount the challenge of things like climate change, AI safety, inequality, and others; rather, I view them as early manifestations of this problem.

In some circles, it is common to downplay the benefits of technology, and see it primarily as a source of dystopia and risk. For the last half century, this often stemmed either from environmental concerns, or from concerns that the benefits will accrue only to the rich, who will entrench their power over the poor… I worry that we have overcorrected, and many people miss the opposite side of the argument: that the benefits of technology are really friggin massive, on those axes where we can measure it the good massively outshines the bad, and the costs of even a decade of delay are incredibly high… The "limits to growth" thesis, an idea advanced in the 1970s arguing that growing population and industry would eventually deplete Earth's limited resources, ended up inspiring China's one child policy and massive forced sterilizations in India. In earlier eras, concerns about overpopulation were used to justify mass murder. And those ideas, argued since 1798, have a long history of being proven wrong. It is for reasons like these that, as a starting point, I find myself very uneasy about arguments to slow down technology or human progress.

  1. I am similarly uneasy about arguments to slow down technology.
  2. It's a mistake to equate progress in technology with human progress. I certainly don't think Buterin believes this, but it's surprisingly common in certain circles. Sometimes, progress may consist of giving up or restricting technology – think of asbestos, leaded gasoline, CFCs, DDT, and many other examples. We humans are actually quite good at deciding to give up dangerous technologies, even when those are associated with enormously profitable industries.
  3. Buterin gives quite a number of examples of the benefits of technology, both in detail, and in statistical aggregates (increased longevity, decreases in child mortality, and so on). I won't quote them, but of course: this is the miracle of technology. One problem with the examples is that such statistical aggregates are hard to directly feel. But you feel them keenly when you talk with someone who has lost a child, or who suffered polio, or malnutrition, and realize that millions or billions of people have been in this position, but the proportion has just kept going down, down, down. In general: I'm struck by how much Buterin leans on Our World in Data. It's a fantastic resource, and suffers badly from underfunding. You can donate here (I'm friendly with several members of the team, and have donated, but have no direct connection).
  4. It seems extremely likely that we're still in the very early days of technology. There are children born into poor households today whose lives in some-though-far-from-all important respects are better than John D. Rockefeller's. This is simply incredible; if we manage the future well, I believe the same thing will apply, mutatis mutandis, in the future. I very much hope that a poor child born in 2100 will have a life of opportunity and plenty that in many ways exceeds what is available to a billionaire today, without threatening the Earth's environment or other people. That may sound unrealistic, but I believe the historical precedent is strong.
  5. The environmental and inequality concerns ought to be taken very seriously. This may be (?) a point at which I would have a different opinion than Buterin. A question I like, and don't have a good answer to: what should the distribution of wealth be? How much inequality begins to demand a rebalancing? These are underspecified questions, but I notice that when I ask people the question, the more uncomfortable ones use that underspecification as an excuse to avoid answering it. I find that discomfort interesting.

AI is fundamentally different from other tech, and it is worth being uniquely careful

A lot of the dismissive takes I have seen about AI come from the perspective that it is "just another technology"… But there is a different way to think about what AI is: it's a new type of mind that is rapidly gaining in intelligence, and it stands a serious chance of overtaking humans' mental faculties and becoming the new apex species on the planet. The class of things in that category is much smaller… it feels like we are walking on much less well-trodden ground.

One way in which AI gone wrong could make the world worse is (almost) the worst possible way: it could literally cause human extinction. This is an extreme claim… This is all a speculative hypothesis, and we should all be wary of speculative hypotheses that involve complex multi-step stories. But even if you're not worried about literal extinction, there are other reasons to be scared as well… Even if we survive, is a superintelligent AI future a world we want to live in? A lot of modern science fiction is dystopian, and paints AI in a bad light. Even non-science-fiction attempts to identify possible AI futures often give quite unappealing answers. And so I went around and asked the question: what is a depiction, whether science fiction or otherwise, of a future that contains superintelligent AI that we would want to live in….

[On Iain Banks' /Culture/] When we look deeper, however, there is a problem: it seems like the Minds are completely in charge, and humans' only role in the stories is to act as pawns of Minds, performing tasks on their behalf.

Quoting from Gavin Leech's "Against the Culture":

The humans are not the protagonists. Even when the books seem to have a human protagonist, doing large serious things, they are actually the agent of an AI. (Zakalwe is one of the only exceptions, because he can do immoral things the Minds don't want to.) "The Minds in the Culture don't need the humans, and yet the humans need to be needed." (I think only a small number of humans need to be needed - or, only a small number of them need it enough to forgo the many comforts. Most people do not live on this scale. It's still a fine critique.)

  1. This is already true today: we are all to a considerable extent dominated by causal forces much larger than ourselves. William Whyte's "The Organization Man" made the point beautifully. We respond to systemic forces whose causal origin is often in systems much larger than any human being. Sometimes it seems reasonable to think of those systems as being where agency resides; sometimes it does not, but that level still seems to be the causal origin. Indeed, perhaps ironically, one of the most common arguments in favour of unfettered work on AGI and ASI is that it is historically inevitable. Anyone justifying their work on AGI with the argument "but what if China did it first?" has already ceded a lot of agency to a superhuman entity.
  2. Tangentially: a question I find interesting is the relationship between narrative plausibility and correctness. Good story is often false; what is true often makes for poor story. It's easy to write science fiction with faster-than-light travel or perpetual motion machines or machines that can solve the halting problem (or equivalents), and so on. And many true things are hard to write convincingly or even in a way that is comprehensible – this is true of many consequences of relativity, for instance. I always think of this when I see idyllic pictures of life on hypothetical future space stations or in future cities: those depictions may be narratively plausible, but in many cases I suspect they violate fairly basic principles of human psychology or economics. Just because you can paint it doesn't mean you can live it. Anyways, this is just to note that I wish I understood better the relationship between story and truth. I will say one strong conviction I have: nature is more imaginative than we. The deep principles that govern the universe seem to be more surprising and interesting than the stories our poor minds come up with, except insofar as those stories are rooted in those principles. I don't understand very well why this is the case; I'm just observing that it is.

A human giving orders to a superintelligent machine would be far less intelligent than the machine, and it would have access to less information. In a universe that has any degree of competition, the civilizations where humans take a back seat would outperform those where humans stubbornly insist on control. Furthermore, the computers themselves may wrest control. To see why, imagine that you are legally a literal slave of an eight year old child. If you could talk with the child for a long time, do you think you could convince the child to sign a piece of paper setting you free? I have not run this experiment, but my instinctive answer is a strong yes. And so all in all, humans becoming pets seems like an attractor that is very hard to escape.

This is plausible, but I believe there are some caveats omitted. Chess computers are vastly better than humans; chess is very competitive in many ways; and yet humans are still in control, and so far the chess computers show no signs of taking over the world championship. I think the quoted paragraph does have some core truth to it, but it needs more work, and has a narrower scope than it seems.

In the twentieth century, modern transportation technology made limitations of distance a much weaker constraint on centralized power than before; the great totalitarian empires of the 1940s were in part a result. In the twenty first, scalable information gathering and automation may mean that attention will no longer be a constraint either. The consequences of natural limits to government disappearing entirely could be dire. Digital authoritarianism has been on the rise for a decade, and surveillance technology has already given authoritarian governments powerful new strategies to crack down on opposition: let the protests happen, but then detect and quietly go after the participants after the fact. More generally, my basic fear is that the same kinds of managerial technologies that allow OpenAI to serve over a hundred million customers with 500 employees will also allow a 500-person political elite, or even a 5-person board, to maintain an iron fist over an entire country. With modern surveillance to collect information, and modern AI to interpret it, there may be no place to hide.

It's tangential, but a question that bothers me a lot is why Apple-Google-Microsoft aren't a lot worse than they are. They're all operating extraordinary panopticons4. When I stop to think about how much of my life is under Apple's eye I'm horrified. I won't pretend everything they do with that knowledge of my life is benign, but they aren't the Stasi either, despite knowing far more about me than the Stasi did about East Germans.

There are standard retorts to this – things like "Apple doesn't have state force" and "Oh, you're voluntarily entering into that agreement with Apple, if you don't like it you can switch to something else". But those answers are in many (not all) ways quite unconvincing. It's true that Apple can't put me in prison, but they can legally do some pretty bad things. And while in principle I can switch or opt out, in practice our society is run in such a way that it's very hard to opt out of AGM entirely5. Anyway, I just want to note this as an ongoing puzzle to me.

Today, the "human in the loop" serves as an important check on a dictator's power to start wars, or to oppress its citizens internally… If armies are robots, this check disappears completely. A dictator could get drunk at 10 PM, get angry at people being mean to them on twitter at 11 PM, and a robotic invasion fleet could cross the border to rain hellfire on a neighboring nation's civilians and infrastructure before midnight.

I'd largely missed this point about starting wars: such actions have historically always required (some degree of) political legitimacy, especially when continued over months or years. The consent of the governed is (mostly?) a good thing, and it applies considerable force even to the actions of dictators. This may go away.

On to Buterin's core summary of the philosophy of technology he's proposing. Incidentally, while I won't cut-and-paste it, it includes an interesting solarpunk-style image.

d/acc: Defensive (or decentralization, or differential) acceleration… Across the board, I see far too many plans to save the world that involve giving a small group of people extreme and opaque power and hoping that they use it wisely. And so I find myself drawn to a different philosophy, one that has detailed ideas for how to deal with risks, but which seeks to create and maintain a more democratic world and tries to avoid centralization as the go-to solution to our problems. This philosophy also goes quite a bit broader than AI, and I would argue that it applies well even in worlds where AI risk concerns turn out to be largely unfounded. I will refer to this philosophy by the name of d/acc. The "d" here can stand for many things; particularly, defense, decentralization, democracy and differential. First, think of it about defense, and then we can see how this ties into the other interpretations.

  1. The labelling is interesting: "d/acc". All the visions which survive have been well labelled. They've also tended to accrue tribes; I don't know quite what I think about that – tribe is often the enemy of good thought. One thing which is fascinating is the extent to which they are tribal affiliations versus coherent philosophies versus areas of study.
  2. I am struck by how many recent (and not-so-recent) visions are all versions of technocratic capitalism: EA, e/acc, d/acc, techno-optimism, progress studies all fit. As Neil Postman put it (in reference to C. P. Snow's "Two Cultures"): "the argument is not between humanists and scientists but between technology and everybody else". This point was also made rather well by H. G. Wells in his vision of the Eloi and the Morlocks. People may benefit enormously from technology, but they may also end up subject to technology. Technology is something done to them, no matter how beneficial, not something they do. This is common across all those visions. Still, Buterin's vision is different in being much less High Modernist.

Defense-favoring worlds help healthy and democratic governance thrive

One frame to think about the macro consequences of technology is to look at the balance of defense vs offense. Some technologies make it easier to attack others, in the broad sense of the term: do things that go against their interests, that they feel the need to react to. Others make it easier to defend, and even defend without reliance on large centralized actors.

A defense-favoring world is a better world, for many reasons. First of course is the direct benefit of safety: fewer people die, less economic value gets destroyed, less time is wasted on conflict. What is less appreciated though is that a defense-favoring world makes it easier for healthier, more open and more freedom-respecting forms of governance to thrive.

An obvious example of this is Switzerland. Switzerland is often considered to be the closest thing the real world has to a classical-liberal governance utopia…

In fact, the combination of ease of voluntary trade and difficulty of involuntary invasion, common to both Switzerland and the island states, seems ideal for human flourishing.

I discovered a related phenomenon when advising quadratic funding experiments within the Ethereum ecosystem: specifically the Gitcoin Grants funding rounds. In round 4, a mini-scandal arose when some of the highest-earning recipients were Twitter influencers, whose contributions are viewed by some as positive and others as negative. My own interpretation of this phenomenon was that there is an imbalance: quadratic funding allows you to signal that you think something is a public good, but it gives no way to signal that something is a public bad. In the extreme, a fully neutral quadratic funding system would fund both sides of a war. And so for round 5, I proposed that Gitcoin should include negative contributions: you pay $1 to reduce the amount of money that a given project receives (and implicitly redistribute it to all other projects). The result: lots of people hated it.

This seemed to me to be a microcosm of a bigger pattern: creating decentralized governance mechanisms to deal with negative externalities is socially a very hard problem. There is a reason why the go-to example of decentralized governance going wrong is mob justice. There is something about human psychology that makes responding to negatives much more tricky, and much more likely to go very wrong, than responding to positives. And this is a reason why even in otherwise highly democratic organizations, decisions of how to respond to negatives are often left to a centralized board.

In many cases, this conundrum is one of the deep reasons why the concept of "freedom" is so valuable. If someone says something that offends you, or has a lifestyle that you consider disgusting, the pain and disgust that you feel is real, and you may even find it less bad to be physically punched than to be exposed to such things. But trying to agree on what kinds of offense and disgust are socially actionable can have far more costs and dangers than simply reminding ourselves that certain kinds of weirdos and jerks are the price we pay for living in a free society.

  1. I wonder: how do innovation and creativity trade off against defense?
  2. I've loved my visits to Switzerland, and have a very positive impression of the country.
  3. Crypto has come in for a lot of derision over the past year or so. But this example nicely illustrates its power: it's enabling experiments (and learning) about different ways of designing markets. (A toy sketch of the negative-contribution mechanism follows this list.)
  4. Market shorts are another example of the point about negative externalities. I don't know that the negative view of shorts is inevitable: the shorts were the heroes of "The Big Short". But that took a lot of narrative skill on Michael Lewis's part. I think Buterin is right about human nature in general abhorring people who take stuff away.
  5. This point about the value of the notion of "freedom" is marvelous, and something I understand only very poorly.
  6. Buterin gives a long list of specific examples of defensive technology, which I (mostly) won't quote here – the list could be made arbitrarily long, of course! But the list is extremely interesting and stimulating as an evocation of "defensive technology", broadly construed.
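
To make the negative-contribution mechanism in the quoted passage concrete, here is a minimal toy sketch of quadratic-funding matching with signed contributions. This is my own illustrative reconstruction, not the actual Gitcoin Grants round 5 formula: the function name qf_match, the project names, the dollar amounts, and the particular way negative signals are netted and floored are all assumptions made for the example.

```python
import math

def qf_match(contributions, matching_pool):
    """Toy quadratic-funding matcher that allows signed contributions.

    `contributions` maps project -> list of per-contributor dollar amounts;
    a negative amount is a "public bad" signal (e.g. paying $1 to reduce a
    project's match). Illustrative sketch only, not the Gitcoin formula.
    """
    raw = {}
    for project, amounts in contributions.items():
        pos = sum(math.sqrt(a) for a in amounts if a > 0)
        neg = sum(math.sqrt(-a) for a in amounts if a < 0)
        # Standard QF: the match grows with (sum of square roots)^2. Here the
        # negative signal is subtracted before squaring, and the resulting
        # subsidy is floored at zero so a project can't owe money.
        signal = max(pos - neg, 0.0)
        direct = sum(a for a in amounts if a > 0)
        raw[project] = max(signal ** 2 - direct, 0.0)

    total = sum(raw.values())
    if total == 0:
        return {project: 0.0 for project in raw}
    # Normalize so the matches exhaust the pool: money withheld from
    # down-voted projects is implicitly redistributed to the others.
    return {project: matching_pool * r / total for project, r in raw.items()}

if __name__ == "__main__":
    example = {
        "infra-tooling": [4.0, 9.0, 1.0],              # three ordinary donors
        "influencer-thread": [25.0, 1.0, -1.0, -1.0],  # two $1 down-votes
    }
    print(qf_match(example, matching_pool=100.0))
    # {'infra-tooling': 100.0, 'influencer-thread': 0.0}
```

Run as-is, the down-voted project's match collapses to zero and the freed matching funds flow to the other project, which is the implicit redistribution Buterin describes.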

There are inevitably going to be imperfections in classifying technologies as offensive, defensive or neutral. Like with "freedom", where one can debate whether social-democratic government policies decrease freedom by levying heavy taxes and coercing employers or increase freedom by reducing average people's need to worry about many kinds of risks, with "defense" too there are some technologies that could fall on both sides of the spectrum. Nuclear weapons are offense-favoring, but nuclear power is human-flourishing-favoring and offense-defense-neutral.

This classification is going to be very tricky, in my opinion. Many countries with nuclear weapons described them as though they were for defensive use. It might seem as though you can solve this problem pretty easily: "if group A with technology T can kill people in group B, then T is offensive"; "if group A with technology T can save people's lives, then it is defensive". But group A can withhold a vaccine (say) and kill people in group B. Does that make the vaccine offensive? Obviously not. But I think classification is surprisingly difficult, because most technologies are in some sense dual use. The question "whose interest is this in?" is often clearer, although even there it's tricky. We see this with misinformation, conspiracy theories and so on: "I'm telling them this for their own good" is something people often genuinely believe. And today's misinformation can be tomorrow's conventional wisdom.

So what are the paths forward for superintelligence?

The above is all well and good, and could make the world a much more harmonious, safer and freer place for the next century. However, it does not yet address the big elephant in the room: superintelligent AI.

The default path forward suggested by many of those who worry about AI essentially leads to a minimal AI world government. Near-term versions of this include a proposal for a "multinational AGI consortium" ("MAGIC"). Such a consortium, if it gets established and succeeds at its goals of creating superintelligent AI, would have a natural path to becoming a de-facto minimal world government. Longer-term, there are ideas like the "pivotal act" theory: we create an AI that performs a single one-time act which rearranges the world into a game where from that point forward humans are still in charge, but where the game board is somehow more defense-favoring and more fit for human flourishing.

The main practical issue that I see with this so far is that people don't seem to actually trust any specific governance mechanism with the power to build such a thing. This fact becomes stark when you look at the results to my recent Twitter polls, asking if people would prefer to see AI monopolized by a single entity with a decade head-start, or AI delayed by a decade for everyone… In nine out of nine cases, the majority of people would rather see highly advanced AI delayed by a decade outright than be monopolized by a single group, whether it's a corporation, government or multinational body. In seven out of nine cases, delay won by at least two to one. This seems like an important fact to understand for anyone pursuing AI regulation.

I have quite a bit of sympathy for this point of view. But: there's a big selection effect. Buterin's Twitter followers tend to be people interested in decentralized alternatives to state-run currency. The fact they're opposed to centralized solutions is of course what one would expect.

The main approach preferred by opponents of the "let's get one global org to do AI and make its governance really really good" route is polytheistic AI: intentionally try to make sure there's lots of people and companies developing lots of AIs, so that none of them grows far more powerful than the other. This way, the theory goes, even as AIs become superintelligent, we can retain a balance of power.

This philosophy is interesting, but my experience trying to ensure "polytheism" within the Ethereum ecosystem does make me worry that this is an inherently unstable equilibrium… My experience within Ethereum is mirrored by learnings from the broader world as a whole, where many markets have proven to be natural monopolies. With superintelligent AIs acting independently of humans, the situation is even more unstable. Thanks to recursive self-improvement, the strongest AI may pull ahead very quickly, and once AIs are more powerful than humans, there is no force that can push things back into balance.

This seems to me likely to be true, and important. I am very skeptical of "multiple poles" arguments. Information economies are very often winner-take-all or winner-take-most.

A happy path: merge with the AIs?

A different option that I have heard about more recently is to focus less on AI as something separate from humans, and more on tools that enhance human cognition rather than replacing it… Directions like [brain-computer interface] are sometimes met with worry, in part because they are irreversible, and in part because they may give powerful people more advantages over the rest of us. Brain-computer interfaces in particular have dangers - after all, we are talking about literally reading and writing to people's minds. These concerns are exactly why I think it would be ideal for a leading role in this path to be held by a security-focused open-source movement, rather than closed and proprietary corporations and venture capital funds. Additionally, all of these issues are worse with superintelligent AIs that operate independently from humans, than they are with augmentations that are closely tied to humans. The divide between "enhanced" and "unenhanced" already exists today due to limitations in who can and can't use ChatGPT.

If we want a future that is both superintelligent and "human", one where human beings are not just pets, but actually retain meaningful agency over the world, then it feels like something like this is the most natural option. There are also good arguments why this could be a safer AI alignment path: by involving human feedback at each step of decision-making, we reduce the incentive to offload high-level planning responsibility to the AI itself, and thereby reduce the chance that the AI does something totally unaligned with humanity's values on its own.

I'm more pessimistic than Buterin, I think. To me, this goes back to technocracy-versus-everyone-else. Technology centralizes control over humanity in the hands of the toolmakers. This is true even of open source approaches. Someone may say "How neat that anyone can hack on [open source project] and modify it to their will"; in fact, only a tiny fraction of the world's population can do this even in principle, and resource limitations mean that an even smaller fraction can do it in practice. In principle my browser is open source and I can modify it; in practice, I'm stuck with what I'm given. The more technology influences our lives – the more posthuman we become – the more true this is. The risk is that humans end up as techno-chattel to the toolmakers, no matter how benevolent.

Now, an interesting retort to this is that maybe this is something we get back from ASI. Tell your ASI to rewrite your browser, and so on. It's an interesting possibility. I suspect we will get some of it back, but not that much. Not just implementing but merely specifying what you want is a deeply technical and virtuosic skill, as any movie director or art director can tell you. And so this will limit our use of ASI to modify the technology that defines our extended selves.

Still, I certainly haven't taken this idea seriously enough yet. The following is pretty compelling:

One other argument in favor of this direction is that it may be more socially palatable than simply shouting "pause AI" without a complementary message providing an alternative path forward. It will require a philosophical shift from the current mentality that tech advancements that touch humans are dangerous but advancements that are separate from humans are by-default safe. But it has a huge countervailing benefit: it gives developers something to do. Today, the AI safety movement's primary message to AI developers seems to be "you should just stop". One can work on alignment research, but today this lacks economic incentives. Compared to this, the common e/acc message of "you're already a hero just the way you are" is understandably extremely appealing. A d/acc message, one that says "you should build, and build profitable things, but be much more selective and intentional in making sure you are building things that help you and humanity thrive", may be a winner.

It's not clear to me it's compelling enough. It's largely a values-based argument, trying to address a public goods problem (safety and security). And those things are still undersupplied. To come back to my earlier question: how does the market need to be modified, exactly? The Covid-19 case is, I think, instructive: the makers of the vaccines captured almost none of the value. The world would actually be better off (long run) if they'd made 10x the profits.

I loved Buterin's conclusion:

We are the brightest star

I love technology because technology expands human potential. Ten thousand years ago, we could build some hand tools, change which plants grow on a small patch of land, and build basic houses. Today, we can build 800-meter-tall towers, store the entirety of recorded human knowledge in a device we can hold in our hands, communicate instantly across the globe, double our lifespan, and live happy and fulfilling lives without fear of our best friends regularly dropping dead of disease.

I believe that these things are deeply good, and that expanding humanity's reach even further to the planets and stars is deeply good, because I believe humanity is deeply good. It is fashionable in some circles to be skeptical of this: the voluntary human extinction movement argues that the Earth would be better off without humans existing at all, and many more want to see a much smaller number of human beings see the light of this world in the centuries to come. It is common to argue that humans are bad because we cheat and steal, engage in colonialism and war, and mistreat and annihilate other species. My reply to this style of thinking is one simple question: compared to what?

Yes, human beings are often mean, but we much more often show kindness and mercy, and work together for our common benefit. Even during wars we often take care to protect civilians - certainly not nearly enough, but also far more than we did 2000 years ago. The next century may well bring widely available non-animal-based meat, eliminating the largest moral catastrophe that human beings can justly be blamed for today. Non-human animals are not like this. There is no situation where a cat will adopt an entire lifestyle of refusing to eat mice as a matter of ethical principle. The Sun is growing brighter every year, and in about one billion years, it is expected that this will make the Earth too hot to sustain life. Does the Sun even think about the genocide that it is going to cause?

And so it is my firm belief that, out of all the things that we have known and seen in our universe, we, humans, are the brightest star. We are the one thing that we know about that, even if imperfectly, sometimes make an earnest effort to care about "the good", and adjust our behavior to better serve it. Two billion years from now, if the Earth or any part of the universe still bears the beauty of Earthly life, it will be human artifices like space travel and geoengineering that will have made it happen.

We need to build, and accelerate. But there is a very real question that needs to be asked: what is the thing that we are accelerating towards? The 21st century may well be the pivotal century for humanity, the century in which our fate for millennia to come gets decided. Do we fall into one of a number of traps from which we cannot escape, or do we find a way toward a future where we retain our freedom and agency? These are challenging problems. But I look forward to watching and participating in our species' grand collective effort to find the answers.

I think it's a terrific vision. To reiterate some of my concerns: I think it undervalues the extent to which technocracy has already taken away agency; it doesn't specify how the market is to be altered; I think it undervalues the plurality of posthumanities, which may make a brighter star; I think it undervalues everyday human actions and ideals and experience. Things I'm curious to ponder more concretely are questions like: to what extent are institutions like OpenAI, Anthropic, Alphabet, VC, EA, philanthropy more broadly, and so on actually aligned with this vision? Where do they differ? How could they be brought into alignment? And which existing institutions really are aligned?

Update, November 30, 2023: Reflecting some more on how I respond to Buterin's essay, it strikes me that I have one major overall concern, a concern which makes me think d/acc needs some creative extension before I'd be willing to embrace it. In prior notes I have discussed the ways we've dealt with threatening technologies in the past:

[…] we deal with this using governance feedback loops. As an example: I grew up thinking humanity would likely suffer tremendously from: acid rain; the ozone hole; nuclear war; peak oil; climate change; and so on. But in each case, humanity responded with tremendous ingenuity – things like the Vienna and Montreal protocols helped us deal with the ozone hole; many ideas (Swanson's Law, carbon taxes, and so on) are helping us deal with climate change; and so on. The worse the threat, the stronger the response; the result is a co-evolution: while we develop destructive technologies, we also find ways of governing them and blunting their negative effects.

This governance feedback loop is a major thing distinguishing us from other animals, and making our extinction less likely. You can view Buterin's vision as one of finding ways to strengthen the governance feedback loop. Of course, I highly approve of this as a goal, and think it's a great call-to-action. My concern is that it's not nearly sufficient. In particular, elsewhere I've discussed recipes for ruin, civilization-threatening, trivial-to-make technologies which we are too ignorant to make today, but which are inherently ungovernable except through ignorance. If such recipes for ruin don't exist – if we live in a fundamentally friendly world – then d/acc seems very sensible to me. But if such recipes for ruin do exist – if we live in what Nick Bostrom has called a vulnerable world – then d/acc seems to me a good start, but does not address a core long-run issue: how to avoid developing knowledge of those recipes for ruin. That ignorance seems likely to me to be the only sensible governance strategy for such recipes.

Summing up my main takeaways from Buterin's essay:

  1. d/acc offers an attractive and optimistic vision of the future development of technology, setting out to manage many genuine tensions which many other visions deny or fail to acknowledge. It also harnesses many of humanity's best institutional, social, and technological inventions.
  2. d/acc leaves open a fundamental question about differential technology development, which is how to make safety and security relatively more profitable. Those both have (in some respects) the character of public goods, and have been historically undersupplied.
  3. d/acc is fundamentally grounded in technocratic capitalism, and does not address the increasing division between those who have agency over technology, and those who do not. I wonder how ASI will change that division.
  4. d/acc does not address the (non)-development of inherently ungovernable technologies: how to avoid recipes for civilizational ruin if we live in a fundamentally vulnerable world.
  5. I don't understand very well how d/acc relates concretely to many existing issues – to particular companies, for instance, or to issues like (say) wealth inequality, and so on. These are worth further reflection.

Footnotes


  1. I can't resist saying: I'd be more skeptical on this front than he is!↩︎

  2. It's interesting to think about an AI safety tax, in response to harms done by AI, perhaps funding a UBI.↩︎

  3. I once had a conversation with the sociologist of science Harry Collins. Harry offended me by comparing scientists to priests or magicians. It took me quite some time to realize how much truth there was in what he was saying. By better understanding the world, scientists are able to produce effects that sometimes appear miraculous to those ignorant of that knowledge.↩︎

  4. At a different level, so are Intel, Nvidia, TSMC, Cisco, Verizon, and so on.↩︎

  5. Cue people explaining their open source setups, which required them to invest a few thousand hours of their time, and required very extensive technical knowledge and ability. In practice this is not an option for most.↩︎