Prefatory remarks on 'Oppenheimer'

Michael Nielsen
Astera Institute
August 30, 2023

I was recently asked to speak briefly before a private screening of the movie "Oppenheimer". The audience contained many people who work on AI, and the organizer asked me to speak because I have worked both on AI and also at Los Alamos National Laboratory (in 1997 and 1998). Although my work at Los Alamos was unrelated to weapons, I did get to know some nuclear physicists a little, including one of the early pioneers of the hydrogen bomb. The following is a cleaned-up and slightly extended version of my remarks. It is, of course, far too short to do justice to the topic.

I was at a party recently, and happened to meet a senior person at a well-known AI startup in the Bay Area. They volunteered that they thought "humanity had about a 50% chance of extinction" caused by artificial intelligence. I asked why they were working at an AI startup if they believed that to be true. They told me that while they thought it was true, "in the meantime I get to have a nice house and car".

This was an unusually stark and confronting version of a conversation I've had several times. Certainly, I often meet people who claim to sincerely believe (or at least seriously worry) that AI may cause significant damage to humanity. And yet they are also working on it, justifying it in ways that sometimes seem sincerely thought out, but which all too often seem self-serving or self-deceiving. (I've also, of course, had many conversations with AI people who sincerely believe safety is a minor problem; I won't engage with that point of view today.) People working on AI but deeply worried about AI risks seem to be in a situation akin to that of participants in the Manhattan Project. What moral choices are available to someone working on a technology they believe may have very destructive consequences for the world?

Part of what makes the Manhattan Project interesting is that we can chart the arcs of moral thinking of multiple participants. It's tempting to adopt a stance of mostly judging those participants, but I think it's more useful to focus first on empathetically understanding what the arc was, from the inside; in the light of such understanding we may be better able to decide our own relationship to science and technology. Here are four caricatures. They are historically incomplete, but nonetheless useful as models to consider in the light of AI.

Today, of course, people often justify the Manhattan Project using after-the-fact reasoning: by appeal to the "success" of MAD³; as saving American or Japanese lives in ending the war; or as simply inevitable for humanity. These arguments are certainly worth consideration. But for most participants in the Manhattan Project these reasons seemed to play little or no role in their early thinking, and so I don't think they're much help in understanding their choices.

I was asked today to comment on connections between the Manhattan Project and AI. I found myself surprisingly uncomfortable with the request: my own thinking is still forming, and I'm not yet confident in what choices to espouse. But that said, I strongly believe it's beneficial to try to understand deeply the many very different choices taken by people involved in nuclear weapons research, and to reflect on what that means for our personal involvement in technology today.


Thanks to Jason Benn for the invitation to speak, and to Laura Deming for many generous related conversations, and much insight on this topic.


  1. See, for instance, Oppenheimer's farewell address when he left Los Alamos, November 2, 1945, not yet 3 months after the bombing of Hiroshima and Nagasaki:

  2. Rotblat has written a superb brief memoir:

  3. An incredibly short-sighted point of view, in my opinion. We've had nuclear weapons less than 80 years – less than a single human lifespan! Get back to me if MAD works for a thousand years! It'll still be a drop in the bucket of humanity, of course, but at least a multi-lifespan time period will be rather more interesting.