
Sporadica

2024

Framing stories (October 30, 2024)

Framing stories are very common in fiction and film. They're almost never memorable in their own right. I don't much care about old Rose DeWitt Bukater or the treasure hunt (in Titanic), or about weather reporting (in Groundhog Day), or etc etc etc. Often I can barely remember them, even in stories I love. I know there was a framing story in Life of Pi, but I don't remember much – something about a zoo, and going to India from Canada. And I can't say that Rob Reiner (or William Goldman) wasn't correct to throw out most of the framing in the film version of The Princess Bride. I suspect the novel's framing mattered to Goldman – all that stuff about life not being fair, and the contrast to the story. But the framing fades entirely compared to the fairy tale, which is where his heart is.

This doesn't mean framing stories don't have a point. But the point is not actually story per se, it's usually a bridging device to help bring us (the reader or viewer) from our world into the story world. Sometimes the transport is quite memorable – think of Dorothy being transported to Oz, or Lucy going into the wardrobe and emerging in Narnia. But we still don't much care about the framing. It's infrastructure for the story, not story.

I recently enjoyed both Piranesi and I Who Have Never Known Men, and in both cases there's no framing story, though the worlds are strange enough that the authors must have been tempted. But I think they made the right choice by having no framing. Indeed, in Piranesi we have a reverse framing, where eventually we re-enter our "real" world. The fantasy world frames the real world. But it has the effect of making both the fantasy world and the real world much more interesting. I remember the whole story arc. I wonder if Titanic, Groundhog Day, and so on would have been better with no framing?

I said above that I don't care about the framing in Titanic. Upon reflection that's not true. I remember two bits strongly. One is the images of the actual Titanic, today. And the other is of the treasure hunter saying at the end that "I never got it, I never let it in", realizing that despite years of obsession he'd never really understood the story of the sinking. And maybe that framing really is worth it, better than starting with some story about Jack and Rose's lives before boarding the boat. Or perhaps it would have been better to start with Jack saving Rose as she was about to jump? You could make that story work, though probably some flashbacks would be needed, to convey the majesty of the ship milieu.

The novel Cloud Atlas tries to make framing stories work as actual story: it's almost all framing story. Indeed, the central story, "Sloosha's Crossin' an' Ev'rythin' After", is in my opinion the least interesting and memorable of the stories. But this construction was thrown out in the film version, which simply presents six interwoven stories, to much greater effect.

What I've written above is pretty negative about framing stories. It's worth reflecting on what they enable, beyond the bridging already mentioned. I think mostly they give the writer much more narrative control. In something like Star Maker or Last and First Men, Stapledon's framing device lets him go almost anywhere and show us anything. It's similar in The Magician's Nephew and the (common) device of a world-between-worlds. Magic realist stories sometimes achieve the same narrative control, typically by asserting one or more absurdities early on – Karl Marx joins the conversation, or an alien bounty hunter, or etc. Once a reader has accepted that, they can accept a much more flexible narrative. But the writer gives up a lot, too – much of the sense of what Tolkien has called subcreation – living now in a dreamworld, rather than a subcreated real world.

Teaching to discover (June 25, 2024)

I've heard a story that the mathematician Kolmogorov once decided to teach a class on one of Hilbert's (unsolved) problems, with the loose aspirational goal of solving the problem. He hoped this would help organize his approach to the class, while greatly lowering the pain of failure. In the event, Kolmogorov and one of the students in the class, Vladimir Arnold, actually solved the problem.

I've decided to call this pattern "teaching to discover" – running classes with some strong creative agenda driving them. Much of my creative work has been done this way. It's usually fun and extremely rewarding (and, hopefully, good for class members; sometimes those class members become collaborators, like Arnold). I've been doing it nearly all my creative life, but the terminology only occurred to me today. It seems useful to have such terminology.

The phrase is an adaptation of a useful phrase I learned in 2009 from Louise Dennys, then the executive publisher at Random House Canada: "writing to discover". This is also something I frequently do, but it is quite distinct.

2023

What do you think about the idea of a Manhattan Project for AI Safety? (September 14, 2023)

(Much condensed version of a detailed note, which I may clean up and make public.)

Many people have proposed a "Manhattan Project for AI Safety". Precisely what this means differs from proposal to proposal. But a common feature is to suggest governments or philanthropists put aside billions (or more) of dollars for some centralized project meant to "solve safety". The idea is usually to run it out of one location, and to recruit "all the top experts" so they can work on the problem intensively. Variations on this idea have been endorsed, for instance, by Judea Pearl, Gary Marcus, Samuel Hammond, and others. The idea exerts enough hold over people that it's worth thinking carefully about conditions under which a Manhattan Project approach to problems is promising, and the conditions under which it will fail, or even retard progress.

I won't venture a thorough analysis here, but do want to register one narrow opinion: the centralized, top-down, all-encompassing nature of the project seems a terrible idea. Those qualities made sense for the Manhattan Project, since physicists already had good and often very good models of nearly all the relevant physics, and a good idea of how to carry everything out. While much refinement was still necessary, they understood a lot about the broad picture, and a centralized approach made sense. For AI Safety, by contrast, we understand very little about even the basics. If you want to make progress, it's better to foster lots of parallel efforts, pursuing very different ideas, not centralization in one place. (This does not mean some of those efforts shouldn't be quite large, just not all-encompassing.) Even that has problems: there is no clean division between safety and capabilities work, and much safety work is capabilities work. But at least as a basic point about making progress on safety it seems clear to me.

Immediate meta-reflections on an LLM/transformer workshop (August 14, 2023)

Thinking at the mercy of professionals (June 3, 2023)

One alarming thing about social media is that it pits you and your friends against – or, at least, not clearly with – an army of very smart, very well resourced, co-ordinated people who want to help sculpt your attention. They want you to be engaged. Twitter or Instagram or TikTok can employ a thousand behavioural psychologists, marketers, data scientists, product managers, and so on to bend your attention to their will.

I said "against", but that's not quite right. In fact, there may be some overlap between what you consider a good use of your attention and what they do. But there may also be large differences, and that's pretty alarming.

A similar phenomenon occurs in politics. I know a lot of very smart, thoughtful people who are casually engaged in politics, mostly as partisans of the Democratic party in the US. And periodically I'll note that they strongly believe something that upon investigation turns out to be plain wrong. (They also believe a lot of things which seem to be correct.)

The reason, as far as I can tell, is pretty much the same as in the case of social media: they are unknowingly engaged in an adversarial war with a very large, very smart group of people who are (collectively) carefully sculpting misleading narratives about the world. This has nothing to do with the Democratic party, of course – it just happens most of my friends are Democrats – if you're a casual partisan of any political group your thoughts tend to be somewhat at the mercy of the professionals within that group.

These effects hold even if most of the professionals are great people with a commitment to truth – indeed, I believe this is true of most, though certainly not all, people in most political parties, and of most, though certainly not all, people at social media companies.

I wonder how much both effects sculpt my own thinking. I'll bet it's a lot more than I realize, and likely quite a bit.

Reflections on "task lists" in pure research (June 3, 2023)

The manager's or engineer's orientation [*] is toward a list of tasks; the creative researcher's is toward a list of emotional provocations and half-baked hunches.

I've sometimes composed a task list, and then found it getting badly in the way of my work. No amount of "improvement" of the list changes the situation: the problem seems to be with having a task list at all. When that's the case it usually means I'm better off with a list of weird-ass provocations which I feel strongly about.

Furthermore: "strong" can have almost any valence. Fear or anger often work just as well as inspiration or curiosity or fascination! (Though I would be miserable if that's mostly what my list contained.)

And you don't check 'em off. You just keep going back for fuel. (Well, sometimes they lose their emotional valence.)

This doesn't mean I don't use task lists. In fact, there's a lot of chop wood, carry water type work, and I find task lists invaluable there. But there's also a lot of stuff where if I'm on-task, I'm somehow failing. And I need to keep the primary orientation toward weird hunches, things which fascinate me, strange connections, and so on, not toward the task list.

(Inspired by a conversation with Sebastian Bensusan in late 2020.)

[*] This is all in the vein of "consider a spherical cow". I'm not a manager or engineer, and I certainly can't speak for all creative researchers. It's a fun speculative model.

Musings on the early history of molecular nanotech and quantum computing (June 3, 2023)

A question I've wondered a fair bit about: why did quantum computing take off somewhat smoothly, while molecular nanotechnology did not?

There's many similarities:

Some differences:

The story I've heard (over and over and over) from early MNT believers is that the combination of these last two points, plus the Smalley critique, killed MNT. I doubt it. In 1993, say, I think MNT was in a significantly better state than QC. But I think the big differences were likely:

I think a pretty fair summary of much of the last 40 years of physics and chemistry and biology is that they've been working toward molecular nanotechnology. But they've done so piecemeal, bottom-up, motivated by taking the next experimental steps: improve control here a bit, improve sensitivity there, what new things can we control, what new things can we sense? That journey has been absolutely astoundingly successful. And it puts us at a point where MNT (or something more powerful) starts to look like a sensible top-down goal.

The severe limits of process, and the taboo around taste (June 3, 2023)

Discussions of metascience often focus heavily on process. Identifying better granting strategies, better incentives, better values, better hiring practices and so on. The qualities of individuals and the actual details of human knowledge are often treated as a black box. It's a curious focus, because process only takes you so far. You can't bureaucratize your way to blazing insight; there are no scaled-out chain Michelin three star restaurants. We don't have a production line for Albert Einsteins. And I rarely hear the limits of process discussed explicitly.

I'm reminded of a feature common in discussions of taste: there's often a tacit polite assumption that participants in the discussion all have good taste. This seems highly unlikely in most cases; the quality of taste varies so wildly between people. This is often the unspoken high-order-bit in such conversations. I suspect it would often be helpful to have a frank and very concrete discussion of how each person thinks about their own taste, not trying to come to a Kumbaya "we all have good taste" agreement, but rather identifying points of stark disagreement between participants, and the reasons for those disagreements.

Postscript: As a practical matter you must believe in your own taste. After all: you have no-one else's judgment to fall back on. Of course, you may elect to defer to others (sometimes good, sometimes bad), but in so doing you are still exercising taste in who you defer to. It's a bit like the fact that having no exercise program is, in fact, a choice of exercise program. Of course, you may choose to try to improve your taste. But that's another subject!

Outlining

As a kid I was told often in class to "write an outline". We'd practice it. And I could never really make it work. As an adult, I sometimes hear from researchers and writers I admire that they make outlines. And I wonder: how do they do it?

Like seemingly everyone else in the world, I'm currently writing an essay on AI existential risk. The section I'm finishing right now discusses some problems with the commonly-used terms "artificial general intelligence" and "artificial superintelligence". And it sometimes feels tedious to write: "Oh, this should be easy, why am I bothering, this has all been said before". Except, when I look at other discussions of this point, I think many are outright wrong, and none seem fully satisfactory.

And when I actually write, I can see why: I'll write a sentence that seemed obvious, then realize it's a little off, not quite true. Then realize that it's wrong in an important way. And I'll improve my understanding, and write something more true. And it'll change the whole rest of the argument, sometimes in subtle ways, sometimes completely. Not just later bits of the argument, either: often it changes everything. And the thing is: I can only discover this by getting right down into the guts of the issue [1]. If I'd outlined, I would have had to throw out my outline at this point. And it happens over and over and over and over and… you get the idea. The trouble with outlining is that writing is a transformative process; I write in order to transform my own understanding. And no outline survives contact with such transformation. If an outline "worked" it would mean my understanding hadn't been transformed; while the outline would have worked, the writing would have failed [2].

(This is why I'm also somewhat suspicious of tedious-seeming topics. Sometimes that means you should omit the topic. Sometimes, though, there's an opportunity waiting: you have an illusion of understanding caused by not really having understood at every level of abstraction. And what you want is to break that illusion, improving your understanding.)

So: I'm not a fan of outlines. I do, however, do something closely adjacent. I sketch a lot. I'll braindump many rough ideas, organize them, put them in hierarchies, riff on them, mash them up, try opposites, try the weirdest stuff I can, try the most conventional stuff I can. I suppose it looks quite a bit like outlining. But it's at every level of abstraction, from the highest right down to very extended riffing on points that might not merit a single sentence in a shippable essay. Sometimes parts of this sketching eventually function as an outline. But that's an accidental byproduct of improved understanding. The outline isn't the point; the improved understanding is.

Nine Observations About ChatGPT

  1. Using ChatGPT (especially with GPT4) is an open-ended skill. You can get much, much better at it, in an open-ended way. It's much like learning to play guitar: you can get a lot better.
  2. Getting better is best done with a lot of imagination and experimentation and learning from others.
  3. Many of the people saying "Oh, it's no good for anything" are merely revealing that they're no good at it.
  4. Most of the patterns of use will be discovered socially. Even if no more chatbots were ever released, I believe we'd still be getting better at using this tool in 20 years' time.
  5. One thing I like very much is that it teaches you to ask better questions, and to be curious. This is a key skill for any human being to develop, and ChatGPT provides an environment in which you can get much better at asking questions, very rapidly. I'm especially pleased for people who are not much rewarded for asking such questions in their current lives.
  6. Much of the benefit is in expanding intent. I am a hobbyist programmer who will occasionally write a hundred-line script. That'll typically take me a few hours, unless I'm already familiar with all the relevant libraries. Now I "write" such scripts far more often, often taking 30-60 minutes, with ChatGPT. It's gradually expanding the range of things I consider doing; again, it's expanding my ability to ask good questions.
  7. Another example of expanded intent – many more could be given – I will go out for a walk, and brainstorm aloud (in a voice recognition app), then get ChatGPT to extract and clean up all kinds of information. As I do this I'm finding that I'm giving ChatGPT more and more verbal instructions: "Oh ChatGPT, that point was really important, can you make sure to highlight it?" [3] (A rough sketch of this kind of extraction step appears after this list.)
  8. I'm surprised how much I want to thank it: "Oh, great job ChatGPT!"
  9. Scolds will point out drawbacks in all of the above points. E.g., if this were the only way to learn to ask questions, that would be bad. But it seems to me that a scold views such commentary as an end in itself; sometimes, they merely seem to be enjoying the opportunity to parade their superiority. Wise use treats criticism as a tool in service of creative growth and development, not as a primary end.
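
To make observation 7 above a little more concrete, here's a minimal sketch of the kind of extraction step I mean, assuming the official openai Python client; the file name, model name, and prompt are purely illustrative, not a recommendation:

```python
# A minimal sketch (not a recommendation) of using a chat model to clean up a
# dictated brainstorm, as in observation 7 above. Assumes the official `openai`
# Python client; the file name, model, and prompt are purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("walk-brainstorm-transcript.txt") as f:  # hypothetical transcript file
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any capable chat model would do
    messages=[
        {
            "role": "system",
            "content": (
                "You will be given a rambling spoken-word brainstorm. "
                "Extract the distinct ideas as a bulleted list, and "
                "highlight any point the speaker flags as important."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```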

Four observations about DeepMind

  1. From the outside, it appears they have a thesis, a set of beliefs that give them a sustained competitive advantage: (a) AI can be an enormously powerful tool for solving fundamental problems; (b) the window of time for demonstrating that includes today; (c) the benefits of a non-traditional structure (capital + compute + large groups of experts working together) will enable them to solve problems which academic groups who believe (a) and (b) cannot; (d) any given project may fail, and so they need sufficient scale and commitment to take a portfolio approach.
  2. It's interesting they have identified and (apparently) deeply believe in a very high-leverage thesis. Academic research groups often don't, relying instead on simply finding a niche, or adopting a generalized strategy (work harder! work smarter! raise more money!).
  3. The thesis could, of course, have been wrong. I expect it involved a lot of work to find, and a lot of reflection and self-belief to execute on, especially before they had achieved much in the way of success. It's tempting to say it's "obviously" true now, but it was far less obvious a decade ago, when it took real confidence to actually execute: raising capital, recruiting people, and so on.
  4. Now that the thesis has been shown successful, its differentiating power is much less. Less, but I suspect still not nil, and related theses may be differentiators. OpenAI and Anthropic have related-but-different theses.

AI overload / interface shock (02-26-2023)

Just musing about the next few years of AI systems. Even that short term seems likely to get pretty weird, never mind the long term.

It's challenging to think about, for many reasons. In part because: there's a lot of noise right now, due to the influx of capital and concomitant hype and grifters and naysayers. And despite the hype it's still hard to point to a crucial widely-used product where AI is inarguably the decisive element. But it still seems important to try to understand.

Of course, the future isn't yet written: it's up to us to collectively decide. But it's fun to come up with both generative and analytic models to think about what may happen. Here's one such model. It's not meant as a prediction, just a useful model for thinking about possible futures.

It's this: there will be a widespread sense of AI overload or interface shock [4]: the shock and sense of confusion and bewilderment which occurs as a rapidly growing chunk of society has to keep learning and relearning new AI systems, over and over… and over and over and over and over and over and over.

Copilot, Midjourney, DALL-E, StableDiffusion, GPT3, ChatGPT, Claude, GPT4, and Bing are the very beginning.

Indeed, they're mostly still curiosities, not yet near-essential for knowledge work. But it seems plausible that their near-term successors will be near-essential for knowledge work, perhaps even much more important than Google search is today. Getting the most out of those systems will be like learning to play an easy-to-pick-up musical instrument: satisfying for beginners, but increasing mastery will pay increasing returns. The result: a strong incentive to get better with such systems; and a sense that one should be getting better, indeed, even that one must get better. Furthermore: there's going to be a rapidly changing cast of such systems, over more and more domains; and those systems won't be fixed targets: they will rapidly co-evolve with their respective user bases. It's as though the musical instrument will change and mutate, as fast as you're learning it.

Something like this already happens to programmers: they suffer a kind of API overload: every year, they must pick up a steady stream of frameworks and libraries. I've heard many programmers talk about how overwhelming (and sometimes bewildering) it is. Only it won't just be programmers feeling this overwhelm and bewilderment: this type of interface shock will spread widely, to everyone for whom mastery of such AI systems offers a real advantage in their lives. And it will be done under twin emotional shadows: threats to livelihood; and a perhaps even deeper sense of identity threat, as people re-evaluate their feelings about intelligence and its role in their sense of self. In this model, what we think of as intelligence may change significantly: it will move from "solving the problem" directly to "rapidly exploring and mastering interfaces". A similar change has already occurred in programming, but it'll be across a much broader class of creative and knowledge work.

The poverty of Moral Systems (01-20-2023)

In a recent set of notes I wrote of my general suspicion of deductive Moral Systems, and in favor of a more exploratory / bricolage approach. I characterized this as being about two competing traditions. One tradition is focused on developing Moral Systems:

…this seems inspired in many ways by mathematics. Figure out some pretty reasonable basic axioms or models. And then try to explore and understand their consequences. It's not quite proving theorems – the Systems are rarely unambiguous enough for that. But "making arguments" to figure out what is right. And then there's the second tradition, which is what 99.99% of human beings use: just muddling through, talking with friends and family, watching other people, trying to figure out how to live in the world, to be a good person, to do right by others and by oneself.

I think the second tradition is usually much more powerful and reliable. And the reason is that the world is immensely complicated, and as a result experience is much richer than any such System. It's a situation where, for now, simply exploring reality is in many respects far more challenging than such a System. It's the difference between attempting to deduce biology from a few simple ideas, and determinedly exploring the biosphere. The actual biosphere – the biosphere we can explore – is immensely complicated, and exploring it has (so far) been much more rewarding than attempting to understand things from theoretical first principles. With that said, a benefit of the Moral Systems is that one can push them in ways you can't (easily) in the world. Clever thought experiments, unusual questions – those are genuine generative benefits. But while I have no doubt that generates novel moral ideas, I have my doubts about whether it generates reliable moral insight.

That's a lot of throat-clearing to say: I'm fundamentally very suspicious of any strong notion of a "Moral System" at all, or even of notions like consistency and implication in such a System… many people want to take them very seriously as a basis for extended lines of reasoning and action. I think that's usually a mistake, resulting in complex (and sometimes selfishly motivated) justifications for actions which would seem obviously wrong to any intelligent 10 year old. When I say this, people interested in Moral Systems sometimes want to debate: such debate seems to me (with rare exceptions) a bad use of my time. They're insisting on using the first tradition, rather than the second. And it's much healthier to relate primarily to people's actual experiences, and only secondarily to Moral Systems or theories of what is good.

Upon reflection, I regret the missed opportunity to name this phenomenon: it's pointing out the poverty of Moral Systems. In particular: the poverty compared with exploring (moral) reality. Deduction and consistency are fine tools, but they're only a small part of what is needed, and if you insist on relying on them, you're in very poor shape. Emerson was right: a foolish consistency is the hobgoblin of small minds; yet many Moral Systems seem to rely on consistency as a basic value.

Honesty and fear (01-12-2023)

It is very hard to be honest when you are afraid of the opinion of your community. This occurs all the time in life. Perhaps most often: it is difficult to ask the basic question or express basic ignorance or confusion when it seems that everyone else knows more than you. It's a constant temptation in my work: the pretense of understanding, the omniscient view, when I am ignorant of so much, even things others may regard me as knowledgeable about, or where I feel I "should" know. It's embarrassing to say "I don't know" or to express my naive opinion, especially when it seems at odds with what my peers perhaps expect. But provided it's done with humility, this is often where growth lies.

2022

Technological progress is instrumental; actual progress is internal to human experience (10-07-2022)

In a book review on his website "The Roots of Progress", Jason Crawford writes:

Through maybe the 1950s, visions of the future, although varied, were optimistic. People believed in progress and saw technology as taking us forward to a better world. In the span of a generation, that changed, with the shift becoming prominent by the late 1960s. A “counterculture” arose which did not believe in technology or progress: indeed, a major element of the counterculture was the environmentalist movement, much of which saw technology and industry as actively destroying the Earth.

Later in the review he states, apparently approvingly, that "social activism [like that done by the environmentalist movement] is a drain on human capital".

It's a curious point of view, which seems to equate technological advance with progress, and considers any inhibition of such advance as a drain on progress. What makes it curious is that most environmentalists of my acquaintance also believe in progress, in the sense that they want a better life for the next generation, and have ideas about how best to achieve it. Indeed, it was due in part to such activists that modern environmental legislation like the Clean Air Act was passed. Such legislation has likely saved millions of lives, and considerably improved billions of lives. It's had a cost: I have no doubt such legislation has inhibited technological development. Maybe that cost is worse than the harm it has prevented. But it's at least plausible that the benefit has been far more than worth the cost. That is, it's plausible that such regulations and activism are a milestone in human progress, which should be celebrated, not decried as "a drain on human capital".

Technology is often instrumentally useful for improving human lives, but it's not intrinsically good as an end in itself. Ultimately progress is internal: an improvement in the quality of human lives and experience; it doesn't reside directly in technology at all. In that sense, the values expressed by (just to pick two examples) the Sermon on the Mount or the abolitionist movement arguably represent a more intrinsic form of progress than any technology, because they more directly change human experience. More broadly, our most imaginative story-creators and moral entrepreneurs and artists and activists have contributed enormously to human progress.

Of course, science and technology are extremely important enablers of progress. It's far easier to live a good life when you have abundant food and medicine; when you have good housing, and so on. I'm merely making a point about what seems to me some (mistaken) fundamental assumptions I've seen advocated. I'm sympathetic to Effective Altruism's approach of "attempt[ing] to do for the question 'what is the good?' what science has done for the question 'how does the world work?'. Instead of providing an answer [the EA Community] is developing a community that aims to continually improve the answer." Of course, arguably that's what everyone already thinks they're doing for progress: the gung ho technologists and the anti-technology Luddites all think they're arguing for the "correct" form of progress.

Looking at the above, I'm dissatisfied. It seems fine as far as it goes, but badly incomplete. A key thing about science and technology is that they provide a free lunch: as our understanding improves it increases human power and ability to act. That's not always a good thing: certain technologies are mostly just bad. But many seem on net to be positive (albeit with some negative effects). For this reason I can't say I'm a fan of the precautionary principle; it seems mostly like status quo bias. And while I'm very pro science and technology, I instinctively recoil from a certain type of myopic technologist who is always in favor of new tech. But I don't yet have a better principled way of thinking about these things.

The role of "explanation" in AI (09-30-2022)

There's a critique of current work on AI expressed as variations on the argument: "Look, some such systems are impressive as demos. But the people creating the systems have little detailed understanding of how they work or why. And until we have such an understanding we're not really making progress on AI." This argument is then sometimes accompanied by (often rather dogmatic) assertions about what characteristics science "must" have.

I have some instinctive sympathy for such arguments. My original field of physics is full of detailed and often rather satisfying explanations of how things work. So too, of course, are many other fields. And historically new technologies often begin with tinkering and intuitive folk models, but technological progress is then enabled by greatly improved explanations of the underlying phenomena. You can build a sundial with a pretty hazy understanding of the solar system; to build an atomic clock requires a deep understanding of many phenomena.

Work on AI appears to be trying to violate this historic model of improvement. Yes, we're developing what seem to be better and better systems in the tinkering mode. But progress in understanding how those systems work seems to lag far behind. Papers often contain rather unconvincing just-so "explanations" of how the systems work (or were inspired). But the standards of such explanation are often extremely low: they really are just-so stories. Witnessing this, some people conclude that work in AI is not "real" scientific progress, but is rather a kind of mirage.

But I wonder. I'm inclined to suspect we're in a Feyerabendian "Anything Goes" moment here, where prior beliefs about how science "must" proceed are being overthrown. And we'll wonder in retrospect why we held those prior beliefs.

The underlying thing that's changed is the ease of trying and evaluating systems. If you wanted to develop improved clocks in the past you had to laboriously build actual systems, and then rigorously test them. A single new design might take months or years to build and test. Detailed scientific understanding was important because it helped you figure out which part of the (technological) design space to search in. When each instance of a new technology is expensive, you need detailed explanations which tell you where to search.

By contrast, much progress in AI takes a much more agnostic approach to search. Instead of using detailed explanations to guide the search, it uses a combination of: (a) general architectures; (b) trying trillions (or more) of possibilities, guided by simple ideas (like gradient descent) for improvement; and (c) the ability to recognize progress. This is a radically different mode of experimentation, only made possible by the advent of machines which can do extremely rapid symbol manipulation.

In the clock case it'd be analogous to having a universal constructor capable of extremely rapidly trying trillions of different clock designs, recognizing progress in timekeeping, and iteratively making tiny improvements. I'm not sure "trillions" would be nearly enough there – three-dimensional material design is very combinatorially demanding! But the point should be clear: at some scale of trying new designs, a new mode of progress would likely be possible, quite different from prior modes, and perhaps violating prior theories of how progress "should" happen. People would complain about these new horologists and how they didn't understand what they were doing; in the meantime, the new horologists would be building clocks of unprecedented accuracy.
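
As a toy illustration of this search-and-recognize mode – everything below is invented for illustration, not a model of real clock design or of how modern AI systems are actually trained – the pattern is just: propose many candidates cheaply, keep whatever scores better, and repeat:

```python
# Toy illustration of "search-and-recognize": propose many candidate designs,
# keep whichever scores best, and make small local improvements. The "clock"
# here is a made-up function of three parameters; nothing about it is a real
# model of horology or of AI training.
import random

def timing_error(design):
    # Invented score function standing in for "recognizing progress":
    # lower is better, with an (unknown to the search) optimum at (1.0, -2.0, 0.5).
    a, b, c = design
    return (a - 1.0) ** 2 + (b + 2.0) ** 2 + (c - 0.5) ** 2

def random_design():
    return [random.uniform(-5, 5) for _ in range(3)]

def perturb(design, scale=0.1):
    return [x + random.gauss(0, scale) for x in design]

best = random_design()
best_error = timing_error(best)

for step in range(100_000):
    # Mix blind exploration with small local tweaks to the current best.
    candidate = random_design() if step % 10 == 0 else perturb(best)
    error = timing_error(candidate)
    if error < best_error:          # "recognizing progress"
        best, best_error = candidate, error

print(f"best design: {best}, timing error: {best_error:.6f}")
```

The only point of the sketch is that, once evaluating a candidate is cheap, blind proposal plus a way of recognizing progress can substitute for a detailed explanatory theory of the design space.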

It's easy to critique this story. You can come back with: "Oh, but search-and-recognize will inevitably be bottlenecked at crucial points that need genuine understanding". But: (a) notice how this has shifted from an argument-of-principle to an argument-of-practice citing ad hoc bottlenecks; and (b) the people making this argument don't seem to be the ones actually building better systems, which is the argument-of-practice we ultimately care about. I'm sympathetic to it, but only up to a very limited point.

Postscript on intent: No intent to claim originality here. And I certainly don't expect this to convince anyone who doesn't want to be convinced. This is just me working a few thoughts out.

Stated versus revealed preference for risk

People at maybe half a dozen science funders have told me variations on: "Oh, I'd love to fund more high-risk projects, but scientists won't submit them, even though I strongly encourage it. They're so conservative!"

I am near certain that at each of those funders a safe, incremental grant application is far more likely to be funded than something that's actually high risk. The funders are a bit like people who tell their local grocer that they want green beans, but only ever purchase tomatoes. A sensible grocer would rapidly stop offering the beans, and just set out the tomatoes.

If funders genuinely want high-risk projects, they must ensure such projects are more likely to be funded than low-risk projects. They need unsuccessful applicants who submitted low-risk work to walk away feeling "gosh, my project wasn't daring enough".

None of this addresses when and whether risk is good, which is a separate issue. I'm just observing that there often seems to be a large gap between stated and revealed preferences. They don't purchase what they say they want to buy.

The things you can't say (07-03-2022, revised 07-13-2022)

There's a set of generative questions that I find unusually helpful: the rude questions. It's to ask myself: what would be considered rude or offensive or in bad taste to ask here? What would people be upset about if it were true? It's difficult to be clear here, since there is a related idea that I (emphatically) am not talking about: I don't mean in the standard political sense or about social issues, where people who fancy themselves as bold truth-tellers are often just harking back to tired old prejudices. I mean something very different: when you start to become deeply familiar with a nascent field, you can start to ask yourself "what would really bother me if it were false?" or "what would really upset the presumed applecart here?" I find the most useful versions of the question are "what am I most afraid of here?" and "what would be considered offensive or rude here?" I don't mean offensive in a personal sense, I mean it in a sense of violating shared intellectual presumptions.

Meaning as the violation of expectations

I find probabilistic language models surprisingly irritating in some ways. Surely a big part of thinking is to create meaning, by finding ways of violating expectations. The language models can seem instead like ways of rapidly generating nearly content-free cliches, not expectation-violating meaning.

Who will own AI?

An observation: many (not all) of the top AI labs are run by people who do not seem to be themselves top talent at working directly on the technical aspects of AI. I'm a little surprised by this. When truly challenging technical problems are being solved, there is usually unique advantage in being best on the technical problem solving side. The ability to tell a story, obtain capital, and assemble people with essentially commodity skills is, by comparison, rather commonplace. This is why Intel and Google were started by very capable technical people, not by business people who had determined that they should own semiconductors or search.

(A clarification: Page and Brin were not technical in the sense the Valley sometimes means it – they weren't great commodity engineers, capable of pouring out code. But they understood the technical problem of search better, in some ways, than anyone else in the world. The latter is the sense in which I mean technically strong.)

It's interesting to contrast with quantum computing. Some quantum computing startups are run by very strong technical people. And others are run by people whose capabilities lie in raising money, telling stories, hiring, and so on. So far, the first type seems to be doing much better, at least, as measured by progress toward building a quantum computer.

I am, as I say, a little surprised by the way things have unfolded with AI. It may be that I've simply misunderstood the situation. Certainly, I'm sure Demis Hassabis is extremely technically capable (DeepMind's early hiring speaks to that, for instance). But overall it suggests to me that many of the strongest technical people don't understand how strong a position they are in. It's true: many know they would be unhappy running things, and quite rightly don't put up their hands to do so. But some subset would enjoy running things too. I won't be surprised to see more splinter labs, started by strong technical people.

Curious: would you vote for any of the people running the top AI labs? Can you imagine them as great and wise leaders of history?

The note-taking impulse (04-03-2022)

Something I find difficult to understand about much of the current vogue for note taking is that it often seems to be done devoid of any genuine creative sense. When I work I'm usually (ultimately) trying to make something shippable. It'll be sourced in my own curiosity, but there's also a strong sense of trying to make something of value for others, even if it's only a few people. It's both satisfying and a useful test of whether I'm actually making something of value.

By contrast, some (not all) people's note taking seems to be oriented toward writing notes that don't have much apparent use for anyone, including themselves. They're not building to anything; indeed, sometimes there just seems to be a sense that they "should" be taking notes. I'm reminded of many of the people who complain about memory systems; often, they don't actually have any good use for them. It's like cargo culting creative work.

I've put all that as strongly as I can. It's doubtless unfair. But it does really puzzle me. It often seems like elaborately practicing the piano in order to play chopsticks, or scales.

Of course, none of this is any of my business. But the reason I got interested is that writing is such an incredibly powerful tool to improve one's own thinking. The note-taking vogue seemed like it should be part of that. But: are people writing great screenplays or books or essays or making important discoveries with crucial help from these tools? I'd love to know who, if so!

I think it's distinctly possible that they're simply being used in ways I don't instinctively understand. For instance, I tend to find engineering and company-building difficult to understand. While I certainly admire the people doing these things well, I have little impulse in these directions myself. And so maybe the tools are useful there, but I simply don't see how. That'd also be interesting.

Addendum: This isn't a very firmly held opinion. My sense is that there ought to be some immensely powerful use for such systems. But most actual uses don't seem to me to be very interesting. I suspect I'm simply not seeing the interesting uses.

The tool-building impulse (04-01-2022)

I'm currently reading a book about a tool builder/connoisseur. It's quite a good book, but it's written by someone who doesn't really grok, in their gut, why people build tools.

I don't find it easy to say in words what that gut sense is. But let me attempt anyway. It's a sense of creating a new kind of freedom for people. You make this thing, and it enables new opportunities for action. You're expanding everyone's world, including your own. That's really intoxicating!

I'm more naturally a scientist, someone who understands, than a tool-builder. But I have enough of the impulse that I feel comfortable opining.

AI safety: an oversimplified model (03-17-2022)

A very oversimplified 3-level model of AI Safety. Certainly not intended to be new or especially insightful, it's just helpful for me to write down to make my thinking a tiny bit more precise.

  1. The narrow alignment problem, which seems to be where a lot of technical AI safety work focuses ("let's make sure my nuke only goes off when I want it to go off, explodes where I want it to explode etc"). For AGI this has the additional challenge (beyond the nuke analogy) that human intent is often ambiguous and difficult to debug. We think we want A; then realize we want A'; then A'' etc; then we discover that even once it's clear to us what we want, it's hard to express, and easy to misinterpret. This problem is a combined generalization of interpersonal communication and program debugging, with the additional problem of massive unexpected emergent effects thrown in (of the type "we didn't realize that optimizing our metrics would accidentally lead to authoritarian dictatorships").

  2. But even if you can magically solve the narrow alignment problem, you still have the problem that evil actors will have bad intent ("let's make sure bad guys / bad orgs / rogue nations don't have nukes"). In this case, Moore's Law & progress in algorithms mean that if anyone has AGI, then pretty soon everyone (left) will have AGI. In this sense, AI safety is a special case of eliminating evil actors.

  3. The problem of agency: if AGIs become independent agents with goals of their own, and they're (vastly) more powerful than humans, then humanity as a whole will be in the standard historic situation of the powerless next to the powerful. When the powerful entities' goals overlap with the powerless, usually the powerful entities get what they want, even if it hurts the powerless.

No doubt there's all kinds of things wrong with this model, or things omitted. Certainly, I'm very ignorant of the thinking about this. Still, I have the perhaps mistaken sense that much work on AI safety is just nibbling round the edges, that the only thing really likely to work is to do something like non-proliferation right now. That's hard to do – many strong economic and defense and research interests would, at present, oppose it. But it seems like the natural thing to do. Note that ridicule is a common tactic for dismissing this argument: proof-of-impossibility-by-failure-of-imagination. But it's not much of an argument. To be clear: I don't think the standard AI safety arguments are very good, either. But they don't need to be: this is a rare case where I'm inclined to take the precautionary principle very seriously.

"You should"

Reasonably often someone will tell me: "You should [do such-and-such a project]".

It's well meant. But it rankles a surprising amount. Often the suggestion is for a project that would take several years. Often it's a suggestion for a project that seems dull or poorly conceived in some way. And the underlying presumption seems to be that I have a dearth of project ideas and a surfeit of time. (Last time I checked, my list of project ideas numbered about 5,000.)

My suggestion is: if you like the idea enough to suggest I spend my time on it, you should do it yourself. And if you don't want to, you should reflect on why not.

In the meantime, a less irritating phrasing is: "A really fun project I'd love to see someone do is…"

2021

A brief note on Melodysheep's "Timelapse of the Entire Universe" (12-25-2021)

I enjoyed Melodysheep's beautiful video, "Timelapse of the Entire Universe". It's a visualization of the entire history of the universe, 22 million years per second. All of humanity's history occupies less than one tenth of a second at the end of a nearly 11-minute video.

There is a transition moment 9 minutes and 38 seconds into the video where it becomes about the emergence of complex lifeforms on Earth. While I enjoyed the first 9 minutes and 38 seconds, I found the final minute glorious. A few thoughts about what changes in that minute:

Putting the onus on the visuals and music is a daring creative choice. The visuals, in particular, need to reward scrutiny. In many respects they're not literally "accurate". Yet they are strongly evocative of something immensely important, and hard to access. I found it very beautiful and moving.

Affect in science for a wide audience (12-24-2021)

Reflecting on the affective quality of presentations of science for a wide audience, especially television and YouTube.

A few common affects I dislike:

The affective qualities I enjoy and admire: curious, appreciative, exploring, humorous, trying to deepen understanding, while aware our understanding is usually very incomplete, even when we're very confident. The best work often reveals hidden beauty, making unexpected relationships vivid. This is very enlarging.

Fortunately, it's a golden age for work with these qualities.

There's a common boundary case that particularly bugs me. Carl Sagan's "Cosmos" has a fair sprinkling of the flowery affect ("billions and billions" etc). But it's usually part of the expression of a core idea which is deeply insightful. Many other people use a similar flowery affect – sometimes taken to 11 – but with much weaker core ideas. And that just doesn't work.

On the invention of discovery, and of experiment

(Riffing on my read of David Wootton's book "The Invention of Science")

In chess, a grandmaster and a total beginner may well play the same move, or even the same series of moves, in a position. And yet the underlying reasons for the action – the theory and context of the moves – will be completely different.

Magnus Carlsen playing pawn to e4 is not the same – not remotely the same! – as me playing pawn to e4.

An analogous phenomenon occurs with many actions: ostensibly the "same" action may have quite a different meaning, due to the surrounding context. And sometimes we want a verb (or its corresponding noun) to refer to the literal action, and sometimes we want a verb/noun to refer to the fuller context.

I've been thinking about this in connection with the notions of discovery and of experiment. It's difficult to make precise the sense in which they are modern notions. Certainly, I have no doubt that even going back to prehistoric times people occasionally carried out literal actions very similar to what we would today call discovery or experiment. Yet, while there may well be a literal similarity to what we call discovery or experiment today, the fuller context was very different.

In both cases, there are differences in both the surrounding theory (how people think about it, what it means), and in the surrounding context.

I won't try to enumerate all those – it could easily be a book!

But I am particularly fascinated by the irreversible nature of discovery – a discovery in the modern sense must involve a near-irreversible act of amplification, so knowledge is spread around the world, becoming part of our collective memory.

This may accidentally have occurred in pre-historic times – it probably happened with fire, for instance. But today we have many institutions much of whose role is about making this irreversible act of amplification happen.

Reflecting on what I've learned from David Wootton's "The Invention of Science" (12-21-2021)

Perhaps the most important thing so far is that a major part of the scientific revolution was that discovery became near-irreversible. Many people prior to Brahe or Galileo or etc may have had some of their insights. But those insights for the most part did not spread throughout the culture.

This spread was accomplished by a combination of novel technologies (journals, the printing press, citation) + a new political economy (norms about citation, priority, replicability, and the reputation economy). The result was a powerful amplifier for new ideas. And amplification is a peculiarly one-way process: it's difficult to put an idea genie back in the bottle.

I forget who said it (Merton?) but the discoverer of something is not the first person to discover it, but the last, the person whose discovery results in the spread of the idea throughout the culture. That requires an idea amplifier, an irreversible process by which an idea becomes broadly known.

Information overload as a consequence of the benefits of information (12-21-2021)

A common story is that information overload is the result of too much knowledge being produced too quickly. It harks back to an (imagined, nostalgic) history in which much less knowledge was being produced.

There's an error implicit in this story.

"Information overload" is fundamentally a feeling. If you're a person with little interest in or use for ideas and information, you won't feel much information overload.

But suppose you're a person who benefits a great deal from ideas. Every extra hour you spend reading or imbibing good lectures or [etc] is of great benefit. This may be true especially if you're in a competitive intellectual field, a field where mastery of more information gives you a competitive advantage. You'll naturally feel a great deal of pressure to choose well what you read, and to read more. That's the feeling of information overload.

In other words, the feeling of information overload isn't produced by there being too much knowledge. It's produced by the fact that spending more time imbibing knowledge may produce very high returns, creating pressure to spend ever more time on it; furthermore, there is no natural ordering on what to imbibe.

One reason this matters is because people often think that if they use just the right tool or approach or system, information overload will go away. In fact, better tooling often improves the returns to imbibing information, and so can actually increase information overload. This isn't universally true – tools which increase your sense of informational self-efficacy may reduce the sense of overload. But there's more than a grain of truth to it.

(Parenthetically, let me point to Ann Blair's book Too Much to Know, an account of information overload before the modern age.)

The "experimental program" as a first-class object in how we improve our understanding of the world (12-21-2021)

There is a caricature of the history of science in which the notion of comparing theory to experiment originated in the 16th century (often ascribed to Bacon). Of course, this is a (gross) caricature and oversimplification; obviously, our prehistoric ancestors learned from experience! I'd be shocked if it's not possible to draw pretty much a straight line from such an ancestor (or, say, a mystic like Pythagoras) to a modern scientist with very sophisticated ideas about how they devise their experimental program.

This discrepancy bugs me. It's been bugging me since I first heard the story told, probably at about age 7 or 8. There's something ever-so-slightly off.

I suspect that the main transition is in having an experimental program at all, some notion (a theory!) of how to explore nature. Talk to a scientist and they'll have hundreds of ideas for possible experiments, detailed thoughts on costs and benefits and variations and failure modes and so on. What's new in the modern era is the understanding that it's worth thinking about those things. Put a different way: we've always done experiments. It's just that in modern science an experimental program – a set of ideas about how to explore nature – has become a first-class object in how we improve our understanding of the world.

Internalized goals, not written goals, are valuable (12-21-2021)

A lot has been written about the value of written goals. Increasingly, I think that's a mistake. Goals are only valuable if they've become deeply internalized, part of you. Writing is helpful insofar as it helps achieve that end.

The reflexive nature of funding work on risk (11-27-2021)

In an ideal world, funders would use our best understanding of how to reason about risk in order to make decisions about what research to fund. (We are, of course, in nothing remotely like this world, with funders seemingly mostly using a pre-modern understanding of risk.) An amusing aspect to this situation: of course, the research they fund might then actually change our understanding of how best to think about risk; it would, in an important respect, be reflexive.

Creative workers should design their own filing system (11-25-2021, revised 11-27-2021)

I'm always shocked by the returns on doing this. It's one value of using org-mode: instead of using the computer's file system (which you have very limited control over), you can design your own. And, more importantly, re-design and re-design and re-design that filing system. Both design and re-design are actually creative acts. And they're surprisingly important.

One imagined book title that particularly amuses me is "Better Living Through Filing". In some sense, though, David Allen already wrote the book, though he titled it "Getting Things Done". Not aimed at creative workers, however.

The surprising value of merging files (11-25-2021)

I'm sure this is obvious to many people, but it's something I only discovered recently: the unexpected value of collecting up (and often merging) files on closely related subjects.

There are certain topics I come back to over and over again, in files spread all over my hard disk. Systematically collecting them up and in many cases merging them has been surprisingly helpful.

This is something we do routinely in the world of physical objects (all the socks in the sock drawer). The value in the world of expressions of creative thought seems to be at least as great.

Programmable money and smart contracts are powerful ideas, and so are likely here to stay (11-21-2021)

There's this strange, huge fault line around the question: is crypto real? Lots of believers, shouting "this is the future". Lots of haters, shouting "it's all a scam". But the question itself is mostly a mistake. A better fulcrum question is: are decentralized programmable money and smart contracts here to stay? In particular, would a functioning system of programmable money and smart contracts enable extraordinary (and very valuable) new forms of co-ordination behavior? I believe the answer is obviously yes. This doesn't mean many or all of today's cryptocurrencies won't fade out; but it does mean the future will almost certainly involve some descendant of some of these ideas.
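
As a toy sketch of what I mean by programmable money as a co-ordination mechanism – this is plain Python standing in for the idea, not any real blockchain, contract language, or API – consider an escrow whose rules are enforced by code rather than by a trusted intermediary:

```python
# Toy caricature of an escrow "smart contract": funds are released only when
# both parties confirm, and refunded after a deadline otherwise. Plain Python,
# purely to illustrate the co-ordination pattern, not any real system.
class Escrow:
    def __init__(self, buyer, seller, amount, deadline):
        self.buyer, self.seller = buyer, seller
        self.amount, self.deadline = amount, deadline
        self.confirmed = {buyer: False, seller: False}
        self.settled = False

    def confirm(self, party):
        if party in self.confirmed:
            self.confirmed[party] = True

    def settle(self, now):
        if self.settled:
            return None
        if all(self.confirmed.values()):
            self.settled = True
            return ("pay", self.seller, self.amount)    # both agreed: pay seller
        if now > self.deadline:
            self.settled = True
            return ("refund", self.buyer, self.amount)  # deal fell through: refund
        return None                                     # still pending

escrow = Escrow("alice", "bob", amount=100, deadline=50)
escrow.confirm("alice")
escrow.confirm("bob")
print(escrow.settle(now=10))   # ('pay', 'bob', 100)
```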

Tech : basic research (11-17-2021)

In tech, capacity to exert power (or act) is fundamental; understanding is instrumental.

In basic research, understanding is fundamental; the capacity to exert power or act is instrumental.

In each sphere I occasionally meet people who seem not only to believe their point of view is self-evidently correct, but to find it almost unbelievable that anyone could believe otherwise. But it seems to me that both are largely (collectively held) values.

Interesting as an analogue to ancient Rome : ancient Greece.

The relationship between research and writing (11-17-2021)

When writing any kind of essay or book (research or not), I find that I (nearly always) have to write the first draft in linear order.

This is in strong tension with research, where you are trying to improve your understanding as much as possible. That's not something that can be done in linear order. It's more of a stochastic upward spiral. So it makes sense to bounce backward and forward. You're trying to write snippets that (you hope) will be in the final piece, and trying to find pieces to improve wherever possible.

Ultimately, a work of research requires some strong core insight, some important idea or piece of evidence that is new. Usually you don't have that when you start. And so you are exploring, digging down, trying to understand, unearthing and trying to crystallize out partial nuggets of understanding, until you feel you really have some strong core insight, a foundation for a written report. At that point you can attempt the linear draft.

Memory systems as a way of concentrating your experience (10-11-2021)

In everyday life an astounding number of things happen. A tiny few seem really significant. Memory systems let you distill those things out, so that you will return to them again and again. They're a way of concentrating your experience.

Feedback rules, selection effects, end states (Oct 2021)

What feedback rule does a person, organization, or ecosystem follow to govern change and learning and growth? In particular, what does that feedback rule select for? What it selects for determines much about what one gets.

An infinite number of examples may be given.

(1) Does the political media follow a feedback rule that rewards improvements in the quality of people's understanding of government? No, except incidentally and in certain narrow ways; as far as I can tell, people who follow political media often end up more informed on a small subset of (very narrow) issues, and less informed in many crucial ways. In particular, they're much more likely to believe misconceptions that serve the feedback rule. This applies strongly to all parties, and requires little in the way of dishonesty, just ordinary muddleheadedness.

An example which I personally find amazing: a surprising number of people (including people in the media) genuinely believe Facebook caused Trump 2016. This is despite the fact that Trump spent only a small fraction of his budget on Facebook, and most of that late in the election cycle. The mainstream media did far more to cause Trump. I don't mean just Fox, I mean CNN-NYT-etc-etc-etc, the entire set, including, of course, a significant role for Fox. It's certainly true that Facebook played a role, at the margin (as did many, many things). But a much more significant effect was Trump knowing how to manipulate the mainstream media. And the mainstream media seem to have no way of understanding that – it's not inside the feedback loop that governs how they change. In fact, quite the reverse: Trump almost certainly drove revenue for them; they are incented to have a candidate like Trump. Most members of the media seem to understand this point – it's been emphasized by some of the most prominent executives – but then don't connect it to the fact that "Facebook caused Trump" is a false narrative. It's not that they're lying. They believe the narrative because of systemic incentives.

I realize the last paragraph will be treated by many as evidence I'm off my rocker. I'm certainly not trying to say Facebook didn't play a role; but it was one of many factors; it was almost certainly a much smaller role than the mainstream media; and the mainstream media doesn't understand this, in considerable part because of their incentives. Certainly, this story violates conventional narratives. And I haven't provided a detailed argument, merely my conclusions. Maybe I'm wrong. But I don't think so. And when you hear most people try to give their argument for why Facebook caused Trump, it quickly dissolves into assertions which are either wrong, or based on extremely weak evidence⁶.

(2) In universities, grant overhead is inside the feedback loop. And the result is that universities systematically select for whatever grant agencies select for. This is a (massive) centralizing force. It's weak in any individual instance. But over decades the effect is cumulative, and enormous. It's too strong to say research is centrally controlled, but there are certainly strong tendencies in that direction.


  1. The modern approach to optical quantum computing has – as far as I follow it these days! – its origins in a fusion of optics with the cluster-state model of quantum computation. This was noticed by me and (independently) by Yoran and Reznik, 20 or so years ago. I don't know how Yoran and Reznik discovered this fusion was possible, but for me a crucial element was noticing a coincidence between two sets of bases for a particular vector space. Those bases didn't have any particular reason to be related, as far as I knew, but they were the same, and I realized I could use that to make universal quantum computation possible. I apologize for the self-indulgent story, but for me it's a prototypical example of how tiny details can be utterly crucial. There was no way I could have anticipated this fusion in advance, or planned it top-down. Rather, I simply noticed it one day – I remember exactly where I was sitting, and what I was doing – and 2 minutes later I was certain I could cut six (and probably many more) orders of magnitude off the complexity of optical quantum computing.↩︎

  2. Something similar is true with research students: it's always just a tiny bit disappointing if they come back having done what they said they were going to do. Ideally, you want them to have done something surprising.↩︎

  3. Sadly, it's not clear it recognizes personal references like this. But it's useful to me, as part of my emotional makeup.↩︎

  4. The terms are, of course, by analogy with the terms "information overload" and "future shock". The latter term was coined by Alvin and Heidi Toffler to describe the sense of confusion when society begins to change sufficiently rapidly.↩︎

  5. Bill Bryson did this beautifully in "A Short History of Nearly Everything".↩︎

  6. Happy to hear explanations of what I've missed. But you better have an explanation for why Trump spent so comparatively little on Facebook.↩︎