Sporadica

The things you can't say (07-03-2022, revised 07-13-2022)

There's a set of generative questions that I find unusually helpful: the rude questions. The idea is to ask myself: what would be considered rude or offensive or in bad taste to ask here? What would upset people if it were true? It's difficult to be clear here, since there is a related idea that I (emphatically) am not talking about: I don't mean this in the standard political sense or about social issues, where people who fancy themselves bold truth-tellers are often just harking back to tired old prejudices. I mean something very different: when you start to become deeply familiar with a nascent field, you can start to ask yourself "what would really bother me if it were false?" or "what would really upset the presumed applecart here?" I find the most useful versions of the question are "what am I most afraid of here?" and "what would be considered offensive or rude here?" I don't mean offensive in a personal sense; I mean it in the sense of violating shared intellectual presumptions.

Meaning as the violation of expectations

I find probabilistic language models surprisingly irritating in some ways. Surely a big part of thinking is to create meaning, by finding ways of violating expectations. The language models can seem instead like ways of rapidly generating nearly content-free cliches, not expectation-violating meaning.

Who will own AI?

An observation: many (not all) of the top AI labs are run by people who do not seem to be themselves top talent at working directly on the technical aspects of AI. I'm a little surprised by this. When truly challenging technical problems are being solved, there is usually unique advantage in being best on the technical problem solving side. The ability to tell a story, obtain capital, and assemble people with essentially commodity skills is, by comparison, rather commonplace. This is why Intel and Google were started by very capable technical people, not by business people who had determined that they should own semiconductors or search.

(A clarification: Page and Brin were not technical in the sense the Valley sometimes means it – they weren't great commodity engineers, capable of pouring out code. But they understood the technical problem of search better, in some ways, than anyone else in the world. The latter is the sense in which I mean technically strong.)

It's interesting to contrast with quantum computing. Some quantum computing startups are run by very strong technical people. And others are run by people whose capabilities lie in raising money, telling stories, hiring, and so on. So far, the first type seems to be doing much better, at least, as measured by progress toward building a quantum computer.

I am, as I say, a little surprised by the way things have unfolded with AI. It may be that I've simply misunderstood the situation. Certainly, I'm sure Demis Hassabis is extremely technically capable (DeepMind's early hiring speaks to that, for instance). But overall it suggests to me that many of the strongest technical people don't understand how strong a position they are in. It's true: many know they would be unhappy running things, and quite rightly don't put up their hands to do so. But some subset would enjoy running things too. I won't be surprised to see more splinter labs, started by strong technical people.

Curious: would you vote for any of the people running the top AI labs? Can you imagine them as great and wise leaders of history?

The note-taking impulse (04-03-2022)

Something I find difficult to understand about much of the current vogue for note-taking is that it often seems to be done devoid of any genuine creative sense. When I work I'm usually (ultimately) trying to make something shippable. It'll be sourced in my own curiosity, but there's also a strong sense of trying to make something of value for others, even if it's only a few people. It's satisfying, and it's also a test of whether I'm actually doing something useful (or not).

By contrast, some (not all) people's note-taking seems to be oriented toward writing notes that don't have much apparent use for anyone, including themselves. They're not building to anything; indeed, sometimes there just seems to be a sense that they "should" be taking notes. I'm reminded of many of the people who complain about memory systems; often, they don't actually have any good use for them. It's like cargo-culting creative work.

I've put all that as strongly as I can. It's doubtless unfair. But it does really puzzle me. It often seems like elaborately practicing the piano in order to play chopsticks, or scales.

Of course, none of this is any of my business. But the reason I got interested is because writing is such an incredibly powerful tool to improve one's own thinking. The note-taking vogue seemed like it should be part of that. But: are people writing great screenplays or books or essays or making important discoveries with crucial help from these tools? I'd love to know who, if so!

I think it's distinctly possible that they're simply being used in ways I don't instinctively understand. For instance, I tend to find engineering and company-building difficult to understand. While I certainly admire the people doing these things well, I have little impulse in these directions myself. And so maybe the tools are useful there, but I simply don't see how. That'd also be interesting.

Addendum: This isn't a very firmly held opinion. My sense is that there ought to be some immensely powerful use for such systems. But most actual uses don't seem to me to be very interesting. I suspect I'm simply not seeing the interesting uses.

The tool-building impulse (04-01-2022)

I'm currently reading a book about a tool builder/connoisseur. It's quite a good book, but it's written by someone who doesn't really grok, in their gut, why people build tools.

I don't find it easy to say in words what that gut sense is. But let me attempt anyway. It's a sense of creating a new kind of freedom for people. You make this thing, and it enables new opportunities for action. You're expanding everyone's world, including your own. That's really intoxicating!

I'm more naturally a scientist, someone who understands, than a tool-builder. But I have enough of the impulse that I feel comfortable opining.

AI safety: an oversimplified model (03-17-2022)

A very oversimplified 3-level model of AI safety. It's certainly not intended to be new or especially insightful; it's just helpful for me to write it down, to make my thinking a tiny bit more precise.

  1. The narrow alignment problem, which seems to be where a lot of technical AI safety work focuses ("let's make sure my nuke only goes off when I want it to go off, explodes where I want it to explode, etc"). For AGI this has the additional challenge (beyond the nuke analogy) that human intent is often ambiguous and difficult to debug. We think we want A; then realize we want A'; then A'', etc; then we discover that even once it's clear to us what we want, it's hard to express, and easy to misinterpret. This problem is a combined generalization of interpersonal communication and program debugging, with the additional problem of massive unexpected emergent effects thrown in (of the type "we didn't realize that optimizing our metrics would accidentally lead to authoritarian dictatorships").

  2. But even if you can magically solve the narrow alignment problem, you still have the problem that evil actors will have bad intent ("let's make sure bad guys / bad orgs / rogue nations don't have nukes"). In this case, Moore's Law and progress in algorithms mean that if anyone has AGI, then pretty soon everyone (left) will have AGI. In this sense, AI safety is a special case of the problem of eliminating evil actors.

  3. The problem of agency: if AGIs become independent agents with goals of their own, and they're (vastly) more powerful than humans, then humanity as a whole will be in the standard historic situation of the powerless next to the powerful. When the powerful entities' goals overlap with those of the powerless, usually the powerful entities get what they want, even if it hurts the powerless.

No doubt there are all kinds of things wrong with this model, or things omitted. Certainly, I'm very ignorant of the existing thinking about this. Still, I have the perhaps mistaken sense that much work on AI safety is just nibbling round the edges, and that the only thing really likely to work is to do something like non-proliferation right now. That's hard to do – many strong economic and defense and research interests would, at present, oppose it. But it seems like the natural thing to do. Note that ridicule is a common tactic for dismissing this argument: proof-of-impossibility-by-failure-of-imagination. But it's not much of an argument. To be clear: I don't think the standard AI safety arguments are very good, either. But they don't need to be: this is a rare case where I'm inclined to take the precautionary principle very seriously.

"You should"

Reasonably often someone will tell me: "You should [do such-and-such a project]".

It's well meant. But it rankles a surprising amount. Often the suggestion is for a project that would take several years. Often it's a suggestion for a project that seems dull or poorly conceived in some way. And the underlying presumption seems to be that I have a dearth of project ideas and a surfeit of time. (My list contained about 5,000 project ideas, last time I checked.)

My suggestion is: if you like the idea enough to suggest I spend my time on it, you should do it yourself. And if you don't want to, you should reflect on why not.

In the meantime, a less irritating phrasing is: "A really fun project I'd love to see someone do is…"

A brief note on Melodysheep's "Timelapse of the Entire Universe" (12-25-2021)

I enjoyed Melodysheep's beautiful video, "Timelapse of the Entire Universe". It's a visualization of the entire history of the universe, 22 million years per second. All of humanity's history occupies less than one tenth of a second at the end of a nearly 11-minute video.

There is a transition moment 9 minutes and 38 seconds into the video where it becomes about the emergence of complex lifeforms on Earth. While I enjoyed the first 9 minutes and 38 seconds, I found the final minute glorious. A few thoughts about what changes in that minute:

Putting the onus on the visuals and music is a daring creative choice. The visuals, in particular, need to reward scrutiny. In many respects they're not literally "accurate". Yet they are strongly evocative of something immensely important, and hard to access. I found it very beautiful and moving.

Affect in science for a wide audience (12-24-2021)

Reflecting on the affective quality of presentations of science for a wide audience, especially television and YouTube.

A few common affects I dislike:

The affective qualities I enjoy and admire: curious, appreciative, exploring, humorous, trying to deepen understanding, while aware that our understanding is usually very incomplete, even when we're very confident. The best work often reveals hidden beauty, making unexpected relationships vivid.[1] This is very enlarging.

Fortunately, it's a golden age for work with these qualities.

There's a common boundary case that particularly bugs me. Carl Sagan's "Cosmos" has a fair sprinkling of the flowery affect ("billions and billions" etc). But it's usually part of the expression of a core idea which is deeply insightful. Many other people use a similar flowery affect – sometimes taken to 11 – but with much weaker core ideas. And that just doesn't work.

On the invention of discovery, and of experiment

(Riffing on my read of David Wootton's book "The Invention of Science")

In chess, a grandmaster and a total beginner may well play the same move, or even the same series of moves, in a position. And yet the underlying reasons for the action – the theory and context of the moves – will be completely different.

Magnus Carlsen playing pawn to e4 is not the same – not remotely the same! – as me playing pawn to e4.

An analogous phenomenon occurs with many actions: ostensibly the "same" action may have quite a different meaning, due to the surrounding context. And sometimes we want a verb (or its corresponding noun) to refer to the literal action, and sometimes we want a verb/noun to refer to the fuller context.

I've been thinking about this in connection with the notions of discovery and of experiment. It's difficult to make precise the sense in which they are modern notions. Certainly, I have no doubt that even going back to prehistoric times people occasionally carried out literal actions very similar to what we would today call discovery or experiment. Yet, while the literal actions may have been similar, the fuller context was very different.

In both cases, there are differences in both the surrounding theory (how people think about it, what it means), and in the surrounding context.

I won't try to enumerate all those – it could easily be a book!

But I am particularly fascinated by the irreversible nature of discovery – a discovery in the modern sense must involve a near-irreversible act of amplification, so knowledge is spread around the world, becoming part of our collective memory.

This may accidentally have occurred in prehistoric times – it probably happened with fire, for instance. But today we have many institutions much of whose role is to make this irreversible act of amplification happen.

Reflecting on what I've learned from David Wootton's "The Invention of Science" (12-21-2021)

Perhaps the most important thing so far is that a major part of the scientific revolution was that discovery became near-irreversible. Many people prior to Brahe, Galileo, and the rest may have had some of their insights. But those insights for the most part did not spread throughout the culture.

This spread was accomplished by a combination of novel technologies (journals, the printing press, citation) + a new political economy (norms about citation, priority, replicability, and the reputation economy). The result was a powerful amplifier for new ideas. And amplification is a peculiarly one-way process: it's difficult to put an idea genie back in the bottle.

I forget who said it (Merton?) but the discoverer of something is not the first person to discover it, but the last, the person whose discovery results in the spread of the idea throughout the culture. That requires an idea amplifier, an irreversible process by which an idea becomes broadly known.

Information overload as a consequence of the benefits of information (12-21-2021)

A common story is that information overload is the result of too much knowledge being produced too quickly. It harks back to an (imagined, nostalgic) history in which much less knowledge was being produced.

There's an error implicit in this story.

"Information overload" is fundamentally a feeling. If you're a person with little interest in or use for ideas and information, you won't feel much information overload.

But suppose you're a person who benefits a great deal from ideas. Every extra hour you spend reading or imbibing good lectures or [etc] is of great benefit. This may be especially true if you're in a competitive intellectual field, a field where mastery of more information gives you a competitive advantage. You'll naturally feel a great deal of pressure to choose well what you read, and to read more. That's the feeling of information overload.

In other words, the feeling of information overload isn't produced by there being too much knowledge. It's produced by the fact that spending more time imbibing knowledge may produce very high returns, creating pressure to spend ever more time on it; furthermore, there is no natural ordering on what to imbibe.

One reason this matters is that people often think that if they use just the right tool or approach or system, information overload will go away. In fact, better tooling often improves the returns to imbibing information, and so can actually increase information overload. This isn't universally true – tools which increase your sense of informational self-efficacy may reduce the sense of overload. But there's more than a grain of truth to it.

(Parenthetically, let me point to Ann Blair's book "Too Much to Know", an account of information overload before the modern age.)

The "experimental program" as a first-class object in how we improve our understanding of the world (12-21-2021)

There is a caricature of the history of science in which the notion of comparing theory to experiment originated in the 16th century. (Often ascribed to Bacon). Of course, this is a (gross) caricature and oversimplification; obviously, our prehistoric ancestors learned from experience! I'd be shocked if it's not possible to draw pretty much a straight line from such an ancestor (or, say, a mystic like Pythagoras), to a modern scientist with very sophisticated ideas about how they devise their experimental program.

This discrepancy bugs me. It's been bugging me since I first heard the story told, probably at about age 7 or 8. There's something ever-so-slightly off.

I suspect that the main transition is in having an experimental program at all, some notion of a theory (!) of how to explore nature. Talk to a scientist and they'll have hundreds of ideas for possible experiments, detailed thoughts on costs and benefits and variations and failure modes and so on. What's new in the modern era is the understanding that it's worth thinking about those things. Put a different way: we've always done experiments. It's just that in modern science an experimental program – a set of ideas about how to explore nature – has become a first-class object in how we improve our understanding of the world.

Internalized goals, not written goals, are valuable (12-21-2021)

A lot has been written about the value of written goals. Increasingly, I think that's a mistake. Goals are only valuable if they've become deeply internalized, part of you. Writing is helpful insofar as it helps achieve that end.

The reflexive nature of funding work on risk (11-27-2021)

In an ideal world, funders would use our best understanding of how to reason about risk in order to make decisions about what research to fund. (We are, of course, in nothing remotely like this world, with funders seemingly mostly using a pre-modern understanding of risk.) An amusing aspect of this situation: of course, the research they fund might then actually change our understanding of how best to think about risk; it would, in an important respect, be reflexive.

Creative workers should design their own filing system (11-25-2021, revised 11-27-2021)

I'm always shocked by the returns on doing this. It's one value of using org-mode: instead of using the computer's file system (which you have very limited control over), you can design your own. And, more importantly, re-design and re-design and re-design that filing system. Both design and re-design are actually creative acts. And they're surprisingly important.

One imagined book title that particularly amuses me is "Better Living Through Filing". In some sense, David Allen already wrote the book, though he titled it "Getting Things Done". Not aimed at creative workers, however.

The surprising value of merging files (11-25-2021)

I'm sure this is obvious to many people, but it's something I only discovered recently: the unexpected value of collecting up (and often merging) files on closely related subjects.

There are certain topics I come back to over and over again, in files spread all over my hard disk. Systematically collecting them up and in many cases merging them has been surprisingly helpful.

This is something we do routinely in the world of physical objects (all the socks in the sock drawer). The value in the world of expressions of creative thought seems to be at least as great.
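
One way to make this concrete: a minimal Python sketch that gathers every plain-text note mentioning a recurring topic and concatenates them into a single file for hand-merging. The directory, topic keyword, and output filename here are all hypothetical, and this isn't a tool described above; it's just an illustration of the collecting-up step.

    # Minimal sketch (hypothetical paths and topic): collect plain-text notes
    # that mention a topic, and concatenate them into one file for hand-merging.
    from pathlib import Path

    notes_dir = Path("~/notes").expanduser()          # wherever the scattered files live
    topic = "memory systems"                          # the recurring subject to collect up
    merged = notes_dir / "merged-memory-systems.txt"  # destination for the merged notes

    collected = []
    for path in sorted(notes_dir.rglob("*.txt")):
        if path == merged:
            continue                                  # don't re-ingest the merged file itself
        text = path.read_text(errors="ignore")
        if topic.lower() in text.lower():
            collected.append((path, text))

    with merged.open("w") as out:
        for path, text in collected:
            out.write(f"==== {path} ====\n{text}\n\n")

    print(f"Collected {len(collected)} files into {merged}")

The point of the separator lines is to keep each fragment's origin visible, so the actual merging (deciding what to keep, what to rewrite) stays a deliberate, creative act rather than an automatic one.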

Programmable money and smart contracts are powerful ideas, and so are likely here to stay (11-21-2021)

There's this strange, huge fault line around the question: is crypto real? Lots of believers, shouting "this is the future". Lots of haters, shouting "it's all a scam". But the question itself is mostly a mistake. A better fulcrum question is: are decentralized programmable money and smart contracts here to stay? In particular, would a functioning system of programmable money and smart contracts enable extraordinary (and very valuable) new forms of co-ordination behaviour? I believe the answer is obviously yes. This doesn't mean many or all of today's cryptocurrencies won't fade out; but it does mean the future will almost certainly involve some descendant of some of these ideas.

Tech : basic research (11-17-2021)

In tech, capacity to exert power (or act) is fundamental; understanding is instrumental.

In basic research, understanding is fundamental; the capacity to exert power or act is instrumental.

In each sphere I occasionally meet people who not only seem to believe their point of view is self-evidently correct, but who find it almost unbelievable that anyone could believe otherwise. But it seems to me that both are largely (collectively held) values.

Interesting as an analogue to ancient Rome : ancient Greece.

The relationship between research and writing (11-17-2021)

When writing any kind of essay or book (research or non), I find that I (nearly always) have to write the first draft in linear order.

This is in strong tension with research, where you are trying to improve your understanding as much as possible. That's not something that can be done in linear order. It's more of a stochastic upward spiral. So it makes sense to bounce backward and forward. You're trying to write snippets that (you hope) will be in the final piece, and trying to find pieces to improve wherever possible.

Ultimately, a work of research requires some strong core insight, some important idea or piece of evidence that is new. Usually you don't have that when you start. And so you are exploring, digging down, trying to understand, unearthing and trying to crystallize out partial nuggets of understanding, until you feel you really have some strong core insight, a foundation for a written report. At that point you can attempt the linear draft.

Memory systems as a way of concentrating your experience (10-11-2021)

In everyday life an astounding number of things happen. A tiny few seem really significant. Memory systems let you distill those things out, so that you will return to them again and again. They're a way of concentrating your experience.

Feedback rules, selection effects, end states (Oct 2021)

What feedback rule does a person, organization, or ecosystem follow to govern change and learning and growth? In particular, what does that feedback rule select for? What it selects for determines much about what one gets.

An infinite number of examples may be given.

(1) Does the political media follow a feedback rule that rewards improvements in the quality of people's understanding of government? No, except incidentally and in certain narrow ways; as far as I can tell, people who follow political media often end up more informed on a small subset of (very narrow) issues, and less informed in many crucial ways. In particular, they're much more likely to believe misconceptions that serve the feedback rule. This applies strongly to all parties, and requires little in the way of dishonesty, just ordinary muddleheadedness.

An example which personally I find amazing: a surprising number of people (including people in the media) genuinely believe Facebook caused Trump 2016. This is despite the fact that Trump spent only a small fraction of his budget on Facebook, and most of that late in the election cycle. The mainstream media did far more to cause Trump. I don't mean just Fox, I mean CNN-NYT-etc-etc-etc, the entire set, including, of course, a significant role for Fox. It's certainly true that Facebook played a role, at the margin (as did many, many things). But a much more significant effect was Trump knowing how to manipulate the mainstream media. And the mainstream media seem to have no way of understanding that – it's not inside the feedback loop that governs how they change. In fact, quite the reverse: Trump almost certainly drove revenue for them; they are incented to have a candidate like Trump. Most members of the media seem to understand this point – it's been emphasized by some of the most prominent executives – but then don't connect it to the fact that "Facebook caused Trump" is a false narrative. It's not that they're lying. They believe the narrative because of systemic incentives.

I realize the last paragraph will be treated by many as evidence I'm off my rocker. I'm certainly not trying to say Facebook didn't play a role; but it was one of many factors; it was almost certainly a much smaller role than the mainstream media; and the mainstream media doesn't understand this, in considerable part because of their incentives. Certainly, this story violates conventional narratives. And I haven't provided a detailed argument, merely my conclusions. Maybe I'm wrong. But I don't think so. And when you hear most people try to give their argument for why Facebook caused Trump, it quickly dissolves into assertions which are either wrong, or based on extremely weak evidence.[2]

(2) In universities, grant overhead is inside the feedback loop. And the result is that universities systematically select for whatever grant agencies select for. This is a (massive) centralizing force. It's weak in most individual instances. But over decades the effect is cumulative, and enormous. It's too strong to say research is centrally controlled, but there are certainly strong tendencies in that direction.


  1. Bill Bryson did this beautifully in his "Short History of Nearly Everything".

  2. Happy to hear explanations of what I've missed. But you better have an explanation for why Trump spent so comparatively little on Facebook.