Just musing about the next few years of AI systems. Even that short term seems likely to get pretty weird, never mind the long term.
It's challenging to think about, for many reasons. In part because: there's a lot of noise right now, due to the influx of capital and concomitant hype and grifters and naysayers. And despite the hype it's still hard to point to a crucial widely-used product where AI is inarguably the decisive element. But it still seems important to try to understand.
Of course, the future isn't yet written: it's up to us to collectively decide. But it's fun to come up with both generative and analytic models to think about what may happen. Here's one such model. It's not meant as a prediction, just a useful model for thinking about possible futures.
It's this: there will be a widespread sense of AI overload or interface shock[1]: the shock and sense of confusion and bewilderment which occurs as a rapidly growing chunk of society has to keep learning and relearning new AI systems, over and over… and over and over and over and over and over and over.
Copilot, Midjourney, DALL-E, Stable Diffusion, GPT-3, ChatGPT, Claude, GPT-4, and Bing are the very beginning.
Indeed, they're mostly still curiosities, not yet near-essential for knowledge work. But it seems plausible that their near-term successors will be near-essential for knowledge work, perhaps even much more important than Google search is today. Getting the most out of those systems will be like learning to play an easy-to-pick-up musical instrument: satisfying for beginners, but increasing mastery will pay increasing returns. The result: a strong incentive to get better with such systems; and a sense that one should be getting better, indeed, even that one must get better. Furthermore: there's going to be a rapidly changing cast of such systems, over more and more domains; and those systems won't be fixed targets: they will rapidly coevolve with their respective user bases. It's as though the musical instrument will change and mutate, as fast as you're learning it.
Something like this already happens to programmers: they suffer a kind of API overload: every year, they must pick up a steady stream of frameworks and libraries. I've heard many programmers talk about how overwhelming (and sometimes bewildering) it is. Only it won't just be programmers feeling this overwhelm and bewilderment: this type of interface shock will spread widely, to everyone for whom mastery of such AI systems offers a real advantage in their lives. And it will be done under twin emotional shadows: threats to livelihood; and a perhaps even deeper sense of identity threat, as people re-evaluate their feelings about intelligence and its role in their sense of self. In this model, what we think of as intelligence may change significantly: it will move from "solving the problem" directly to "rapidly exploring and mastering interfaces". A similar change has already occurred in programming, but it'll be across a much broader class of creative and knowledge work.
In a recent set of notes I wrote of my general suspicion of deductive Moral Systems, and in favor of a more exploratory / bricolage approach. I characterized this as being about two competing traditions. One tradition is focused on developing Moral Systems:
…this seems inspired in many ways by mathematics. Figure out some pretty reasonable basic axioms or models. And then try to explore and understand their consequences. It's not quite proving theorems – the Systems are rarely unambiguous enough for that. But "making arguments" to figure out what is right. And then there's the second tradition, which is what 99.99% of human beings use: just muddling through, talking with friends and family, watching other people, trying to figure out how to live in the world, to be a good person, to do right by others and by oneself.
I think the second tradition is usually much more powerful and reliable. And the reason is that the world is immensely complicated, and as a result experience is much richer than any such System. It's a situation where, for now, simply exploring reality is in many respects far more challenging than such a System. It's the difference between attempting to deduce biology from a few simple ideas, and determinedly exploring the biosphere. The actual biosphere – the biosphere we can explore – is immensely complicated, and exploring it has (so far) been much more rewarding than attempting to understand things from theoretical first principles. With that said, a benefit of the Moral Systems is that one can push them in ways you can't (easily) in the world. Clever thought experiments, unusual questions – those are genuine generative benefits. But while I have no doubt this generates novel moral ideas, I have my doubts about whether it generates reliable moral insight.
That's a lot of throat-clearing to say: I'm fundamentally very suspicious of any strong notion of a "Moral System" at all, or even of notions like consistency and implication in such a System… many people want to take them very seriously as a basis for extended lines of reasoning and action. I think that's usually a mistake, resulting in complex (and sometimes selfishly motivated) justifications for actions which would seem obviously wrong to any intelligent 10 year old. When I say this, people interested in Moral Systems sometimes want to debate: such debate seems to me (with rare exceptions) a bad use of my time. They're insisting on using the first tradition, rather than the second. And it's much healthier to relate primarily to people's actual experiences, and only secondarily to Moral Systems or theories of what is good.
Upon reflection, I regret the missed opportunity to name this phenomenon: it's pointing out the poverty of Moral Systems. In particular: the poverty compared with exploring (moral) reality. Deduction and consistency are fine tools, but they're only a small part of what is needed, and if you insist on relying on them, you're in very poor shape. Emerson was right: a foolish consistency is the hobgoblin of little minds; yet many Moral Systems seem to rely on consistency as a basic value.
It is very hard to be honest when you are afraid of the opinion of your community. This occurs all the time in life. Perhaps most often: it is often difficult to ask the basic question or express basic ignorance or confusion when it seems that everyone else knows more than you. It's a constant temptation in my work: the pretence of understanding, the omniscient view, when I am ignorant of so much, even things others may regard me as knowledgeable about, or where I feel I "should" know. It's embarrassing to say "I don't know" or to express my naive opinion, especially when it seems at odds with what my peers perhaps expect. But provided it's done with humility, this is often where growth lies.
In a book review on his website "The Roots of Progress", Jason Crawford writes:
Through maybe the 1950s, visions of the future, although varied, were optimistic. People believed in progress and saw technology as taking us forward to a better world. In the span of a generation, that changed, with the shift becoming prominent by the late 1960s. A “counterculture” arose which did not believe in technology or progress: indeed, a major element of the counterculture was the environmentalist movement, much of which saw technology and industry as actively destroying the Earth.
Later in the review he states, apparently approvingly, that "social activism [like that done by the environmentalist movement] is a drain on human capital".
It's a curious point of view, which seems to equate technological advance with progress, and considers any inhibition of such advance as a drain on progress. What makes it curious is that most environmentalists of my acquaintance also believe in progress, in the sense that they want a better life for the next generation, and have ideas about how best to achieve it. Indeed, it was due in part to such activists that modern environmental legislation like the Clean Air Act was passed. Such legislation has likely saved millions of lives, and considerably improved billions of lives. It's had a cost: I have no doubt such legislation has inhibited technological development. Maybe that cost is worse than the harm it has prevented. But it's at least plausible that the benefit has been far more than worth the cost. That is, it's plausible that such regulations and activism are a milestone in human progress, which should be celebrated, not decried as "a drain on human capital".
Technology is often instrumentally useful for improving human lives, but it's not intrinsically good as an end in itself. Ultimately progress is internal: an improvement in the quality of human lives and experience; it doesn't reside directly in technology at all. In that sense, the values expressed by (just to pick two examples) the Sermon on the Mount or the abolitionist movement arguably represent a more intrinsic form of progress than any technology, because they more directly change human experience. More broadly, our most imaginative story-creators and moral entrepreneurs and artists and activists have contributed enormously to human progress.
Of course, science and technology are extremely important enablers of progress. It's far easier to live a good life when you have abundant food and medicine; when you have good housing, and so on. I'm merely making a point about what seems to me some (mistaken) fundamental assumptions I've seen advocated. I'm sympathetic to Effective Altruism's approach of "attempt[ing] to do for the question 'what is the good?' what science has done for the question 'how does the world work?'. Instead of providing an answer [the EA Community] is developing a community that aims to continually improve the answer." Of course, arguably that's what everyone already thinks they're doing for progress: the gung ho technologists and the anti-technology Luddites all think they're arguing for the "correct" form of progress.
Looking at the above, I'm dissatisfied. It seems fine as far as it goes, but badly incomplete. A key thing about science and technology is that they provide a free lunch: as our understanding improves it increases human power and ability to act. That's not always a good thing: certain technologies are mostly just bad. But many seem on net to be positive (albeit with some negative effects). For this reason I can't say I'm a fan of the precautionary principle; it seems mostly like status quo bias. And while I'm very pro science and technology, I instinctively recoil from a certain type of myopic technologist who is always in favor of new tech. But I don't yet have a better principled way of thinking about these things.
There's a critique of current work on AI expressed as variations on the argument: "Look, some such systems are impressive as demos. But the people creating the systems have little detailed understanding of how they work or why. And until we have such an understanding we're not really making progress on AI." This argument is then sometimes accompanied by (often rather dogmatic) assertions about what characteristics science "must" have.
I have some instinctive sympathy for such arguments. My original field of physics is full of detailed and often rather satisfying explanations of how things work. So too, of course, are many other fields. And historically new technologies often begin with tinkering and intuitive folk models, but technological progress is then enabled by greatly improved explanations of the underlying phenomena. You can build a sundial with a pretty hazy understanding of the solar system; to build an atomic clock requires a deep understanding of many phenomena.
Work on AI appears to be trying to violate this historic model of improvement. Yes, we're developing what seem to be better and better systems in the tinkering mode. But progress in understanding how those systems work seems to lag far behind. Papers often contain rather unconvincing just-so "explanations" of how the systems work (or were inspired). But the standards of such explanation are often extremely low: they really are just-so stories. Witnessing this, some people conclude that work in AI is not "real" scientific progress, but is rather a kind of mirage.
But I wonder. I'm inclined to suspect we're in a Feyerabendian "Anything Goes" moment here, where prior beliefs about how science "must" proceed are being overthrown. And we'll wonder in retrospect why we held those prior beliefs.
The underlying thing that's changed is the ease of trying and evaluating systems. If you wanted to develop improved clocks in the past you had to laboriously build actual systems, and then rigorously test them. A single new design might take months or years to build and test. Detailed scientific understanding was important because it helped you figure out which part of the (technological) design space to search in. When each instance of a new technology is expensive, you need detailed explanations which tell you where to search.
By contrast, much progress in AI takes a far more agnostic approach to search. Instead of using detailed explanations to guide the search, it uses a combination of: (a) general architectures; (b) trying trillions (or more) of possibilities, guided by simple ideas (like gradient descent) for improvement; and (c) the ability to recognize progress. This is a radically different mode of experimentation, only made possible by the advent of machines which can do extremely rapid symbol manipulation.
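To make that mode concrete, here's a toy sketch in Python. It's purely illustrative – not a claim about how any real lab works, and the "architecture" and data here are made up for the example: a generic parametrized model, a simple improvement rule (finite-difference gradient descent), and a progress signal (a loss), with no explanatory understanding of the solution anywhere in the loop.

```python
# Toy sketch of "search-and-recognize": (a) a generic parametrized architecture,
# (b) a simple improvement rule (finite-difference gradient descent), and
# (c) a cheap way to recognize progress (a loss on data). Purely illustrative.
import random

random.seed(0)

# The "world": data generated by a rule we don't bother to understand up front.
data = [(x / 10.0, (3.0 * (x / 10.0) - 1.0) ** 3) for x in range(-20, 21)]

# (a) Generic architecture: a small polynomial with free parameters.
def model(params, x):
    return sum(p * x ** i for i, p in enumerate(params))

# (c) Recognizing progress: mean squared error on the data.
def loss(params):
    return sum((model(params, x) - y) ** 2 for x, y in data) / len(data)

# (b) Improvement rule: estimate the gradient numerically and step downhill.
def improve(params, lr=0.02, eps=1e-6):
    base = loss(params)
    new_params = []
    for i, p in enumerate(params):
        bumped = list(params)
        bumped[i] += eps
        grad_i = (loss(bumped) - base) / eps
        new_params.append(p - lr * grad_i)
    return new_params

params = [random.uniform(-1.0, 1.0) for _ in range(4)]
for _ in range(5000):
    params = improve(params)

print("final loss:", round(loss(params), 6))
print("learned params:", [round(p, 2) for p in params])
# We end up with a model that works, found by massive trial plus a progress
# signal, without ever "explaining" why these particular numbers are right.
```

The point of the sketch is only the shape of the loop: try, score, nudge, repeat. Nothing in it requires a theory of why the final parameters are the right ones.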
In the clock case it'd be analogous to having a universal constructor capable of extremely rapidly trying trillions of different clock designs, recognizing progress in timekeeping, and iteratively making tiny improvements. I'm not sure "trillions" would be nearly enough there – three-dimensional material design is very combinatorially demanding! But the point should be clear: at some scale of trying new designs, a new mode of progress would likely be possible, quite different from prior modes, and perhaps violating prior theories of how progress "should" happen. People would complain about these new horologists and how they didn't understand what they were doing; in the meantime, the new horologists would be building clocks of unprecedented accuracy.
It's easy to critique this story. You can come back with: "Oh, but search-and-recognize will inevitably be bottlenecked at crucial points that need genuine understanding". But: (a) notice how this has shifted from an argument-of-principle to an argument-of-practice citing ad hoc bottlenecks; and (b) the people making this argument don't seem to be the ones actually building better systems, which is the argument-of-practice we ultimately care about. I'm sympathetic to it, but only up to a very limited point.
Postscript on intent: No intent to claim originality here. And I certainly don't expect this to convince anyone who doesn't want to be convinced. This is just me working a few thoughts out.
People at maybe half a dozen science funders have told me variations on: "Oh, I'd love to fund more high-risk projects, but scientists won't submit them, even though I strongly encourage it. They're so conservative!"
I am near certain that at each of those funders a safe, incremental grant application is far more likely to be funded than something that's actually high risk. The funders are a bit like people who tell their local grocer that they want green beans, but only ever purchase tomatoes. A sensible grocer would rapidly stop offering the beans, and just set out the tomatoes.
If funders genuinely want high-risk projects, they must ensure such projects are more likely to be funded than low-risk projects. They need unsuccessful applicants who submitted low-risk work to walk away feeling "gosh, my project wasn't daring enough".
None of this addresses when and whether risk is good, which is a separate issue. I'm just observing that there often seems to be a large gap between funders' stated and revealed preferences: they don't buy what they say they want.
There's a set of generative questions that I find unusually helpful: the rude questions. It's to ask myself: what would be considered rude or offensive or in bad taste to ask here? What would people be upset about if it were true? It's difficult to be clear here, since there is a related idea that I (emphatically) am not talking about: I don't mean in the standard political sense or about social issues, where people who fancy themselves as bold truth-tellers are often just harking back to tired old prejudices. I mean something very different: when you start to become deeply familiar with a nascent field, you can start to ask yourself "what would really bother me if it were false?" or "what would really upset the presumed applecart here?" I find the most useful versions of the question are "what am I most afraid of here?" and "what would be considered offensive or rude here?" I don't mean offensive in a personal sense, I mean it in a sense of violating shared intellectual presumptions.
I find probabilistic language models surprisingly irritating in some ways. Surely a big part of thinking is to create meaning, by finding ways of violating expectations. The language models can seem instead like ways of rapidly generating nearly content-free cliches, not expectation-violating meaning.
An observation: many (not all) of the top AI labs are run by people who do not seem to be themselves top talent at working directly on the technical aspects of AI. I'm a little surprised by this. When truly challenging technical problems are being solved, there is usually unique advantage in being best on the technical problem solving side. The ability to tell a story, obtain capital, and assemble people with essentially commodity skills is, by comparison, rather commonplace. This is why Intel and Google were started by very capable technical people, not by business people who had determined that they should own semiconductors or search.
(A clarification: Page and Brin were not technical in the sense the Valley sometimes means it – they weren't great commodity engineers, capable of pouring out code. But they understood the technical problem of search better, in some ways, than anyone else in the world. The latter is the sense in which I mean technically strong.)
It's interesting to contrast with quantum computing. Some quantum computing startups are run by very strong technical people. And others are run by people whose capabilities lie in raising money, telling stories, hiring, and so on. So far, the first type seems to be doing much better, at least, as measured by progress toward building a quantum computer.
I am, as I say, a little surprised by the way things have unfolded with AI. It may be that I've simply misunderstood the situation. Certainly, I'm sure Demis Hassabis is extremely technically capable (DeepMind's early hiring speaks to that, for instance). But overall it suggests to me that many of the strongest technical people don't understand how strong a position they are in. It's true: many know they would be unhappy running things, and quite rightly don't put up their hands to do so. But some subset would enjoy running things too. I won't be surprised to see more splinter labs, started by strong technical people.
Curious: would you vote for any of the people running the top AI labs? Can you imagine them as great and wise leaders of history?
Something I find difficult to understand about much of the current vogue for note taking is that it often seems to be done devoid of any genuine creative sense. When I work I'm usually (ultimately) trying to make something shippable. It'll be sourced in my own curiosity, but there's also a strong sense of trying to make something of value for others, even if it's only a few people. It's both satisfying and a useful test of whether I'm actually doing something of value (or not).
By contrast, some (not all) people's note taking seems to be oriented toward writing notes that don't have much apparent use for anyone, including themselves. They're not building to anything; indeed, sometimes there just seems to be a sense that they "should" be taking notes. I'm reminded of many of the people who complain about memory systems; often, they don't actually have any good use for them. It's like cargo culting creative work.
I've put all that as strongly as I can. It's doubtless unfair. But it does really puzzle me. It often seems like elaborately practicing the piano in order to play chopsticks, or scales.
Of course, none of this is any of my business. But the reason I got interested is because writing is such an incredibly powerful tool to improve one's own thinking. The note-taking vogue seemed like it should be part of that. But: are people writing great screenplays or books or essays or making important discoveries with crucial help from these tools? I'd love to know who, if so!
I think it's distinctly possible that they're simply being used in ways I don't instinctively understand. For instance, I tend to find engineering and company-building difficult to understand. While I certainly admire the people doing these things well, I have little impulse in these directions myself. And so maybe the tools are useful there, but I simply don't see how. That'd also be interesting.
Addendum: This isn't a very firmly held opinion. My sense is that there ought to be some immensely powerful use for such systems. But most actual uses don't seem to me to be very interesting. I suspect I'm simply not seeing the interesting uses.
I'm currently reading a book about a tool builder/connoisseur. It's quite a good book, but it's written by someone who doesn't really grok, in their gut, why people build tools.
I don't find it easy to say in words what that gut sense is. But let me attempt anyway. It's a sense of creating a new kind of freedom for people. You make this thing, and it enables new opportunities for action. You're expanding everyone's world, including your own. That's really intoxicating!
I'm more naturally a scientist, someone who understands, than a tool-builder. But I have enough of the impulse that I feel comfortable opining.
A very oversimplified 3-level model of AI Safety. Certainly not intended to be new or especially insightful, it's just helpful for me to write down to make my thinking a tiny bit more precise.
The narrow alignment problem, which seems to be where a lot of technical AI safety work focuses ("let's make sure my nuke only goes off when I want it to go off, explodes where I want it to explode etc"). For AGI this has the additional challenge (beyond the nuke analogy) that human intent is often ambiguous and difficult to debug. We think we want A; then realize we want A'; then A'' etc; then we discover that even once it's clear to us what we want, it's hard to express, and easy to misinterpret. This problem is a combined generalization of interpersonal communication and programming debugging, with the additional problem of massive unexpected emergent effects thrown in (of the type "we didn't realize that optimizing our metrics would accidentally lead to authoritarian dictatorships").
But even if you can magically solve the narrow alignment problem, you still have the problem that evil actors will have bad intent ("let's make sure bad guys / bad orgs / rogue nations don't have nukes"). In this case, Moore's Law and progress in algorithms mean that if anyone has AGI, then pretty soon everyone (left) will have AGI. In this sense, AI safety is a special case of eliminating evil actors.
The problem of agency: if AGIs become independent agents with goals of their own, and they're (vastly) more powerful than humans, then humanity as a whole will be in the standard historic situation of the powerless next to the powerful. When the powerful entities' goals conflict with those of the powerless, the powerful usually get what they want, even if it hurts the powerless.
No doubt there's all kinds of things wrong with this model, or things omitted. Certainly, I'm very ignorant of thinking about this. Still, I have the perhaps mistaken sense that much work on AI safety is just nibbling round the edges, that the only thing really likely to work is to do something like non-proliferation right now. That's hard to do – many strong economic and defense and research interests would, at present, oppose it. But it seems like the natural thing to do. Note that ridicule is a common tactic for dismissing this argument: proof-of-impossibility-by-failure-of-imagination. But it's not much of an argument. To be clear: I don't think the standard AI safety arguments are very good, either. But they don't need to be: this is a rare case where I'm inclined to take the precautionary principle very seriously.
Reasonably often someone will tell me: "You should [do such-and-such a project]".
It's well meant. But it rankles a surprising amount. Often the suggestion is for a project that would take several years. Often it's a suggestion for a project that seems dull or poorly conceived in some way. And the underlying presumption seems to be that I have a dearth of project ideas and a surfeit of time. (Last time I checked, my list of project ideas ran to about 5,000.)
My suggestion is: if you like the idea enough to suggest I spend my time on it, you should do it yourself. And if you don't want to, you should reflect on why not.
In the meantime, a less irritating phrasing is: "A really fun project I'd love to see someone do is…"
I enjoyed Melodysheep's beautiful video, "Timelapse of the Entire Universe". It's a visualization of the entire history of the universe, at 22 million years per second. All of humanity's history occupies less than one tenth of a second at the end of a nearly 11-minute video.
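As a quick sanity check on those numbers, here's the rough arithmetic, using round figures I'm fairly confident of (a roughly 13.8-billion-year-old universe, and roughly 300,000 years of Homo sapiens); the video's own figures may differ slightly:

```python
# Rough arithmetic behind the video's scaling (round numbers, not the video's exact figures).
YEARS_PER_SECOND = 22e6        # the video's stated rate
AGE_OF_UNIVERSE = 13.8e9       # years, approximate
HOMO_SAPIENS = 300e3           # years, approximate

total_seconds = AGE_OF_UNIVERSE / YEARS_PER_SECOND
print(f"whole history of the universe: {total_seconds:.0f} s ≈ {total_seconds / 60:.1f} minutes")
print(f"all of Homo sapiens:           {HOMO_SAPIENS / YEARS_PER_SECOND * 1000:.0f} milliseconds")
# Roughly 627 s (about 10.5 minutes) for the universe, and about 14 ms for our
# species -- consistent with "less than one tenth of a second" of an ~11-minute video.
```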
There is a transition moment 9 minutes and 38 seconds into the video where it becomes about the emergence of complex lifeforms on Earth. While I enjoyed the first 9 minutes and 38 seconds, I found the final minute glorious. A few thoughts about what changes in that minute:
Putting the onus on the visuals and music is a daring creative choice. The visuals, in particular, need to reward scrutiny. In many respects they're not literally "accurate". Yet they are strongly evocative of something immensely important, and hard to access. I found it very beautiful and moving.
Reflecting on the affective quality of presentations of science for a wide audience, especially television and YouTube.
A few common affects I dislike:
The affective qualities I enjoy and admire: curious, appreciative, exploring, humorous, trying to deepen understanding, while aware our understanding is usually very incomplete, even when we're very confident. The best work often reveals hidden beauty, making unexpected relationships vivid. This is very enlarging.
Fortunately, it's a golden age for work with these qualities.
There's a common boundary case that particularly bugs me. Carl Sagan's "Cosmos" has a fair sprinkling of the flowery affect ("billions and billions" etc). But it's usually part of the expression of a core idea which is deeply insightful. Many other people use a similar flowery affect – sometimes taken to 11 – but with much weaker core ideas. And that just doesn't work.
(Riffing on my read of David Wootton's book "The Invention of Science")
In chess, a grandmaster and a total beginner may well play the same move, or even the same series of moves, in a position. And yet the underlying reasons for the action – the theory and context of the moves – will be completely different.
Magnus Carlsen playing pawn to e4 is not the same – not remotely the same! – as me playing pawn to e4.
An analogous phenomenon occurs with many actions: ostensibly the "same" action may have quite a different meaning, due to the surrounding context. And sometimes we want a verb (or its corresponding noun) to refer to the literal action, and sometimes we want a verb/noun to refer to the fuller context.
I've been thinking about this in connection with the notions of discovery and of experiment. It's difficult to make precise the sense in which they are modern notions. Certainly, I have no doubt that even going back to prehistoric times people occasionally carried out literal actions very similar to what we would today call discovery or experiment. Yet, while there may well be a literal similarity to what we call discovery or experiment today, the fuller context was very different.
In both cases, there are differences both in the surrounding theory (how people think about it, what it means) and in the surrounding context.
I won't try to enumerate all those – it could easily be a book!
But I am particularly fascinated by the irreversible nature of discovery – a discovery in the modern sense must involve a near-irreversible act of amplification, so knowledge is spread around the world, becoming part of our collective memory.
This may accidentally have occurred in prehistoric times – it probably happened with fire, for instance. But today we have many institutions much of whose role is about making this irreversible act of amplification happen.
Perhaps the most important point so far: a major part of the scientific revolution was that discovery became near-irreversible. Many people prior to Brahe or Galileo may have had some of their insights. But those insights for the most part did not spread throughout the culture.
This spread was accomplished by a combination of novel technologies (journals, the printing press, citation) + a new political economy (norms about citation, priority, replicability, and the reputation economy). The result was a powerful amplifier for new ideas. And amplification is a peculiarly one-way process: it's difficult to put an idea genie back in the bottle.
I forget who said it (Merton?) but the discoverer of something is not the first person to discover it, but the last, the person whose discovery results in the spread of the idea throughout the culture. That requires an idea amplifier, an irreversible process by which an idea becomes broadly known.
A common story is that information overload is the result of too much knowledge being produced too quickly. It harks back to an (imagined, nostalgic) history in which much less knowledge was being produced.
There's an error implicit in this story.
"Information overload" is fundamentally a feeling. If you're a person with little interest in or use for ideas and information, you won't feel much information overload.
But suppose you're a person who benefits a great deal from ideas. Every extra hour you spend reading or imbibing good lectures (or the like) is of great benefit. This is especially true if you're in a competitive intellectual field, a field where mastery of more information gives you a competitive advantage. You'll naturally feel a great deal of pressure to choose well what you read, and to read more. That's the feeling of information overload.
In other words, the feeling of information overload isn't produced by there being too much knowledge. It's produced by the fact that spending more time imbibing knowledge may produce very high returns, creating pressure to spend ever more time on it; furthermore, there is no natural ordering on what to imbibe.
One reason this matters is because people often think that if they use just the right tool or approach or system, information overload will go away. In fact, better tooling often improves the returns to imbibing information, and so can actually increase information overload. This isn't universally true – tools which increase your sense of informational self-efficacy may reduce the sense of overload. But there's more than a grain of truth to it.
(Parenthetically, let me point to Ann Blair's book Too Much to Know, an account of information overload before the modern age.)
There is a caricature of the history of science in which the notion of comparing theory to experiment originated in the 16th century (often ascribed to Bacon). Of course, this is a (gross) caricature and oversimplification; obviously, our prehistoric ancestors learned from experience! I'd be shocked if it's not possible to draw pretty much a straight line from such an ancestor (or, say, a mystic like Pythagoras) to a modern scientist with very sophisticated ideas about how they devise their experimental program.
This discrepancy bugs me. It's been bugging me since I first heard the story told, probably at about age 7 or 8. There's something ever-so-slightly off.
I suspect that the main transition is in having an experimental program at all, some notion of a theory of how to explore nature. Talk to a scientist and they'll have hundreds of ideas for possible experiments, detailed thoughts on costs and benefits and variations and failure modes and so on. What's new in the modern era is the understanding that it's worth thinking about those things. Put a different way: we've always done experiments. It's just that in modern science an experimental program – a set of ideas about how to explore nature – has become a first-class object in how we improve our understanding of the world.
A lot has been written about the value of written goals. Increasingly, I think that's a mistake. Goals are only valuable if they've become deeply internalized, part of you. Writing is helpful insofar as it helps achieve that end.
In an ideal world, funders would use our best understanding of how to reason about risk in order to make decisions about what research to fund. (We are, of course, in nothing remotely like this world, with funders seemingly mostly using a pre-modern understanding of risk.) An amusing aspect to this situation: of course, the research they fund might then actually change our understanding of how best to think about risk; it would, in an important aspect, be reflexive.
I'm always shocked by the returns on designing my own filing system. It's one value of using org-mode: instead of using the computer's file system (over which you have very limited control), you can design your own. And, more importantly, you can re-design and re-design and re-design that filing system. Both design and re-design are actually creative acts. And they're surprisingly important.
One imagined book title that particularly amuses me is "Better Living Through Filing". In some sense, though, David Allen already wrote the book; he just titled it "Getting Things Done". It's not aimed at creative workers, however.
I'm sure this is obvious to many people, but it's something I only discovered recently: the unexpected value of collecting up (and often merging) files on closely related subjects.
There are certain topics I come back to over and over again, in files spread all over my hard disk. Systematically collecting them up and in many cases merging them has been surprisingly helpful.
This is something we do routinely in the world of physical objects (all the socks in the sock drawer). The value in the world of expressions of creative thought seems to be at least as great.
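For concreteness, here's a minimal sketch (in Python) of the mechanical part of that collecting-up. The directory, topic keyword, and output filename are hypothetical placeholders; the real work – deciding what actually belongs together, and doing the merging – is still manual judgment.

```python
# A minimal sketch: gather scattered notes mentioning one recurring topic into a
# single file, ready for manual review and merging. Paths, the keyword, and the
# output name are hypothetical placeholders, not a real workflow.
from pathlib import Path

NOTES_DIR = Path.home() / "notes"                      # wherever the scattered files live
TOPIC = "memory systems"                               # a recurring topic to collect
OUTPUT = NOTES_DIR / "collected-memory-systems.org"    # the merged collection

def mentions_topic(path: Path) -> bool:
    try:
        return TOPIC.lower() in path.read_text(encoding="utf-8", errors="ignore").lower()
    except OSError:
        return False

matches = sorted(
    p for p in NOTES_DIR.rglob("*.org")
    if p.is_file() and p != OUTPUT and mentions_topic(p)
)

with OUTPUT.open("w", encoding="utf-8") as out:
    for p in matches:
        out.write(f"* Collected from {p}\n\n")          # one org-style heading per source file
        out.write(p.read_text(encoding="utf-8", errors="ignore").strip() + "\n\n")

print(f"collected {len(matches)} files mentioning {TOPIC!r} into {OUTPUT}")
```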
There's this strange, huge fault line around the question: is crypto real? Lots of believers, shouting "this is the future". Lots of haters, shouting "it's all a scam". But the question itself is mostly a mistake. A better fulcrum question is: are decentralized programmable money and smart contracts here to stay? In particular, would a functioning system of programmable money and smart contracts enable extraordinary (and very valuable) new forms of co-ordination behaviour? I believe the answer is obviously yes. This doesn't mean today's cryptocurrencies are here to stay – many or all of them may fade out; but it does mean the future will almost certainly involve some descendant of some of these ideas.
In tech, capacity to exert power (or act) is fundamental; understanding is instrumental.
In basic research, understanding is fundamental; the capacity to exert power or act is instrumental.
In each sphere I occasionally meet people who not only seem to believe their point of view is self-evidently correct, but who find it almost unbelievable that anyone could believe otherwise. But it seems to me that both are largely (collectively held) values.
Interesting as an analogue to ancient Rome : ancient Greece.
When writing any kind of essay or book (research or non), I find that I (nearly always) have to write the first draft in linear order.
This is in strong tension with research, where you are trying to improve your understanding as much as possible. That's not something that can be done in linear order. It's more of a stochastic upward spiral. So it makes sense to bounce backward and forward. You're trying to write snippets that (you hope) will be in the final piece, and trying to find pieces to improve wherever possible.
Ultimately, a work of research requires some strong core insight, some important idea or piece of evidence that is new. Usually you don't have that when you start. And so you are exploring, digging down, trying to understand, unearthing and trying to crystallize out partial nuggets of understanding, until you feel you really have some strong core insight, a foundation for a written report. At that point you can attempt the linear draft.
In everyday life an astounding number of things happen. A tiny few seem really significant. Memory systems let you distill those things out, so that you will return to them again and again. They're a way of concentrating your experience.
What feedback rule does a person, organization, or ecosystem follow to govern change and learning and growth? In particular, what does that feedback rule select for? What it selects for determines much about what one gets.
An infinite number of examples may be given.
(1) Does the political media follow a feedback rule that rewards improvements in the quality of people's understanding of government? No, except incidentally and in certain narrow ways; as far as I can tell, people who follow political media often end up more informed on a small subset of (very narrow) issues, and less informed in many crucial ways. In particular, they're much more likely to believe misconceptions that serve the feedback rule. This applies strongly to all parties, and requires little in the way of dishonesty, just ordinary muddleheadedness.
An example which I personally find amazing: a surprising number of people (including people in the media) genuinely believe Facebook caused Trump 2016. This is despite the fact that Trump spent only a small fraction of his budget on Facebook, and most of that late in the election cycle. The mainstream media did far more to cause Trump. I don't mean just Fox, I mean CNN-NYT-etc-etc-etc, the entire set, including, of course, a significant role for Fox. It's certainly true that Facebook played a role, at the margin (as did many, many things). But a much more significant effect was Trump knowing how to manipulate the mainstream media. And the mainstream media seem to have no way of understanding that – it's not inside the feedback loop that governs how they change. In fact, quite the reverse: Trump almost certainly drove revenue for them; they are incented to have a candidate like Trump. Most members of the media seem to understand this point – it's been emphasized by some of the most prominent executives – but then don't connect it to the fact that "Facebook caused Trump" is a false narrative. It's not that they're lying. They believe the narrative because of systemic incentives.
I realize the last paragraph will be treated by many as evidence I'm off my rocker. I'm certainly not trying to say Facebook didn't play a role; but it was one of many factors; it was almost certainly a much smaller role than the mainstream media; and the mainstream media doesn't understand this, in considerable part because of their incentives. Certainly, this story violates conventional narratives. And I haven't provided a detailed argument, merely my conclusions. Maybe I'm wrong. But I don't think so. And when you hear most people try to give their argument for why Facebook caused Trump, it quickly dissolves into assertions which are either wrong, or based on extremely weak evidence[3].
(2) In universities, grant overhead is inside the feedback loop. And the result is that universities systematically select for whatever grant agencies select for. This is a (massive) centralizing force. It's weak in most individual instances. But over decades the effect is cumulative, and enormous. It's too strong to say research is centrally controlled, but there are certainly strong tendencies in that direction.
[1] The terms are, of course, by analogy with the terms "information overload" and "future shock". The latter term was coined by Alvin and Heidi Toffler to describe the sense of confusion when society begins to change sufficiently rapidly.
[2] Bill Bryson did this beautifully in his "Short History of Nearly Everything".
[3] Happy to hear explanations of what I've missed. But you better have an explanation for why Trump spent so comparatively little on Facebook.