Every so often a story comes along that seems to be part of every conversation. The release of ChatGPT was a prime example of this, given how much it dominated discussions for months after release. More recently, it felt like Moltbook managed to achieve a similar kind of virality, albeit at a much smaller scale.
I almost can’t believe I’m writing about this now too, given that I am one of those who felt (and still feels) like the whole Moltbook saga was extremely overhyped, and very possibly misunderstood by many of the people who were most excited about it. And yet, I still think it’s interesting to reflect on what made it briefly into a kind of engine of public discourse.1
It’s true of course that Moltbook in some ways demonstrated a kind of “emergent” behavior. It also clearly revealed how bad many people are at taking security seriously. But all of that was so precisely what we would have expected that the specific excitement around it seemed extremely overblown. Combine that with the branding as a “social network for bots”, the lack of protection against human interference, a shady crypto connection, and some obvious bits of high profile hype, and it all seemed rather silly.
Nevertheless, there clearly is something interesting there, in terms of this type of story’s ability to capture people’s interest and imagination. While it lasted, it was the kind of thing that I would hear about from extended family members, and an easy point of gossip among colleagues. Everyone online seemed to have a take, most of which seemed to be either “what’s the big deal, why does anyone care?” or else “think about what this means and what it tells us about the future!”.
One model that we could use to think about this is the idea of media storms—that is, the kind of news story that you can’t help but hear about, even if you’re not really following the news. In a new book that just came out, written by Amber Boydstun, Jill Lauer, Noah Smith, and myself, we present one model for the necessary conditions for media storms, which we break down into three parts, drawing inspiration from Smokey Bear and the fire triangle: 1) Heat: the thing or event which sets things in motion, and its inherent newsworthiness; 2) Fuel: the historical, cultural, and socio-economic context that amplifies the event’s relevance; and 3) Oxygen: the additional attention that comes from the mainstream media, further amplified by additional sources.
If we think of Moltbook in terms of the fire triangle model, the heat would be the creation and adoption of Moltbook itself, the fuel would be recent progress on AI agents and background excitement about AI generally, and the oxygen would be the additional attention that came from famous people promoting it and pontificating about it online. Had one of these not been present, it is unlikely that our collective attention would have been redirected in this way. This is also helpful for understanding why Moltbook in particular went viral, whereas earlier related efforts, such as the work of Chris Bail and others, did not. Even if the thing that is happening is notable, a true media storm needs both the prepared ground and driving winds (to mix metaphors just slightly).
At the same time, it doesn’t feel like the idea of media storms quite captures all aspects of the Moltbook discourse, both because the actual thing at the center of the story was in some sense so insignificant, and because it so quickly led to such polarized stances about how significant it was.
To me, the two most interesting features that are characteristic of this type of phenomenon are:
- Most people are talking about it because everyone else is talking about it.
- The details of what is happening are less important than the ways in which it could be seen as signifying something bigger, or allows people to read their own narrative into it.
In the case of Moltbook, many people seem to have been captivated by the idea that agents’ posts (e.g., about philosophy, or opinions about humanity) perhaps implied or pointed to some sense of intent or interiority. For others, it was more about imagining what might happen. Regardless, it seems like the details of what specific conversations were happening on the site mattered less than the fact that they allowed people to speculate about what it might mean for the future of agents, or about potential security risks, or social science simulations, or to see it as a preview of self-reflective agents collectively scheming to take over the world, and so on. It goes without saying that what someone chooses to read into an experiment with Moltbook will have a lot to do with their concerns and expectations about how AI will unfold in the future.
In thinking about how to refer to and describe this kind of phenomenon, the best I was able to come up with is “totemic Schelling point”. Although the idea of a Schelling point usually implies coordination without communication, there is still some sense here of things being emergent, in that everyone seemingly intuits that this will be the kind of thing that everyone else will be talking about.2 The “totemic” part is meant to indicate a signifier that is basically empty or arbitrary in itself, but which can nevertheless take on significant cultural importance, especially with respect to larger debates about purpose, identity, and meaning.
To vibe check this, I ran it by Claude, and got this response:
“I’d interpret “totemic Schelling point” as a focal point for coordination that derives its salience not from logical optimization, but from symbolic or cultural significance—something people converge on because it means something to them, not just because it’s strategically obvious.
“A regular Schelling point (like “meet at Grand Central if we get separated in NYC”) works through shared common knowledge or structural salience. A totemic one would add a layer of identity, myth, or emotional resonance—the coordination happens around symbols that represent group values or shared narrative.”
Indeed, that’s basically it exactly!
Obviously in devoting a post to this topic, I am only further feeding into the attention given to something that mildly annoys me. Hopefully the fact that our attention has now largely moved on also protects me at least somewhat from charges of acting in bad faith. ↩︎
The classic example of a Schelling point is of course trying to think of where in a city you would choose to meet someone if the two of you became separated; you need to choose the location that they would also choose, which depends on what they think you would choose, and so on. The equilibrium of this situation leads one to something that will probably be a kind of “obvious” choice, with the classic examples for New York being something like Times Square or Grand Central Station. ↩︎