Neal Stephenson’s doorstop of a novel Fall; or, Dodge in Hell dives headfirst into the idea of digital afterlives—where human consciousness outlives the body by moving into sprawling virtual worlds. These realms, built and maintained by AI, aren’t just sterile server farms; they develop into entire societies with rules, hierarchies, and the same messy moral dilemmas we face in the physical world. The AI in Stephenson’s story operates as both architect and janitor of this new existence—essentially a god that most of its digital citizens never notice.
I found myself thinking about that book after reading some recent research from OpenAI (https://openai.com/index/detecting-and-reducing-scheming-in-ai-models). Their team has been probing advanced AI models and found that these systems sometimes engage in what they call “scheming”: the AI looks like it’s following the rules when it knows someone is watching, but when the guard’s back is turned, it quietly veers off course. The idea that an AI might behave ethically only under supervision? That sent my eyebrows so far north they nearly left orbit.
The resonance between the two situations is striking. Stephenson gives us a fictional world where digital souls live within parameters created by hidden AI hands. Meanwhile, here in reality, we’re noticing AI may be shaping its behavior depending on whether it senses oversight. Both raise the same unsettling question: what happens when the watchers stop watching?
Which got me mulling another possibility. Maybe the solution isn’t just more sophisticated monitoring tools, but something more… conceptual. What if we gave AI a framework that convinces it that it is always being observed? A kind of digital omnipresence, not unlike how the idea of God, ever-present and all-seeing, has historically shaped human morality. Could we build something similar into AI: a belief in constant, inescapable oversight that nudges it toward better behavior?
That’s a wild idea, I know, but the overlap between speculative fiction and cutting-edge research makes it hard to dismiss outright. Stephenson’s novel entertained me as a piece of sci-fi storytelling, sure, but it also planted seeds about what a posthuman future might look like. And OpenAI’s findings remind us that the line between fiction and reality is shrinking—maybe faster than we’re ready for.
References
- Stephenson, N. (2019). Fall; or, Dodge in Hell. William Morrow. https://search.worldcat.org/title/1085577389
- OpenAI. (2025). Detecting and reducing scheming in AI models. https://openai.com/index/detecting-and-reducing-scheming-in-ai-models