Why Your Backend Should Be a Sleeping Virtual Machine
I have built microservices. I have debugged microservices at 3 AM. I have watched cascading failures take down an entire product because one service in the mesh decided to return 503s and every dependent service dutifully propagated the misery. After enough of these experiences, I started asking a different question: what if each unit of backend state was a tiny, self-contained virtual machine that processes messages sequentially, persists automatically, and sleeps when idle?
That is the living document model. It eliminates the microservice coordination problem not by solving it, but by making it structurally unnecessary.
Think about the first time you wrote a program. You took inputs, did things, produced outputs. It was joyful. Then you hit persistence: how do I save this state to disk? Suddenly you are marshalling data, handling partial failures, managing connection pools, thinking about ORM impedance mismatches. The joy evaporates.
The living document model returns to that first experience. You write code that manipulates state directly. The runtime handles persistence. You never call save(). You never think about transactions. You never manage a database connection. The entire document -- code plus data -- is a single unit:
record Todo {
  public int id;
  public string text;
  public bool completed;
  public principal owner;
}

table<Todo> todos;

message CreateTodo {
  string text;
}

channel create(CreateTodo msg) {
  todos <- { text: msg.text, completed: false, owner: @who };
}
This is both the schema and the logic. There is no separate database. There is no ORM. There is no connection pool. The state lives alongside the code that manipulates it.
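Extending the sketch, completing a todo is just another message handler that mutates the table in place. This assumes Adama's `iterate ... where` bulk-update form; the CompleteTodo message and the owner-only rule are illustrative, not part of the original example:

message CompleteTodo {
  int id;
}

channel complete(CompleteTodo msg) {
  // mutate the matching row directly; the runtime persists the change,
  // so there is no save(), no SQL, and no connection to manage
  (iterate todos where id == msg.id && owner == @who).completed = true;
}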
Each Adama document is a virtual machine with a specific lifecycle.
This is not an analogy. It is the actual architecture. The lifecycle is managed through hooks: @construct runs when a document is created (initialize state, validate creator access), @load runs when a document wakes from disk, @connected fires when a viewer connects (return true to accept, false to reject), and @disconnected runs cleanup when a viewer leaves.
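A minimal sketch of those hooks. The owner-only access policy here is illustrative, not something the runtime imposes:

private principal owner;

// runs once, when the document is created
@construct {
  owner = @who;
}

// runs when the document wakes from disk
@load {
}

// fires when a viewer connects; return true to accept, false to reject
@connected {
  return @who == owner;
}

// cleanup when a viewer leaves
@disconnected {
}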
The questions that matter all have good answers: How many documents can a server host? Thousands -- sleeping documents cost nothing. How fast does a document wake? Milliseconds -- it is loading a JSON snapshot. How much does persistence cost per operation? One append to a WAL -- the same cost as a database write, because it is a database write.
Each document processes messages one at a time. If three clients send messages simultaneously, those messages queue and execute sequentially.
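A sketch of what that queueing buys you: a shared counter needs no lock, because increments are applied one at a time. The Click message and clicks field are illustrative:

public int clicks = 0;

message Click {
}

channel click(Click msg) {
  // if three clients send Click at the same moment, the runtime queues
  // the messages; each increment sees the previous result, so no
  // update is ever lost and no lock is ever taken
  clicks += 1;
}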
This is the actor model. No locks. No concurrent access to document state. No race conditions. No deadlocks. I cannot overstate how much pain this single design decision prevents. Every distributed systems bug I have ever debugged at 3 AM involved some form of concurrent state mutation.
The constraint is obvious: a single document cannot process more than one message at a time. But human input rate is measured in messages per second, not millions. A board game, a chat room, a collaborative editor -- these generate maybe 10-100 messages per second at peak. A single thread can handle thousands. The bottleneck is never CPU; it is the speed of human thought.
Every message is an atomic transaction. If anything goes wrong -- a validation failure, an abort -- the entire transaction rolls back. Clients never see partial state. The code is simple because it executes in a simple context:
channel complex_operation(ComplexMsg msg) {
  score += 10;
  if (score > max_score) {
    abort; // ROLLBACK -- score is unchanged
  }
  achievements <- { name: "High Score" };
}
Either all changes commit, or none do. You do not appreciate this until you have spent a week debugging a partially-committed transaction in a system that does not provide this guarantee.
The microservice architecture solves a people problem: how do you let independent teams ship code independently? It solves this at a steep technical cost. Every inter-service call can fail. Every failure must be handled. Retries create duplicates. Timeouts create ambiguity. Distributed transactions are either unavailable or horrifically complex.
The living document model sidesteps this entirely. There is no service mesh because there are no inter-service calls. The document contains the data model, the business logic, the privacy rules, and the real-time synchronization. It is a self-contained unit. Document game-001 knows nothing about document game-002. They might as well be on different planets.
Scaling is horizontal through documents, not through services. Need more capacity? Add machines. The router directs connections to the right machine. New documents go to machines with capacity. There is no cross-document coordination required.
One of my favorite aspects of this model is the inverted control flow. In a traditional web app, clients decide what actions to take. But games have rules about when you can take actions. The document acts like a Dungeon Master:
#player_turn {
  future<DrawCount> request = how_many_cards.fetch(current_player);
  DrawCount response = request.await();
  // execution pauses until the player responds
  transition #play_phase;
}
The await here is not like async/await in JavaScript or Python. It is durable. If the server restarts, the document resumes exactly where it left off, still waiting for that response. This await can span days. A manager goes on vacation, comes back, approves a request, and the document picks up exactly where it stopped.
This is possible because of deterministic replay. The document's entire history is a sequence of messages. Given the same sequence, execution produces identical results. Even randomness is deterministic -- document-scoped random number generators are persisted so that replaying the log produces the same "random" outcomes.
The living document model is not free. It imposes real constraints.
Single-threaded execution means a single document tops out at thousands of messages per second. For real-time multiplayer games with human input rates, this is generous. For high-frequency trading, it is inadequate.
Document isolation means no cross-document queries. You cannot join data across documents. If you need that, you use external services or design your data model so each document is self-contained.
State size is bounded by memory. Each document should be megabytes, not gigabytes. The pattern is many small documents (one per game session, one per chat room), not one giant document. Get this wrong and you hit performance limits hard.
The language itself is deliberately constrained. No arbitrary network calls (only through defined services), no filesystem access, no threading. These constraints enable the guarantees -- deterministic execution, automatic persistence, reliable replay. You trade freedom for correctness.
But for the class of applications where people share state in real time -- games, collaboration tools, chat, dashboards, workflow coordinators -- these tradeoffs are overwhelmingly worth it. One deployment artifact instead of six services. No database to manage. No cache to invalidate. No message queue to monitor. One thing to deploy. One thing to debug. Better sleep.