Why Runtime Access Checks Are Not Enough
I hate the way most systems handle privacy. You build your data model, expose it through an API, then bolt on access control as an afterthought. Maybe you add middleware. Maybe you scatter if (user.isAdmin()) checks across your codebase. Maybe you write a test that verifies one endpoint and hope the other forty are fine. The result is always the same: bugs, data leaks, and a constant low-grade anxiety about what you might have missed.
After years of building real-time multiplayer systems where privacy is not optional -- card games where each player has a hidden hand, collaborative tools where users see different data -- I built a system where privacy is checked at compile time. If you try to expose private data through a public field, the compiler rejects your code. The data never leaves the server, not because you remembered to check, but because the program cannot compile if you forget.
Here is a typical runtime access control pattern:
app.get('/api/cards', (req, res) => {
  const cards = db.getCards(req.gameId);
  const filtered = cards.map(card => {
    if (card.owner === req.userId) {
      return { id: card.id, value: card.value, suit: card.suit };
    } else {
      return { id: card.id }; // hide value from non-owners
    }
  });
  res.json(filtered);
});
This works. Until someone adds a new endpoint and forgets the filter. Or refactors the filter and introduces a bug. Or adds a WebSocket push handler that sends the full card object. Or writes a GraphQL resolver that exposes the raw database record.
Runtime privacy is opt-in. Every new code path that touches sensitive data must remember to apply the filter. The default is exposure. You are one forgotten check away from leaking data.
Compile-time privacy inverts this. The default is private. Exposure requires an explicit declaration. And the compiler verifies that declarations are consistent -- you cannot accidentally route private data through a public channel.
In Adama, every field has a privacy modifier:
private int internal_counter = 0; // No viewer ever sees this
public int score = 0; // All connected viewers see this
viewer_is<owner> int hand_value; // Only the principal in 'owner' sees this
use_policy<can_see> int balance; // Visible when policy function returns true
Fields without an explicit modifier default to private. I made this choice deliberately -- secure-by-default is the only sane option. You have to consciously decide to expose data.
The viewer_is<field> modifier is the workhorse for per-user privacy. It takes a field of type principal (Adama's term for an authenticated identity) and makes the data visible only when the current viewer matches that principal:
record Card {
  public int id;
  private principal owner;
  viewer_is<owner> int value;
}
This is a complete privacy specification. The value field is visible to the card's owner and invisible to everyone else. Not hidden behind a null. Not redacted. Simply absent from the JSON that non-owners receive.
The privacy filter runs during delta computation. When Alice's card value changes from 7 to 9, Alice receives {"cards": {"1": {"value": 9}}}. Bob receives {} -- an empty delta. Nothing changed in Bob's view because Bob never had access to that field. The system leaks zero information about state changes that a viewer cannot see.
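This per-viewer filtering can be sketched in plain TypeScript. The `Card` shape, `visibleTo`, and `deltaFor` below are illustrative names, not Adama's actual runtime; the point is that the privacy check runs inside the diff, so a viewer who cannot see a field gets an empty delta:

```typescript
// Sketch: compute the delta each viewer receives when a card's value changes.
// Hypothetical types and helpers -- Adama's real runtime is more involved.
interface Card { id: number; owner: string; value: number; }

// A viewer sees `value` only when they are the owner (viewer_is<owner>).
function visibleTo(card: Card, viewer: string): Record<string, unknown> {
  const view: Record<string, unknown> = { id: card.id };
  if (viewer === card.owner) view.value = card.value;
  return view;
}

// The delta is the diff between the viewer's previous view and new view.
function deltaFor(before: Card, after: Card, viewer: string): Record<string, unknown> {
  const prev = visibleTo(before, viewer);
  const next = visibleTo(after, viewer);
  const delta: Record<string, unknown> = {};
  const keys = Array.from(new Set(Object.keys(prev).concat(Object.keys(next))));
  for (const key of keys) {
    if (!(key in next)) delta[key] = null;               // field left the view
    else if (prev[key] !== next[key]) delta[key] = next[key];
  }
  return delta;
}

const before: Card = { id: 1, owner: "alice", value: 7 };
const after: Card = { id: 1, owner: "alice", value: 9 };
// Alice's delta carries the change; Bob's delta is empty -- nothing leaks.
```

Because the diff is computed against each viewer's own filtered view, Bob's delta is `{}` rather than a redacted placeholder.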
The @who constant represents the currently viewing principal. It is the foundation of every privacy decision:
policy is_owner {
  return @who == owner;
}
For more complex rules, use_policy attaches a named policy function to a field:
record BankAccount {
  private principal owner;
  private bool account_public = false;
  private int balance;

  use_policy<can_view_balance> int visible_balance;

  policy can_view_balance {
    if (@who == owner) {
      return true;
    }
    return account_public && balance > 0;
  }
}
The owner always sees the balance. Others see it only if the account is public and has a positive balance. This logic is evaluated per-viewer during delta computation. Different viewers get different results.
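The decision logic itself is simple enough to sketch outside Adama. Here it is as a TypeScript function evaluated once per viewer (the `BankAccountState` shape and field names are illustrative):

```typescript
// Sketch of the can_view_balance policy, evaluated per viewer.
// Shapes and names are illustrative, not Adama's runtime API.
interface BankAccountState { owner: string; accountPublic: boolean; balance: number; }

function canViewBalance(account: BankAccountState, who: string): boolean {
  if (who === account.owner) return true;              // owner always sees it
  return account.accountPublic && account.balance > 0; // others need both conditions
}

const acct: BankAccountState = { owner: "alice", accountPublic: true, balance: 250 };
// alice -> visible (owner); bob -> visible (public, positive balance);
// with balance 0, bob would see nothing.
```

The same function runs with a different `who` for each connected client, which is exactly why different viewers can receive different deltas from the same state change.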
Field-level privacy hides individual values. But sometimes you want to hide the existence of an entire record. The require keyword does this:
record PrivateNote {
  private principal owner;
  public string content;

  policy is_owner {
    return @who == owner;
  }

  require is_owner;
}
table<PrivateNote> notes;
When Alice queries the notes table, she only sees her own notes. Bob's notes are not hidden or redacted -- they are genuinely absent from her view. She cannot even determine how many notes exist for other users.
This is different from hiding field values. With require, the record does not appear in @o ordering arrays or table iteration results. The viewer's JSON has no trace of the record.
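The effect of `require` is record-level filtering rather than field-level redaction. A minimal sketch, with illustrative types (Adama applies this during iteration and delta computation, not as an explicit filter call):

```typescript
// Sketch: require is_owner removes whole records from a viewer's table view.
// Records that fail the policy are simply absent -- not redacted or nulled.
interface PrivateNote { owner: string; content: string; }

function visibleNotes(notes: PrivateNote[], who: string): PrivateNote[] {
  return notes.filter(note => note.owner === who);
}

const notes: PrivateNote[] = [
  { owner: "alice", content: "a1" },
  { owner: "bob",   content: "b1" },
  { owner: "alice", content: "a2" },
];
// Alice's view has two notes; she cannot even count Bob's.
```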
The compiler prevents the most common class of privacy bugs: accidentally exposing private data through public channels.
private int secret = 42;
public int exposed = secret; // COMPILE ERROR
This fails at compile time. You cannot assign a private value to a public field. The data flow analysis tracks privacy modifiers through expressions, assignments, and function calls. If private data could reach a public output, the compiler rejects the program.
This matters because the most dangerous privacy bugs are not the ones where you deliberately expose data. They are the ones where a new feature, a refactor, or a well-meaning code change inadvertently creates a path from private state to a public output. Compile-time checking catches these at the moment they are introduced, not after they ship to production.
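The core idea behind the check is label propagation: every expression carries a privacy label, and an assignment is rejected when a private-labeled value would flow into a public field. Here is a toy version of that rule in TypeScript (nothing like Adama's actual compiler, just the invariant it enforces):

```typescript
// Toy privacy-label checker. An expression is private if any value
// flowing into it is private; private -> public assignment is rejected.
type Privacy = "private" | "public";

interface Field { name: string; privacy: Privacy; }
interface Assignment { target: Field; sourceLabels: Privacy[]; }

function labelOf(sourceLabels: Privacy[]): Privacy {
  return sourceLabels.includes("private") ? "private" : "public";
}

// Returns an error message, or null if the assignment is allowed.
function check(assignment: Assignment): string | null {
  if (assignment.target.privacy === "public" && labelOf(assignment.sourceLabels) === "private") {
    return `cannot assign private data to public field '${assignment.target.name}'`;
  }
  return null;
}
```

Note the taint rule is conservative: if any input to an expression is private, the whole expression is private, which is why even "safe" derived values cannot reach public fields.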
The privacy system is not a separate layer bolted on top of the delta protocol. It is woven into the delta computation itself. When the runtime computes what to send each client, the privacy evaluation happens as part of the diff.
Consider what happens when a card changes owners -- Alice gives card 1 to Bob:
Before: Card 1 has owner=Alice, value=7
After: Card 1 has owner=Bob, value=7
Delta to Alice: {"cards": {"1": {"value": null}}} -- the value disappears from her view
Delta to Bob: {"cards": {"1": {"value": 7}}} -- the value appears in his view
The null in Alice's delta is a deletion signal per RFC 7386. From Alice's perspective, the value field was removed. From Bob's perspective, it was added. The underlying data is the same, but each viewer's delta reflects the change in their own visibility.
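The RFC 7386 merge semantics are worth seeing concretely. This is a simplified merge-patch applier in TypeScript (not Adama's client code): null deletes a key, objects merge recursively, and anything else replaces the target:

```typescript
// Simplified RFC 7386 JSON Merge Patch applier.
type Json = null | boolean | number | string | Json[] | { [k: string]: Json };

function mergePatch(target: Json, patch: Json): Json {
  // Non-object patches (including arrays and null) replace the target wholesale.
  if (typeof patch !== "object" || patch === null || Array.isArray(patch)) {
    return patch;
  }
  const base: { [k: string]: Json } =
    typeof target === "object" && target !== null && !Array.isArray(target)
      ? { ...target }
      : {};
  for (const [key, value] of Object.entries(patch)) {
    if (value === null) delete base[key];                 // deletion signal
    else base[key] = mergePatch(base[key] ?? null, value); // recursive merge
  }
  return base;
}

// Alice's view after the ownership transfer: the value key is removed.
const aliceView = mergePatch({ id: 1, value: 7 }, { value: null });
// Bob's view: the value key appears.
const bobView = mergePatch({ id: 1 }, { value: 7 });
```

Applying each viewer's delta to their previous view reproduces exactly the state they are allowed to see, and nothing more.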
This coupling between privacy and synchronization has a performance benefit: when a private field changes and only one viewer can see it, only that viewer receives a delta. The other N-1 clients receive nothing. For a 100-player game where each player has private state, this cuts delta traffic by roughly 99%.
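The arithmetic behind that figure is a back-of-envelope comparison, not a benchmark -- deltas sent under per-viewer filtering versus a broadcast model that pushes every change to every client:

```typescript
// Count deltas sent when one player's private field changes, versus a
// broadcast model that notifies every connected client.
function deltasSent(viewers: string[], canSee: (viewer: string) => boolean): number {
  return viewers.filter(canSee).length; // only viewers with a visible change get a delta
}

const players = Array.from({ length: 100 }, (_, i) => `player${i}`);
// player7's hand changed; only player7 can see it.
const sent = deltasSent(players, v => v === "player7"); // 1 delta
const broadcast = players.length;                       // 100 deltas
const reduction = 1 - sent / broadcast;                 // 0.99, i.e. ~99% fewer deltas
```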
Sometimes you need different viewers to see entirely different query results, not just different field visibility on the same records. Bubbles handle this:
bubble myCards = iterate deck where owner == @who;
When Alice connects, her myCards shows cards where owner == Alice. When Bob connects, his shows owner == Bob. The bubble incorporates @who into the query, producing a per-viewer result.
Bubbles can be gated by policies for role-based access:
policy is_admin {
  if ((iterate _people where account == @who)[0] as person) {
    return person.is_admin;
  }
  return false;
}
bubble<is_admin> all_people = iterate _people;
Only admin viewers see the all_people data. Non-admins do not receive it at all -- not an empty result, but the field itself is absent from their view.
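A sketch of what that looks like on the wire, with illustrative types: when the policy fails, the bubble's key is not present in the viewer's JSON at all, as opposed to being present with an empty value:

```typescript
// Sketch: a policy-gated bubble. Non-admins do not get an empty all_people
// array -- the key itself is absent from their view.
interface Person { account: string; isAdmin: boolean; }

function viewFor(people: Person[], who: string): Record<string, unknown> {
  const me = people.find(p => p.account === who);
  const view: Record<string, unknown> = {};
  if (me?.isAdmin) view.all_people = people; // only added when the policy passes
  return view;
}

const staff: Person[] = [
  { account: "alice", isAdmin: true },
  { account: "bob",   isAdmin: false },
];
```

An absent key leaks less than an empty one: a non-admin cannot even tell that an `all_people` bubble exists.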
Privacy enforcement extends beyond fields and records to the API surface itself. Channels can declare access requirements using requires:
policy is_admin {
  return (iterate _admins where account == @who).size() > 0;
}

channel<requires<is_admin>> reset_scores(ResetMsg msg) {
  (iterate _scores).delete();
}
The requires<policy_name> guard rejects messages from unauthorized principals before the handler body executes. The check is declarative -- you cannot accidentally forget it because it is part of the channel's type signature. This is compile-time privacy applied to the write path: just as viewer_is controls what data flows out, requires controls what actions are allowed in.
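The shape of that guard can be sketched as a wrapper that runs the policy before the handler ever executes (illustrative API, not Adama's):

```typescript
// Sketch: a declarative guard on the write path. The policy runs before
// the handler body; unauthorized messages never reach it.
type Handler = (who: string, msg: unknown) => void;
type Policy = (who: string) => boolean;

function channel(requires: Policy, handler: Handler): Handler {
  return (who, msg) => {
    if (!requires(who)) throw new Error("unauthorized"); // rejected up front
    handler(who, msg);
  };
}

const admins = new Set(["alice"]);
let resets = 0;
const resetScores = channel(who => admins.has(who), () => { resets += 1; });
```

The difference from scattered `if` checks is that the guard is part of the channel's construction: there is no code path to the handler that bypasses it.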
Compile-time privacy is not free. The privacy evaluation runs per-viewer during every delta computation. For N connected viewers, the server evaluates N sets of privacy policies on every state change. This is O(N) work per update, and for documents with complex policies and many viewers, it becomes the bottleneck.
The privacy model also constrains how you can write code. You cannot use a formula to compute a value from private data and expose it publicly, even if the computation is safe (like counting private records). The compiler does not reason about information flow at that level of granularity -- it is conservative.
Finally, the require keyword can make debugging confusing. If a record is invisible to a viewer, there is no indication that the record exists. When things seem missing, you have to check whether a require policy is filtering them out.
These costs are real. But compare them to the alternative: scattered runtime checks, one missed if statement away from a data breach, and no sleep at night wondering what you forgot.
I will take the constraints.