The typical backend architecture follows a familiar pattern. A web server connects to a database. Traffic grows, so you add Redis for caching. You need async processing, so you add Kafka. You need coordination, so you add distributed locks.
Each layer solves a specific problem but introduces new ones. Cache invalidation bugs appear in production. Messages get lost in queues. Deadlocks take down the entire system. You end up coordinating transactions across services, debugging race conditions, and managing data consistency across multiple systems.
## Every Solution Creates A Problem
As you add components to your architecture to solve these scaling problems, you in turn create new problems for yourself:
- **Caching**: brings cache invalidation bugs, stale data, and thundering herd problems.
- **Message queues**: bring message ordering issues, exactly-once delivery problems, and dead letter queue monitoring.
- **Distributed locks**: bring deadlocks, lock timeouts, and split-brain scenarios.
- **Multiple services**: bring distributed transactions, eventual consistency, and network partition handling.
These problems compound. A cache invalidation bug combined with a race condition can cause data corruption that's nearly impossible to reproduce or debug.
Worse, these are bugs you can't unit test for. They're emergent behaviors that only appear under load when it matters most.
Yet it's accepted as _the way things must be_. Nobody got fired for adding Kafka, Redis, and RabbitMQ to the stack. The fact that each one brings its own failure modes is assumed to be the growing pains of any successful business.
<Image src={oldArch} alt="Traditional backend architecture with separate layers for web server, cache, database, and message queue" />
<Separator />
## How We Got Here
Look back at the very first thing you did when starting your application: setting up a web server and a database. The way you've designed your app follows an **age-old practice of "separating state and compute."**
We've been doing it this way since the 1980s, when client-server architecture put databases on their own machines.
This came from the fact that computers were slow and had limited resources. Running application code and database operations on the same machine meant they'd fight over CPU and memory, making both perform poorly. Separating them protected databases from compute overhead.
This tradeoff made sense when CPU and memory were severely limited, but the pattern outlived its purpose. As traffic grew, we added caching layers, message queues, and distributed locks — each solving a problem created by the last, without ever questioning the original assumption.
## 40 Years Later
Those constraints from forty years ago no longer apply to today's servers. Modern CPUs are orders of magnitude faster, and memory is abundant and cheap. **Application bottlenecks have shifted from local compute to network latency and locks.**
This is best demonstrated with a simple comparison: a real-world Postgres query over the LAN takes 1-10ms. The same query on a local SQLite database running in the same process as your application takes 0.01-0.1ms, **roughly 100x faster**. (These benchmarks are heavily workload-dependent; this is a conservative performance number for SQLite.)
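To see how this compounds over a real request, consider a page load that issues 20 sequential queries. The per-query latencies below are illustrative assumptions taken from the ranges above, not measurements:

```typescript
// Back-of-envelope comparison: one page load issuing 20 sequential queries.
// Latencies are illustrative assumptions, not benchmark results.
const QUERIES_PER_REQUEST = 20;

const networkedPostgresMs = 5; // assumed ~1-10ms per query over the LAN
const inProcessSqliteMs = 0.05; // assumed ~0.01-0.1ms per in-process query

const postgresTotal = QUERIES_PER_REQUEST * networkedPostgresMs; // 100ms
const sqliteTotal = QUERIES_PER_REQUEST * inProcessSqliteMs; // ~1ms

// The gap is pure network round trips; it grows linearly with query count.
console.log(`networked: ${postgresTotal}ms, in-process: ${sqliteTotal}ms`);
```

Under these assumptions, the networked architecture spends roughly 99ms of a 100ms request waiting on the wire.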
That 100x difference is not about switching to a marginally different database; it's about rethinking your architecture for modern computers by eliminating the centralized database entirely in favor of databases colocated with your compute. **Combining compute and state removes the biggest sources of latency in modern applications.**
<Separator />
## The Actor Model: Combining Compute and State
Actors take the completely opposite approach to "separating compute and state": they **merge state and compute together**.
Each actor's **state is isolated to itself** and cannot be read by any other actors. Instead, you communicate with actors over the network via actions.
They're like mini-servers: they can accept and respond to network requests and even send network requests themselves. They remain running as a long-lived process with in-memory state until they decide to go to sleep.
In addition to performance and complexity benefits, this architecture **eliminates entire categories of bugs by design.** No network to the database means no network partitions. No shared state means no race conditions. No locks means no deadlocks.
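The shape of an actor can be sketched in plain TypeScript. This is a minimal illustration of the idea, not any particular framework's API; the class and method names are made up:

```typescript
// Minimal sketch of an actor: state and compute live in one long-lived object.
// Illustrative only; real actor runtimes add persistence, routing, and mailboxes.
class CounterActor {
  // In-memory state, private to this actor. No database round trip needed.
  private count = 0;

  // "Actions" are the only way the outside world interacts with the state.
  increment(by: number): number {
    this.count += by; // compute runs right next to the state it touches
    return this.count;
  }

  current(): number {
    return this.count;
  }
}

const counter = new CounterActor();
counter.increment(1);
counter.increment(2);
console.log(counter.current()); // 3
```

Note that reading the count is a plain in-memory access: there is no cache to invalidate and no query to wait on.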
<Image src={actorArch} alt="Actor architecture showing compute and state combined in isolated actors" className="max-h-[500px]" />
## The 4 Properties That Eliminate Complexity
By combining compute and state, actors present a few key properties that eliminate entire categories of problems. These properties are the core of the design patterns that we'll discuss in further articles.
### Isolated State
This eliminates race conditions (they can't happen when only one process touches the state).
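To make that concrete, here's a self-contained demonstration of the bug class isolated state removes. The same read-modify-write logic loses updates when state is shared, but not when access is serialized the way an actor runtime serializes it (illustrative TypeScript, not a framework API):

```typescript
const sleep = () => new Promise<void>((resolve) => setTimeout(resolve, 0));

// 1) Shared state with a non-atomic read-modify-write: concurrent writers
//    all read the same initial value, so 99 of the 100 updates are lost.
let shared = 0;
async function racyIncrement() {
  const seen = shared; // read
  await sleep(); // something async happens in between
  shared = seen + 1; // write back a now-stale value
}
await Promise.all(Array.from({ length: 100 }, racyIncrement));
console.log(shared); // 1, not 100

// 2) An actor-style wrapper serializes access: each message chains onto a
//    queue, so handlers never interleave even though they're async.
class SerializedCounter {
  private count = 0;
  private queue: Promise<void> = Promise.resolve();

  increment(): Promise<void> {
    this.queue = this.queue.then(async () => {
      const seen = this.count;
      await sleep();
      this.count = seen + 1;
    });
    return this.queue;
  }

  get value() {
    return this.count;
  }
}

const actor = new SerializedCounter();
await Promise.all(Array.from({ length: 100 }, () => actor.increment()));
console.log(actor.value); // 100: same logic, no lost updates
```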
Debugging becomes straightforward: the actor's state is the single source of truth. There's no need to reconstruct state from multiple systems or reason about eventual consistency across caches, databases, and message queues.
As your app grows, new features affect a limited number of actors which have a limited scope. Changes don't ripple through shared state across services or risk breaking unrelated parts of your system.
<Image src={isolatedState} alt="Diagram showing actors with isolated state that cannot be accessed by other processes" className="max-h-[500px]" />
### Message-Based Communication
Actors **talk through actions and events**, not direct state access. This makes it easier to scale actors since they can scale horizontally across multiple machines and still communicate efficiently.
Messages sent to actors are **automatically queued and processed sequentially**. This almost always eliminates the need for external message queues since backpressure, ordering, and delivery are handled by the actor runtime itself.
Crucially, **actors frequently talk to each other** to build larger systems that scale well. We'll be talking a lot about patterns like this in this course.
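A sketch of what that communication looks like, with a hypothetical `Actor` wrapper whose mailbox drains one message at a time (illustrative only; a real runtime handles this for you):

```typescript
// Sketch of message-based communication: actors never touch each other's
// state directly; they enqueue messages that are processed in order.
type Message = { action: string; payload?: unknown };

class Actor {
  private mailbox: Promise<void> = Promise.resolve();
  constructor(private handler: (msg: Message) => Promise<void> | void) {}

  // send() just appends to the mailbox; the handler runs sequentially.
  send(msg: Message): Promise<void> {
    this.mailbox = this.mailbox.then(() => this.handler(msg));
    return this.mailbox;
  }
}

const auditLog: string[] = [];

// A "logger" actor that other actors talk to via messages.
const logger = new Actor((msg) => {
  auditLog.push(msg.action);
});

// An "orders" actor that reacts to a message by messaging another actor.
// This is how small actors compose into larger systems.
const orders = new Actor(async (msg) => {
  if (msg.action === "order.created") {
    await logger.send({ action: "audit.recorded" });
  }
});

await orders.send({ action: "order.created" });
await orders.send({ action: "order.created" });
console.log(auditLog); // ["audit.recorded", "audit.recorded"]
```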
<Image src={messagePassing} alt="Diagram showing actors communicating through messages and events" className="max-h-[500px]" />
Load spreads naturally since actors are small, lightweight units. No complex sharding logic required.
## Putting It All Together: A Radically Simpler Architecture
When you build your backend with actors, the four properties listed remove the need for:
- **Redis/Memcached**: Caching is built-in (state already lives in memory with compute).
- **Kafka/RabbitMQ/SQS**: Message queueing, events, and async messaging are built into the actor runtime.
- **NATS/Redis Streams**: Pub/sub is built into actors through message passing and events.
- **Consul/etcd/ZooKeeper**: No distributed coordination needed; actors encapsulate their own state, and the runtime handles discovery and routing automatically.
- **Istio/Linkerd**: Actors handle routing and discovery automatically.
- **Database sharding**: Actors distribute themselves automatically. No shard keys, no rebalancing logic, no cross-shard queries.
New features are new actors or modifications to individual actors, not changes that cascade through your entire system.
## If Actors Are So Great, Why Aren't They Everywhere?
If you've reached this point and are unfamiliar with the actor model, you're probably asking this exact question. It all sounds a little _too_ rosy.
The truth is that actors _are_ used widely — just not visibly. Large enterprises with engineers who've spent years wrestling with traditional architectures have long since adopted them. The pattern has proven itself at massive scale:
- WhatsApp (famously acquired for $19B while running Erlang/OTP with only 35 engineers)
So why hasn't the actor model spread to smaller teams and mainstream development?
This mirrors TypeScript's trajectory. It started as a niche tool for large codebases — most developers dismissed it as unnecessary overhead with poor tooling. But as more developers felt the pain of loose typing at scale, adoption grew. Today, TypeScript is a non-negotiable for many teams because of that collective suffering.
Actors are on the same trajectory. The pain of distributed systems complexity is becoming impossible to ignore.
Other ecosystems have had mature actor frameworks for years — Erlang has OTP, Java has Akka, C# has Microsoft Orleans. But TypeScript has been the missing piece until recently with:
- **Rivet Actors**: Open-source actor infrastructure for TypeScript