
Chapter 12

Scalability and Architectural Patterns

Pages 475-539 · Tags: nodejs, scalability, cluster, load-balancing, docker


Summary

In its early days, Node.js was just a non-blocking web server, originally called web.js. Its creator, Ryan Dahl, later extended it into a platform for building distributed applications. This chapter explores why Node.js was born to be distributed and how to scale it.

The chapter introduces the scale cube (three orthogonal dimensions of scaling) and dives into each: X-axis cloning with the cluster module and reverse proxies, Y-axis decomposition with microservices, and Z-axis data partitioning. It covers the practical mechanics of the cluster module (round-robin, resiliency, zero-downtime restart), stateful communications, dynamic service discovery, containerization with Docker/Kubernetes, and microservice integration patterns.

The central insight: Node.js's single-threaded model means you scale by running multiple processes or instances, not by adding threads. But scaling introduces new problems -- shared state, service discovery, deployment orchestration -- each requiring deliberate architectural decisions.

Key Concepts

The Scale Cube

From The Art of Scalability by Martin L. Abbott and Michael T. Fisher:

ℹ️Info

Three Dimensions of Scaling

  1. X-axis (Cloning) -- Run N identical copies, each handles 1/Nth of load
  2. Y-axis (Decomposition) -- Split by service/functionality (microservices)
  3. Z-axis (Splitting) -- Partition by data (sharding by user, region, etc.)
            Y-axis: Decompose
            (split by service/functionality)
              ^
              |          Cloned, decomposed,
              |        / and partitioned application
              |       /
              |      /  Z-axis: Split by data
              |     /
              |    /
              |   /
              |  /
              | /
              |/
  Monolith,   +----------------------------> X-axis: Cloning
  single instance

The bottom-left corner represents a monolithic application -- all functionality in one codebase, one instance. Most real systems use combinations of all three axes.

πŸ’‘Tip

Vertical = Horizontal in Node.js

The book notes that in Node.js, vertical scaling (more resources on one machine) and horizontal scaling (more machines) are almost equivalent: both involve running multiple instances. Being forced to scale early gives you redundancy and fault tolerance as side effects.

The Cluster Module

The cluster module is the simplest way to scale on a single machine.

import { createServer } from 'http'
import { cpus } from 'os'
import cluster from 'cluster'

if (cluster.isMaster) {                                 // (1)
  const availableCpus = cpus()
  console.log(`Clustering to ${availableCpus.length} processes`)
  availableCpus.forEach(() => cluster.fork())
} else {                                                // (2)
  const { pid } = process
  const server = createServer((req, res) => {
    let i = 1e7; while (i > 0) { i-- }                 // CPU work simulation
    console.log(`Handling request from ${pid}`)
    res.end(`Hello from ${pid}\n`)
  })
  server.listen(8080, () => console.log(`Started at ${pid}`))
}

Key behaviors:

  • Master (primary) process: cluster.isMaster is true (cluster.isPrimary is the modern alias), forks workers via cluster.fork()
  • Workers: cluster.isWorker is true; each runs the same file with its own V8 instance, event loop, and memory
  • Under the hood, cluster.fork() uses child_process.fork()
  • Workers share the same port through the master (every server.listen() in a worker delegates to the master)
  • Communication channel available via cluster.workers object
  • Benchmark: ~300 trans/sec single process -> ~1,000 trans/sec with 4 workers (3.3x)
ℹ️Info

Round-Robin Scheduling

On all platforms except Windows, the cluster module uses explicit round-robin scheduling (cluster.SCHED_RR): the master accepts connections and distributes them across workers. This avoids the thundering herd problem. The policy is configurable via cluster.schedulingPolicy.
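The policy can be switched in code. A minimal sketch (the assignment must happen before the first cluster.fork() call):

```javascript
import cluster from 'cluster'

// Explicit round-robin: the master accepts connections and hands
// them out to workers in turn (the default everywhere but Windows).
cluster.schedulingPolicy = cluster.SCHED_RR

// Alternatively, leave distribution to the operating system:
// cluster.schedulingPolicy = cluster.SCHED_NONE
// The same switch is exposed via the NODE_CLUSTER_SCHED_POLICY
// environment variable ('rr' or 'none').
```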

Resiliency and Availability

The book demonstrates resiliency by making workers crash randomly:

} else {
  // Inside worker block
  setTimeout(
    () => { throw new Error('Ooops') },
    Math.ceil(Math.random() * 3) * 1000
  )
  // ... start server
}

The master auto-restarts crashed workers:

cluster.on('exit', (worker, code) => {
  if (code !== 0 && !worker.exitedAfterDisconnect) {
    console.log(`Worker ${worker.process.pid} crashed. Starting a new worker`)
    cluster.fork()
  }
})
ℹ️Info

Benchmark Result

Despite constant crashing, load testing with autocannon showed 8k requests in 10.07s with 674 errors (7 timeouts): approximately 92% availability. The exitedAfterDisconnect flag distinguishes intentional shutdowns (true) from crashes (false).

Zero-Downtime Restart

Triggered by the SIGUSR2 signal, workers are restarted one at a time:

import cluster from 'cluster'
import { once } from 'events'

process.on('SIGUSR2', async () => {                      // (1)
  const workers = Object.values(cluster.workers)
  for (const worker of workers) {                        // (2)
    console.log(`Stopping worker: ${worker.process.pid}`)
    worker.disconnect()                                  // (3)
    await once(worker, 'exit')
    if (!worker.exitedAfterDisconnect) continue
    const newWorker = cluster.fork()                     // (4)
    await once(newWorker, 'listening')                   // (5)
  }
})

worker.disconnect() stops the worker gracefully -- it finishes in-flight requests before exiting. The new worker must be 'listening' before proceeding to the next.

Stateful Communications

  Strategy                 How it works                                   Tradeoff
  Shared state store       Redis/DB for sessions                          Extra infrastructure, network latency
  Sticky load balancing    Same client -> same worker (cookie/IP hash)    Uneven load, session loss on worker death
πŸ’‘Tip

Best Practice

Prefer shared state stores over sticky sessions. Sticky sessions re-couple a client to a specific instance, reducing the benefits of horizontal scaling and complicating failover.
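If sticky sessions are unavoidable, IP-hash stickiness reduces to a pure function. This is an illustrative sketch (not from the book; the backend list is hypothetical):

```javascript
import { createHash } from 'crypto'

const backends = ['127.0.0.1:3001', '127.0.0.1:3002', '127.0.0.1:3003']

// Hash the client IP and map it onto the backend list, so the same
// client is always routed to the same worker.
function pickBackend (clientIp) {
  const digest = createHash('md5').update(clientIp).digest()
  return backends[digest.readUInt32BE(0) % backends.length]
}
```

The tradeoff from the table is visible here: if a backend dies, every client that hashes to it loses its session, and the remaining load is not rebalanced evenly.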

Reverse Proxy with Nginx

upstream nodejs_app {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    server 127.0.0.1:3004;
}

server {
    listen 80;
    location / {
        proxy_pass http://nodejs_app;
    }
}

Nginx advantages: multi-machine load balancing, SSL termination, static file serving (don't waste the event loop), compression, caching, URL rewriting.

Dynamic Horizontal Scaling

Static configurations don't work when instances scale dynamically. Solution: service registry.

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”   register/deregister   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Service  β”‚ ──────────────────────> β”‚ Service Registry  β”‚
β”‚ Instance β”‚                         β”‚ (Consul / etcd)   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                            ^
                                            β”‚ discover
                                     β”Œβ”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”
                                     β”‚ Load Balancerβ”‚
                                     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The book builds a dynamic load balancer with http-proxy and Consul:

  • Instances register on startup, deregister on shutdown
  • Health checks automatically remove unhealthy instances
  • New instances are automatically discovered
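Registration with Consul boils down to sending a JSON document to the agent's HTTP API. A sketch, assuming a hypothetical buildRegistration helper (the endpoint PUT /v1/agent/service/register and the field names are Consul's; the health-check path is an assumption):

```javascript
// Hypothetical helper that builds the body for Consul's
// service-registration endpoint (PUT /v1/agent/service/register).
function buildRegistration ({ name, address, port }) {
  return {
    ID: `${name}-${address}-${port}`, // unique per instance
    Name: name,                       // service name used for discovery
    Address: address,
    Port: port,
    // Consul polls this URL and drops the instance if the check fails,
    // which is how unhealthy instances are removed automatically.
    Check: {
      HTTP: `http://${address}:${port}/health`,
      Interval: '10s'
    }
  }
}
```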

Peer-to-peer load balancing: each client queries the registry and distributes requests itself. No SPOF, no bottleneck, fewer hops -- but every client needs load-balancing logic.
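The client-side logic that peer-to-peer balancing requires can be as small as a round-robin cursor. A minimal sketch (in a real setup the instance list would be refreshed from the registry rather than hard-coded):

```javascript
// Minimal client-side round-robin balancer.
function createBalancer (instances) {
  let next = 0
  return {
    pick () {
      const instance = instances[next % instances.length]
      next++
      return instance
    }
  }
}

const balancer = createBalancer([
  'http://10.0.0.1:8080',
  'http://10.0.0.2:8080'
])
// Each pick() cycles to the next instance, spreading requests evenly.
```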

Scaling with Containers

Docker packages an application with dependencies:

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]

Kubernetes orchestrates containers across a cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
        - name: app
          image: myapp:1.0.0
          ports:
            - containerPort: 3000

Kubernetes provides: Pods (deployable unit), Deployments (desired state), Services (stable endpoint), ReplicaSets (N replicas), self-healing, rolling updates, auto-scaling.

Decomposing Applications: Monolith vs Microservices

  Aspect        Monolith                     Microservices
  Deployment    Single unit                  Independent per service
  Scaling       Scale everything together    Scale each service independently
  Development   Simpler initially            Teams own individual services
  Debugging     Stack traces, local state    Distributed tracing needed
  Data          Shared database              Database per service (ideally)
  Complexity    In the code                  In the infrastructure
⚠️Warning

Microservices Are Not Free

Microservices trade code complexity for operational complexity. You need service discovery, distributed tracing, circuit breakers, eventual consistency, and sophisticated deployment pipelines. Don't decompose prematurely: start with a well-structured monolith.

Integration Patterns for Microservices

1. API Proxy (Gateway): Routes by URL path. Handles cross-cutting concerns.

2. Orchestration: Central coordinator calls services in order, aggregates results. Risk: the orchestrator becomes a coupling point.

3. Message Broker (Async): Services communicate via RabbitMQ/Kafka. Decoupling, resilience (messages persist if consumer is down), natural load leveling.
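The decoupling a broker provides can be sketched in-process, with an EventEmitter standing in for RabbitMQ/Kafka (illustrative only; service names are hypothetical):

```javascript
import { EventEmitter } from 'events'

// In-process stand-in for a message broker: producer and consumer
// share only the channel name ('orders'), never a direct reference
// to each other.
const broker = new EventEmitter()
const fulfilled = []

// Consumer side (e.g. a fulfillment microservice)
broker.on('orders', order => fulfilled.push(order.id))

// Producer side (e.g. the checkout microservice)
broker.emit('orders', { id: 42 })
```

Unlike a real broker, an EventEmitter offers no persistence: a message emitted while the consumer is down is simply lost, which is exactly the resilience gap RabbitMQ/Kafka fill.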

Connections

  • Previous: Chapter 11 -- Advanced Recipes (worker threads for CPU-bound work)
  • The cluster module uses child_process.fork() -- same API from Ch11's CPU-bound strategies
  • Message brokers connect to Chapter 13 -- Messaging and Integration Patterns
  • Microservice decomposition applies Y-axis scaling from the scale cube
