Architecture: How Coder Works

Your developers are writing code on laptops. Powerful cloud machines sit idle in your data centre. Your team's local environments diverge slowly until "works on my machine" becomes the default explanation for every bug. That's the problem Coder was built to solve.

Coder is a self-hosted Cloud Development Environment (CDE) platform. It moves development off local machines and onto cloud infrastructure you control, giving developers consistent, powerful remote workspaces accessible from any device. This post walks through how Coder is architected — what the components are, how they fit together, and why it was designed this way.


What Coder is (and isn't)

Before diving into the architecture it's worth being precise about what Coder actually is, because it's easy to confuse with adjacent tools.

  • Coder is a platform for provisioning and managing remote development workspaces on your own infrastructure.
  • Coder is not an online IDE. It supports VS Code, JetBrains, vim, and others — all over HTTPS or SSH — but it doesn't provide the editor itself.
  • Coder is not an Infrastructure as Code platform. It uses Terraform as its provisioning engine, but it doesn't replace Terraform.
  • Coder is not a CI/CD platform. Workspaces are for development, not for running pipelines.
  • Coder is not GitHub Codespaces or Gitpod. Those are SaaS offerings that run on vendor infrastructure. Coder runs on your infrastructure.

That last point is the architectural north star: data sovereignty and control. Everything runs inside your own cloud account or on-premises environment. Your source code never leaves your perimeter.


The big picture

A Coder deployment has three conceptual layers: a control plane that manages everything, provisioners that create and destroy workspaces, and the workspaces themselves where developers actually work.

(Diagram: the three layers. A developer's browser, SSH client, or IDE talks to the control plane (coderd: API + dashboard UI, provisioner daemons, PostgreSQL); Terraform provisions the workspaces (a VM or container running the Coder agent, dev tools, and IDE).)

coderd: the control plane

coderd is the heart of Coder: a single Go binary that carries several responsibilities at once. It exposes the REST API that everything talks to, serves the web dashboard UI, manages all persistent state in PostgreSQL, and by default also runs built-in Terraform provisioner daemons.

In production, coderd runs as multiple replicas behind a load balancer for high availability. Each replica is stateless in terms of request handling — all persistent state lives in PostgreSQL. Coder recommends at least one replica per availability zone. For smaller deployments a single replica is fine; the guidance is to scale up past one hundred users or workspaces.

Every user interaction — creating a workspace, viewing the dashboard, running a CLI command — hits the coderd API. There is no other entry point into the system.

(Diagram: a load balancer terminates HTTPS/SSH and spreads traffic across coderd replicas, each running 3 provisioner daemons; all replicas share state through PostgreSQL.)

Templates: workspaces defined as code

In Coder, a template is a Terraform configuration that describes a workspace. It specifies what infrastructure to provision — a Kubernetes pod, an AWS EC2 instance, a Docker container, or a combination — along with the tools, dependencies, and IDE integrations that workspace should have.

Templates are maintained by template administrators, versioned in source control, and published to coderd. When a developer creates a workspace they pick a template, and Coder runs the Terraform to provision the underlying infrastructure automatically. The developer never touches the infrastructure layer.

This is a powerful separation: platform teams own the templates (and therefore the infrastructure standards, security posture, and tooling), while developers just consume workspaces. Configuration drift — the slow divergence of one developer's machine from another's — is eliminated because every workspace of a given type is provisioned from the same Terraform definition.

Templates can also define:

  • Autostart — workspaces spin up automatically at a scheduled time.
  • Autostop — idle workspaces are stopped automatically, reducing cloud costs.
  • TTL (time-to-live) — workspaces are deleted after a defined period.
  • Parameters — developers can customise their workspace at creation time (choose region, instance size, etc.) within bounds the template allows.
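
As a concrete sketch, a minimal Docker-based template could look like the following. This is illustrative, not a production template: the resource names, the codercom/enterprise-base image, and the startup script are assumptions, and a real template would add IDE apps, persistent volumes, and richer tooling.

```hcl
terraform {
  required_providers {
    coder  = { source = "coder/coder" }
    docker = { source = "kreuzwerker/docker" }
  }
}

# Metadata about the workspace being built (owner, name, running state).
data "coder_workspace" "me" {}

# A parameter the developer chooses at workspace creation time.
data "coder_parameter" "cpu" {
  name    = "cpu"
  type    = "number"
  default = 2
  mutable = true
}

# The agent that runs inside the container and calls home to coderd.
resource "coder_agent" "main" {
  os             = "linux"
  arch           = "amd64"
  startup_script = "git config --global core.editor vim"
}

# The computational resource: exists only while the workspace is started.
resource "docker_container" "workspace" {
  count   = data.coder_workspace.me.start_count
  image   = "codercom/enterprise-base:ubuntu"
  name    = "coder-${data.coder_workspace.me.name}"
  # Boot the agent, handing it its auth token via the environment.
  command = ["sh", "-c", coder_agent.main.init_script]
  env     = ["CODER_AGENT_TOKEN=${coder_agent.main.token}"]
}
```

Because the container is keyed to start_count, stopping the workspace destroys the container while peripheral resources (volumes, for instance) can persist across restarts.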

Provisioners: how workspaces get built

When a developer creates or starts a workspace, coderd queues a provisioning job. A provisioner daemon picks up the job, runs the Terraform defined in the template, and creates (or destroys) the underlying infrastructure.

By default, coderd runs three built-in provisioner daemons per replica. For most small and medium deployments this is sufficient. But Coder also supports external provisioners — provisioner daemons that run as separate processes or pods, connected to coderd over a secure channel.

External provisioners are recommended for production for two key reasons:

  • Security isolation — Terraform has access to your cloud credentials and APIs during a workspace build. Running provisioners in isolated containers means a malicious or misconfigured template can't gain shell access to the coderd host.
  • Infrastructure isolation — you can deploy provisioners in specific environments (on-prem, AWS, Azure) so they have direct access to the APIs they need to provision, rather than exposing those APIs to coderd itself.

(Diagram: a developer creates a workspace; coderd queues a build job; a provisioner runs terraform apply against the target platform (AWS EC2/EKS, GCP GKE/GCE, Kubernetes pods).)

Provisioner daemons use a tag system to route build jobs. You can tag a provisioner with environment=production, and a template targeting that tag will only be built by provisioners that carry it. This lets you route sensitive workspace builds to dedicated, hardened provisioner environments.
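
On the template side, the coder Terraform provider lets a template declare the tags its builds require. The coder_workspace_tags data source below is an assumption about your provider version; check the provider documentation before relying on it.

```hcl
# Builds of this template are only picked up by provisioner daemons
# that were started carrying the matching environment=production tag.
data "coder_workspace_tags" "routing" {
  tags = {
    environment = "production"
  }
}
```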


Workspaces and the Coder agent

A workspace in Coder is whatever Terraform provisions: a Kubernetes pod, a VM, a Docker container, a cloud instance. The resource the developer actually works inside is called the computational resource; any other resources defined in the Terraform (storage buckets, secrets, databases) are peripheral resources.

Inside every computational resource, Coder runs a lightweight process called the Coder agent. The agent is the bridge between the workspace and the coderd control plane. It is responsible for:

  • Establishing and maintaining a secure connection back to coderd (outbound — the workspace calls home, not the other way around)
  • Exposing SSH access so developers can connect with their local IDE
  • Forwarding port traffic so web applications running in the workspace are accessible via the browser
  • Running startup scripts defined in the template
  • Reporting workspace health and resource usage metrics back to coderd

The outbound connection model is important. Because the agent initiates the connection to coderd, workspaces don't need a public IP address or any inbound firewall rules. They can live deep inside a private VPC and still be reachable by developers.
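
In a template, these agent responsibilities are declared alongside the agent itself. The sketch below wires a startup script to a browser-accessible app; the code-server install command and port 13337 are illustrative assumptions, not fixed values.

```hcl
resource "coder_agent" "main" {
  os   = "linux"
  arch = "amd64"
  # Runs inside the workspace once the agent has connected to coderd.
  startup_script = <<-EOT
    curl -fsSL https://code-server.dev/install.sh | sh
    code-server --auth none --port 13337 &
  EOT
}

# Exposes code-server through coderd's tunnel: the developer reaches it
# in the browser with no inbound ports open on the workspace.
resource "coder_app" "code_server" {
  agent_id     = coder_agent.main.id
  slug         = "code-server"
  display_name = "code-server"
  url          = "http://localhost:13337"
  subdomain    = false
}
```

The coder_app resource is what turns "a process listening on localhost inside the workspace" into a link on the dashboard, carried over the same outbound connection the agent already holds.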

(Diagram: the Coder agent inside a workspace in a private VPC, with no inbound ports required, opens an outbound connection to coderd; the developer reaches the workspace over SSH or the browser, via a DERP relay or a direct connection.)

Networking: DERP relays and direct connections

Developers connect to their workspaces either directly or through a relay. Coder uses the same DERP (Designated Encrypted Relay for Packets) protocol as Tailscale for relayed connections. When a direct connection isn't possible — because the developer is behind a restrictive firewall, or the workspace is in a deeply private network — traffic is relayed through coderd's built-in DERP server.

Direct connections are preferred whenever possible, as they have lower latency. The agent and the developer's client automatically negotiate which path to use, via a protocol similar to WebRTC's ICE.

For globally distributed teams, Coder supports workspace proxies. A workspace proxy is a relay deployed in a region close to the developers who work there. Instead of traffic from a developer in Sydney travelling all the way to a coderd instance in the US, it hits the Sydney proxy, which relays it to the workspace. This dramatically reduces latency for SSH connections, port forwarding, and workspace app access.

Importantly, workspace proxies only handle workspace traffic — SSH, port forwarding, terminal. They do not handle API calls, dashboard connections, or database access. Those always go directly to coderd.

(Diagram: coderd in the US hosts the control plane and DERP; workspace proxies in Frankfurt (EU) and Sydney (APAC) give nearby developers low-latency paths to workspaces in any region.)

Access control: RBAC and SSO

Coder has a built-in Role-Based Access Control (RBAC) system. The default roles are:

  • Owner — full administrative access to everything.
  • Template Admin — can create and manage templates. The team responsible for defining workspace standards.
  • User Admin — can manage users and groups.
  • Member — can create and use workspaces within the templates available to them.
  • Auditor — read-only access to audit logs.

Organizations (Coder's multi-tenancy unit) allow you to create separate groups of users with their own templates, quotas, and access policies — useful for large enterprises where different business units need isolation from each other.

Coder integrates with external identity providers via OIDC/SSO — Google Workspace, GitHub, Okta, Active Directory, or any other OIDC-compatible provider. Group membership can be synced from the identity provider so that your existing organisational structure maps directly to Coder access policies.


The database: PostgreSQL

All of Coder's persistent state lives in PostgreSQL. This includes users, workspaces, templates, template versions, build logs, audit logs, provisioner job queues, and workspace proxy registrations.

In production Coder recommends a managed PostgreSQL service (AWS RDS, Google Cloud SQL, Azure Database for PostgreSQL) with high availability enabled. The database is the single point of shared state across coderd replicas — losing it without a backup means losing the cluster configuration, though not the workspaces themselves (those are real infrastructure resources that exist independently).

Coder also supports database encryption at rest for sensitive fields. Enabling it adds a small CPU overhead on each coderd replica and requires an additional CPU core on the database instance.


Where you can run Coder

Because Coder is self-hosted, you have full flexibility over where it runs. The control plane (coderd) and the workspaces it manages can live on entirely different infrastructure — and frequently do.

Local / Docker (development and evaluation)

The fastest way to try Coder is with Docker. A single docker run command starts a self-contained coderd instance with an embedded PostgreSQL database. This is not recommended for production — there's no HA, no external database, and workspace data is ephemeral — but it's the right starting point to explore the platform in minutes. Coder also ships a Docker Compose file for slightly more persistent local setups.

Kubernetes (recommended for production)

Kubernetes is Coder's recommended and primary production target. Coder ships a Helm chart that deploys coderd as a Kubernetes Deployment, exposes it via a LoadBalancer Service or Ingress, and manages configuration through Kubernetes Secrets and environment variables. Kubernetes gives you pod restarts, rolling updates, horizontal scaling of coderd replicas, and native integration with cloud load balancers on EKS, GKE, and AKS.

Coder recommends deploying into a dedicated cluster, separate from your production application workloads. Two node groups are suggested: one for the coderd control plane, and one for user workspaces (if workspaces themselves run as Kubernetes pods).

AWS

On AWS the typical production setup is coderd running on EKS, with Amazon RDS (PostgreSQL) as the database. AWS recommends a Network Load Balancer over the Classic Load Balancer — the Helm chart has a specific annotation for this. Coder is also available directly from the AWS Marketplace as a Community Edition listing, which simplifies initial provisioning. Workspaces can be provisioned as EC2 instances, EKS pods, or a mix of both — a Terraform template targets whichever AWS resource makes sense for the team.
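
As an illustration of the EC2 path, a template can boot the agent from the instance's user data. This is a hedged sketch: the AMI ID and instance type are placeholders, and a real template would also tie the instance lifecycle to the workspace's start/stop state.

```hcl
resource "coder_agent" "main" {
  os   = "linux"
  arch = "amd64"
}

resource "aws_instance" "workspace" {
  ami           = "ami-xxxxxxxx" # placeholder: a recent Ubuntu AMI
  instance_type = "t3.large"
  # The agent's init script is injected via cloud-init, so the instance
  # downloads the agent and dials coderd outbound on first boot.
  user_data = <<-EOT
    #!/bin/bash
    export CODER_AGENT_TOKEN=${coder_agent.main.token}
    ${coder_agent.main.init_script}
  EOT
}
```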

GCP

On GCP, coderd runs on GKE with Cloud SQL (PostgreSQL) as the managed database. GKE's managed node upgrades and auto-repair make it a low-ops option for the control plane. Workspaces can be GCE VMs, GKE pods, or Cloud Workstations, all provisioned via Terraform templates using the Google provider.

Azure

On Azure, coderd runs on AKS with Azure Database for PostgreSQL (Premium SSD, P-series) as the backend. Azure environments often require the Azure Application Gateway rather than a standard LoadBalancer Service, because the Application Gateway properly handles the WebSocket traffic that workspace connections rely on. Workspaces can be Azure VMs or AKS pods.

Bare metal / on-premises

Coder can be installed directly on any Linux machine using the CLI binary, without Kubernetes. This is the path for teams with on-premises infrastructure who aren't running a container orchestrator. You bring your own PostgreSQL instance, run coderd as a systemd service, and put a reverse proxy (nginx, Caddy) in front of it. Workspaces in this model are typically VMs provisioned via a VMware or libvirt Terraform provider, or Docker containers on a dedicated host.

This is also the foundation for the air-gapped deployment model described in the next section.

(Diagram: coderd runs anywhere: Docker for local evaluation; Kubernetes on EKS, GKE, or AKS; AWS EKS + RDS; GCP GKE + Cloud SQL; Azure AKS + PostgreSQL; or bare-metal Linux with systemd, including on-prem, air-gapped, and VMware environments. Workspaces target any Terraform-supported platform.)

An important nuance: where coderd runs and where workspaces run are independent choices. You can run coderd on Kubernetes in AWS, while provisioning workspaces as GCP VMs for developers who need access to GCP-specific services. The Terraform provisioner bridges them.


Deployment topology

Coder supports three deployment topologies suited to different scales:

Single region

A load balancer, multiple coderd replicas, and workspaces all in the same region. The right choice for most teams. Provides high availability without the complexity of multi-region coordination.

Multi-region

Coderd stays in one region (or is replicated across regions behind a global load balancer). Workspace proxies are deployed in regions where developers are located. Workspaces can be provisioned in any region. This is the topology for globally distributed engineering teams where round-trip latency to a single region would meaningfully hurt the developer experience.

Air-gapped

Coder can run entirely without internet access. This requires a self-hosted Terraform registry mirror, a self-hosted container registry, and an external PostgreSQL instance (since coderd can't download the embedded Postgres binaries in air-gapped mode). All update checks and telemetry are disabled. Direct workspace connections are disabled and all traffic is relayed through the control plane's DERP proxy. Used by government agencies and highly regulated industries.


Putting it all together

With all the components covered, here's how a complete Coder deployment looks — from the developer's browser down to the workspace running on whatever infrastructure you've chosen.

(Diagram: the full deployment. Developers (browser, SSH, CLI) and an identity provider (Okta, GitHub, AD) reach coderd replicas (API + dashboard, DERP relay, 3 built-in provisioner daemons each) through a load balancer and regional workspace proxies (EU, APAC). PostgreSQL (RDS, Cloud SQL, or self-hosted) holds all cluster state: users, workspaces, audit logs. Provisioners (built-in or external, tag-routed) run terraform apply from versioned templates with autostart, autostop, and TTL policies, creating workspaces (Kubernetes pods, cloud VMs, Docker containers, or bare-metal/VMware hosts), each running the Coder agent.)

How Coder compares to similar tools

It helps to understand where Coder sits relative to adjacent tools:

  • GitHub Codespaces — SaaS, runs on Microsoft's infrastructure, tightly coupled to GitHub. Coder is self-hosted and infrastructure-agnostic.
  • Gitpod — originally SaaS, now also self-hosted. Less infrastructure-flexible than Coder; Coder's Terraform-native model gives more control over what a workspace actually is.
  • DevContainers — a spec for defining a development environment in a container. Coder supports DevContainers as one way to define a workspace, but Coder is the platform that provisions and manages them.
  • Kubernetes + custom tooling — many teams build their own dev environment platform on Kubernetes. Coder is essentially that platform, pre-built, with authentication, RBAC, templating, audit logs, and workspace lifecycle management already handled.

Summary

Coder's architecture is built around a single insight: development environments are infrastructure, and infrastructure should be defined as code, provisioned automatically, and managed centrally.

The control plane (coderd) handles all orchestration. Provisioners run Terraform to create workspaces on whatever cloud or on-prem infrastructure you have. The Coder agent inside each workspace establishes a secure outbound tunnel back to the control plane, making the workspace accessible without exposing it to the internet. Workspace proxies bring the relay close to developers in remote regions. And PostgreSQL holds all state, making every coderd replica stateless and replaceable.

The result is a platform where spinning up a fully configured development environment takes seconds, every developer on a team works in an identical environment, and the company's source code never leaves its own infrastructure.


Related on this blog: Architecture: Kubernetes


