Welcome to the Coder Demo Environment's GitHub repository!
This project powers "coderdemo.io", a production-grade, multi-region demonstration environment showcasing Coder's cloud development capabilities, workspace proxies, and global deployment patterns.
> [!IMPORTANT]
> **This infrastructure is HEAVILY AWS-opinionated.**
>
> This repository uses AWS-specific services and patterns throughout (EKS, Aurora Serverless v2, VPC, Route53, ACM, etc.). While Coder itself is cloud-agnostic, this particular deployment is designed exclusively for AWS. If you're deploying on GCP, Azure, or other cloud providers, you'll need to significantly adapt the infrastructure code.
Get Started Here 👉 https://coderdemo.io
Login Flow
- Click "Sign in with GitHub"
- Authorize the Coder Demo GitHub App
- Start creating workspaces in your preferred region!
Available Regions:
- 🇺🇸 US East (Ohio) - Primary deployment with database
- 🇺🇸 US West (Oregon) - Secondary server + workspace proxy
- 🇪🇺 EU West (London) - Workspace proxy
> [!NOTE]
> This is a demo environment. For production Coder deployments, refer to the official Coder documentation.
This deployment implements a hub-and-spoke architecture across three AWS regions:
The primary region (us-east-2) contains foundational, non-repeatable infrastructure:
- Central Database: Aurora Serverless v2 PostgreSQL cluster (shared by all regions)
- Terraform Backend: S3 bucket and DynamoDB table for state management
- Container Registry: ECR for custom images
- Primary VPC: Custom VPC with peering to spoke regions
- Primary Coder Server: Main deployment handling authentication and control plane
- Additional Services: Redis, LiteLLM, and custom applications
Spoke regions provide repeatable regional infrastructure for workspace proxies:
- Workspace Proxies: Low-latency access to workspaces
- EKS Clusters: Regional Kubernetes clusters with Karpenter autoscaling
- Route53: Regional DNS records for proxy endpoints
- AWS ACM: Regional SSL/TLS certificates
```
┌─────────────────────────────────┐
│ us-east-2 (Primary Hub) │
│ │
│ ┌─────────────────────────┐ │
│ │ Coder Server │ │
│ │ Aurora Serverless v2 │ │
│ │ Redis / ECR │ │
│ └─────────────────────────┘ │
│ │
└────────────┬───────────────────┘
│
┌────────────┴────────────┐
│ │
┌──────────▼──────────┐ ┌─────────▼──────────┐
│ us-west-2 (Spoke) │ │ eu-west-2 (Spoke) │
│ │ │ │
│ ┌───────────────┐ │ │ ┌──────────────┐ │
│ │ Coder Proxy │ │ │ │ Coder Proxy │ │
│ │ Coder Server │ │ │ │ Workspaces │ │
│ │ Workspaces │ │ │ └──────────────┘ │
│ └───────────────┘ │ │ │
└─────────────────────┘  └────────────────────┘
```
For detailed architecture documentation, see the Multi-Region Deployment Guide and Infrastructure Best Practices listed under Resources below.
> [!WARNING]
> **Infrastructure Repeatability Notice**
>
> This environment is heavily opinionated towards AWS and uses a hub-and-spoke architecture:
>
> - `infra/aws/us-east-2` - Primary hub region with foundational infrastructure (database, Terraform backend, VPC, etc.). This is NOT repeatable; it's meant to be deployed once as your control plane.
> - `infra/aws/eu-west-2` - Clean spoke region example with workspace proxy only. This IS repeatable for adding new regions.
> - `infra/aws/us-west-2` - Hybrid spoke region with both server and proxy deployments. Use this as a reference for redundant server deployments.
>
> When deploying to new regions, use `eu-west-2` as your template for workspace proxies.
The infrastructure is deployed in layers:
- **Foundation Layer** (us-east-2 only; deploy once)
  - Terraform backend (S3 + DynamoDB)
  - VPC with custom networking
  - Aurora Serverless v2 PostgreSQL database
  - ECR for container images
  - Redis for caching
- **Compute Layer** (all regions)
  - EKS clusters with managed node groups
  - Karpenter for workspace autoscaling
  - VPC peering (spoke regions to hub)
- **Certificate & DNS Layer** (all regions)
  - AWS Certificate Manager (ACM) for SSL/TLS
  - Route53 for DNS management
  - Regional subdomains (e.g., `us-west-2.coderdemo.io`)
- **Kubernetes Applications Layer** (all regions)
  - AWS Load Balancer Controller
  - AWS EBS CSI Driver
  - Karpenter node provisioner
  - Metrics Server
  - Cert Manager
- **Coder Layer**
  - Primary (us-east-2): Coder Server with database connection
  - Spoke regions: Coder Workspace Proxies connected to the primary
This repository provides reusable Terraform modules for deploying Coder on AWS:
Network Module: `eks-vpc`
Creates an opinionated VPC designed for EKS and Coder workloads:
- Customizable public and private subnets across multiple AZs
- Internet Gateway for public access
- Cost-optimized NAT Gateway using fck-nat
- Automatic routing configuration
- Subnet tagging for EKS and Karpenter integration
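A minimal sketch of consuming the module; the source path and variable names here are illustrative assumptions, not the module's actual interface:

```hcl
# Hypothetical usage -- inputs are illustrative, not the module's real variables.
module "vpc" {
  source = "../../modules/aws/eks-vpc"

  name = "coderdemo"
  cidr = "10.0.0.0/16"
  azs  = ["us-east-2a", "us-east-2b", "us-east-2c"]

  # The module tags these subnets for EKS load balancers and Karpenter discovery.
  public_subnets  = ["10.0.0.0/20", "10.0.16.0/20", "10.0.32.0/20"]
  private_subnets = ["10.0.128.0/20", "10.0.144.0/20", "10.0.160.0/20"]

  # Cost-optimized NAT via fck-nat instead of an AWS NAT Gateway.
  use_fck_nat = true
}
```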
Compute Module: `eks-cluster`
Creates a production-ready EKS cluster similar to EKS Auto Mode:
- Leverages the AWS Managed Terraform EKS module
- Pre-configured IAM roles and policies for:
  - Karpenter - Node autoscaling
  - AWS EBS CSI Driver - Persistent volumes
  - AWS Load Balancer Controller - Ingress management
  - Coder External Provisioner - Workspace provisioning
  - Amazon Bedrock - AI capabilities
- IRSA (IAM Roles for Service Accounts) configuration
- Node group with custom launch templates
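As a rough sketch of usage (the module path and inputs are assumptions; check the module's `variables.tf` for the real interface):

```hcl
# Hypothetical usage -- inputs are illustrative only.
module "eks" {
  source = "../../modules/aws/eks-cluster"

  cluster_name    = "coderdemo"
  cluster_version = "1.31"

  # Wire in the VPC created by the eks-vpc module.
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnet_ids
}
```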
Kubernetes Bootstrap Modules: `modules/k8s/bootstrap/`
Helm-based Kubernetes application deployments:
- `lb-controller` - AWS Load Balancer Controller
- `ebs-controller` - AWS EBS CSI Driver
- `metrics-server` - Kubernetes Metrics Server
- `karpenter` - Karpenter autoscaler with NodePools
- `cert-manager` - Certificate management
- `coder-server` - Primary Coder deployment
- `coder-proxy` - Workspace proxy deployments
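Each bootstrap module is essentially a thin wrapper around Terraform's Helm provider. A minimal sketch of the pattern (the module internals here are assumed, not the exact repo code):

```hcl
# Simplified pattern used by the bootstrap modules (illustrative only).
resource "helm_release" "metrics_server" {
  name       = "metrics-server"
  repository = "https://kubernetes-sigs.github.io/metrics-server/"
  chart      = "metrics-server"
  namespace  = "kube-system"
}
```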
To deploy this environment you'll need:

- AWS CLI configured with appropriate credentials
- Terraform >= 1.9.0
- kubectl
- Helm 3.x
- GitHub OAuth App credentials (for authentication)
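A quick sanity check that the tooling above is in place:

```bash
# Verify required tools and AWS credentials before deploying.
terraform version             # expect >= 1.9.0
kubectl version --client
helm version                  # expect 3.x
aws sts get-caller-identity   # confirms the active AWS credentials
```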
> [!IMPORTANT]
> Only deploy this once for your entire multi-region setup.
```bash
cd infra/aws/us-east-2
# 1. Create Terraform backend
cd terraform-backend
terraform init
terraform apply
cd ..
# 2. Create VPC
cd vpc
terraform init -backend-config=backend.hcl
terraform apply
cd ..
# 3. Deploy EKS cluster
cd eks
terraform init -backend-config=backend.hcl
terraform apply
cd ..
# 4. Deploy Aurora Serverless v2 database
cd rds
terraform init -backend-config=backend.hcl
terraform apply
cd ..
# 5. Set up Route53 and ACM for primary domain
cd route53
terraform init -backend-config=backend.hcl
terraform apply
cd ..
cd acm
terraform init -backend-config=backend.hcl
terraform apply
cd ..
```

Next, deploy the Kubernetes applications layer:

```bash
cd infra/aws/us-east-2/k8s
# Update kubeconfig
aws eks update-kubeconfig --region us-east-2 --name coderdemo
# Deploy in order (each depends on previous)
cd lb-controller && terraform init -backend-config=backend.hcl && terraform apply && cd ..
cd ebs-controller && terraform init -backend-config=backend.hcl && terraform apply && cd ..
cd metrics-server && terraform init -backend-config=backend.hcl && terraform apply && cd ..
cd karpenter && terraform init -backend-config=backend.hcl && terraform apply && cd ..
cd cert-manager && terraform init -backend-config=backend.hcl && terraform apply && cd ..
# Deploy Coder Server
cd coder-server && terraform init -backend-config=backend.hcl && terraform apply && cd ..
# Deploy Coder Workspace Provisioner
cd coder-ws && terraform init -backend-config=backend.hcl && terraform apply && cd ..
```
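Once the Coder server is up, a quick health check (this assumes the chart's default `coder` namespace and Coder's standard health endpoint; adjust if your values differ):

```bash
# Pods should be Running; the health endpoint should return HTTP 200.
kubectl get pods -n coder
curl -fsS https://coderdemo.io/healthz
```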
For each additional region, use `eu-west-2` as a template:

```bash
# Example: Deploy to eu-west-2
cd infra/aws/eu-west-2
# 1. Deploy EKS cluster
cd eks
terraform init -backend-config=backend.hcl
terraform apply
cd ..
# 2. Deploy Kubernetes applications (same order as us-east-2)
cd k8s
aws eks update-kubeconfig --region eu-west-2 --name coderdemo-euw2
cd lb-controller && terraform init -backend-config=backend.hcl && terraform apply && cd ..
cd ebs-controller && terraform init -backend-config=backend.hcl && terraform apply && cd ..
cd metrics-server && terraform init -backend-config=backend.hcl && terraform apply && cd ..
cd karpenter && terraform init -backend-config=backend.hcl && terraform apply && cd ..
cd cert-manager && terraform init -backend-config=backend.hcl && terraform apply && cd ..
# 3. Deploy Coder Workspace Proxy
cd coder-proxy && terraform init -backend-config=backend.hcl && terraform apply && cd ..
# 4. Deploy Coder Workspace Provisioner
cd coder-ws && terraform init -backend-config=backend.hcl && terraform apply && cd ..
```

Each region requires:
- Route53 DNS records pointing to the regional load balancer
- ACM certificate for the regional subdomain
- TLS certificate configuration in Coder proxy/server
See the region-specific configurations in:
- `infra/aws/us-east-2/route53/`
- `infra/aws/us-west-2/route53/`
- `infra/aws/us-west-2/acm/`
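For reference, the core of a regional DNS/TLS setup looks roughly like this (resource arguments are a sketch; see the directories above for the real configuration):

```hcl
# Illustrative only -- variable names are placeholders.
resource "aws_acm_certificate" "proxy" {
  domain_name               = "eu-west-2.coderdemo.io"
  subject_alternative_names = ["*.eu-west-2.coderdemo.io"] # wildcard for workspace apps
  validation_method         = "DNS"
}

resource "aws_route53_record" "proxy" {
  zone_id = var.zone_id
  name    = "eu-west-2.coderdemo.io"
  type    = "A"

  alias {
    name                   = var.nlb_dns_name # regional NLB fronting the proxy
    zone_id                = var.nlb_zone_id
    evaluate_target_health = true
  }
}
```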
Each deployment requires a `terraform.tfvars` file (gitignored for security). Key variables include:

```hcl
cluster_name    = "coderdemo"
cluster_region  = "us-east-2"
cluster_profile = "your-aws-profile"

coder_access_url          = "https://coderdemo.io"
coder_wildcard_access_url = "*.coderdemo.io"
addon_version             = "2.27.1" # Coder version

coder_db_secret_url = "postgres://user:pass@host:5432/coder?sslmode=require"

# GitHub OAuth
coder_oauth_secret_client_id     = "your-github-oauth-client-id"
coder_oauth_secret_client_secret = "your-github-oauth-secret"

# GitHub External Auth (for workspace git operations)
coder_github_external_auth_secret_client_id     = "your-github-app-id"
coder_github_external_auth_secret_client_secret = "your-github-app-secret"

# Using AWS ACM (recommended)
kubernetes_create_ssl_secret = false
kubernetes_ssl_secret_name   = "coder-tls"
acme_registration_email      = "admin@coderdemo.io"
```

Each region uses S3 for Terraform state. Create a `backend.hcl` file:
```hcl
bucket         = "your-terraform-state-bucket"
key            = "path/to/state/terraform.tfstate"
region         = "us-east-2"
dynamodb_table = "your-terraform-locks-table"
encrypt        = true
profile        = "your-aws-profile"
```

This deployment uses a centralized database approach:
- Aurora Serverless v2 PostgreSQL in us-east-2
- All regions connect to the same database over VPC peering
- Benefits: Simplified data consistency, no replication complexity
- Trade-offs: All regions depend on us-east-2 availability
For production high-availability requirements, consider:
- Aurora Global Database for multi-region read replicas
- Active-active deployments with database replication
- Regional database failover strategies
See Multi-Region Deployment Guide for more details.
Workspace proxies provide:
- Low-latency connections to workspaces in remote regions
- Reduced bandwidth costs by keeping traffic regional
- Improved user experience for global teams
Each proxy:
- Registers with the primary Coder server (us-east-2)
- Receives a session token for authentication
- Proxies workspace connections without database access
- Can run workspace provisioners locally
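Registration happens against the primary deployment. A sketch with the Coder CLI (verify flag names against your Coder version; the env var hand-off is how the Helm-based proxy typically consumes the token):

```bash
# Run against the primary server to register a new proxy and mint its session token.
coder wsproxy create --name eu-west-2 --display-name "EU West (London)"
# Feed the printed token to the proxy deployment, e.g. via the coder-proxy
# module's tfvars, where it ends up as CODER_PROXY_SESSION_TOKEN.
```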
Key networking decisions:

- VPC Peering: Spoke regions peer with hub region for database access
- NAT Strategy: Cost-optimized fck-nat for outbound internet access
- Load Balancers: NLB for Coder, ALB for other services
- DNS: Regional subdomains route to closest workspace proxy
> [!NOTE]
> Observability stack configuration is in progress.
Planned integrations:
- Prometheus for metrics collection
- Grafana for visualization
- CloudWatch for AWS resource monitoring
- Coder built-in metrics and health endpoints
Secrets are currently handled as follows:

- Database credentials: Stored in `terraform.tfvars` (gitignored)
- OAuth credentials: Stored in terraform.tfvars (gitignored)
- TLS certificates: Managed by AWS ACM
- Kubernetes secrets: Created by Terraform, stored in etcd
For production, consider:
- AWS Secrets Manager for credential rotation
- External Secrets Operator for Kubernetes
- HashiCorp Vault for centralized secret management
Network security measures:

- Private subnets for all compute resources
- Security groups restricting traffic between tiers
- VPC peering for controlled cross-region access
- TLS encryption for all external endpoints
IAM practices:

- IRSA (IAM Roles for Service Accounts) for pod-level permissions
- Least privilege principle for all IAM policies
- No long-lived credentials in pods
- Regular IAM policy audits
Key strategies used in this deployment:
- Karpenter Autoscaling: Scales nodes to zero when workspaces are idle
- Aurora Serverless v2: Scales database capacity based on load
- fck-nat: Open-source NAT solution (90% cheaper than AWS NAT Gateway)
- Spot Instances: Karpenter uses spot for workspace nodes where appropriate
- Regional Resources: Only deploy proxies in regions with active users
Estimated monthly costs:
- Hub region (us-east-2): $200-400/month base + per-workspace costs
- Spoke regions: $100-200/month base + per-workspace costs
See Infrastructure Best Practices for detailed cost analysis.
EKS cluster creation fails
- Verify IAM permissions for EKS and VPC operations
- Check VPC CIDR doesn't conflict with existing networks
- Ensure sufficient EIPs available in the region
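Two quick checks for the CIDR and EIP items above:

```bash
# Count allocated Elastic IPs in the region (the default quota is 5).
aws ec2 describe-addresses --region us-east-2 --query 'length(Addresses)'
# List existing VPC CIDRs to spot overlaps before applying.
aws ec2 describe-vpcs --region us-east-2 --query 'Vpcs[].CidrBlock'
```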
Karpenter not scaling nodes
- Verify Karpenter controller has IRSA permissions
- Check NodePool configurations in `k8s/karpenter/`
- Review Karpenter logs: `kubectl logs -n karpenter -l app.kubernetes.io/name=karpenter`
Coder proxy not connecting
- Verify proxy token is correctly configured
- Check network connectivity from proxy to primary server
- Review NLB health checks and target group status
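If the proxy pods are up but unregistered, their logs usually show the failing request to the primary (the label selector here assumes the Coder Helm chart's defaults):

```bash
# Tail recent proxy logs in the proxy namespace.
kubectl logs -n coder-proxy -l app.kubernetes.io/name=coder --tail=50
```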
Database connection failures
- Verify security group allows traffic from EKS nodes
- Check VPC peering routes are configured
- Confirm the database URL includes `?sslmode=require`
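To test connectivity from inside the cluster over the peering link (substitute your real connection string):

```bash
# Throwaway pod that attempts a psql connection, then cleans itself up.
kubectl run pg-check --rm -it --restart=Never --image=postgres:16 -- \
  psql "postgres://user:pass@host:5432/coder?sslmode=require" -c 'SELECT 1;'
```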
Useful diagnostic commands:

```bash
# Check EKS cluster status
aws eks describe-cluster --name coderdemo --region us-east-2
# Get kubeconfig
aws eks update-kubeconfig --name coderdemo --region us-east-2
# View Karpenter logs
kubectl logs -n karpenter -l app.kubernetes.io/name=karpenter -f
# Check Coder server logs
kubectl logs -n coder -l app.kubernetes.io/name=coder -f
# List all Karpenter nodes
kubectl get nodes -l karpenter.sh/initialized=true
# Check workspace proxy status
kubectl get pods -n coder-proxy
```

This repository represents a production demo environment. For general Coder questions or contributions, visit the main Coder project at https://github.com/coder/coder.
This infrastructure code is provided as-is for reference purposes; refer to the licenses of the individual components it deploys.
- Coder Documentation
- Coder Template Examples
- EKS Best Practices Guide
- Karpenter Documentation
- Multi-Region Deployment Guide
- Infrastructure Best Practices
Built with ❤️ by the Coder team