Deploying GitHub self-hosted runners to apply migrations to AWS RDS for MySQL.
Architecture overview:
Create the .auto.tfvars from the template:
```sh
cp samples/sample.tfvars .auto.tfvars
```

Set the EC2 user data file according to your requirements:
```hcl
# Available files: ubuntu-nodejs.sh, ubuntu-docker.sh
gh_runner_user_data = "ubuntu-nodejs.sh"
```

If you wish to create the application cluster as well, change the variable to true:

```hcl
create_application_cluster = true
```

Create the infrastructure:
```sh
terraform init
terraform apply -auto-approve
```

> **Tip**
>
> Device names from EC2 can be different from the actual device; check the documentation.
> In this project, the EBS volume device name will be `/dev/sdf`, and the block device should be `/dev/nvme1n1`. More about this in the naming documentation.
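If you need to confirm which NVMe block device backs the `/dev/sdf` volume, one quick check on Nitro-based instances is to read the EBS volume ID from the disk serial (shown in the `SERIAL` column):

```sh
# On Nitro instances, the SERIAL column shows the EBS volume ID (vol...) for each NVMe disk
lsblk -o NAME,SIZE,SERIAL
```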
Log in as root to list the drives:

```sh
fdisk -l
```

List the available disks with lsblk:

```sh
lsblk
```

To determine whether the volume is formatted, use the `-f` option. If the FSTYPE column for the volume (e.g., `/dev/nvme1n1`) is empty, the volume is not formatted. If it shows a file system type (e.g., `ext4`, `xfs`), the volume is already formatted.

```sh
lsblk -f
```

To check it directly, use the command below. If the output says `data`, the volume is not formatted. If the output shows a file system type (e.g., `ext4` or `xfs`), the volume is formatted.

```sh
file -s /dev/nvme1n1
```

Check the mount:

```sh
df -h
mount -a
```

Follow the documentation to format and mount the partition.
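As a minimal sketch, formatting and mounting the volume could look like the following, assuming `/dev/nvme1n1` is the empty EBS volume; the `/data` mount point and the XFS file system are arbitrary choices here, so follow the project's documentation if it prescribes something else:

```sh
# xfsprogs may need to be installed first: sudo apt-get install -y xfsprogs
# Format the empty volume with XFS (this destroys any existing data on it)
sudo mkfs -t xfs /dev/nvme1n1

# Create a mount point and mount the volume
sudo mkdir -p /data
sudo mount /dev/nvme1n1 /data

# Persist the mount across reboots using the volume's UUID
echo "UUID=$(sudo blkid -s UUID -o value /dev/nvme1n1)  /data  xfs  defaults,nofail  0  2" | sudo tee -a /etc/fstab
sudo mount -a
```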
Connect to the GitHub Runner host.
```sh
aws ssm start-session --target i-00000000000000000
```

If creating a new environment, verify that the userdata executed correctly and reboot to apply kernel upgrades:
```sh
# Should reboot automatically
cloud-init status
```
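To dig deeper into what the userdata actually did, the cloud-init output log (standard Ubuntu location) is usually the quickest check:

```sh
# Review what the userdata script logged during boot
sudo tail -n 100 /var/log/cloud-init-output.log
```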
Switch to root:

```sh
sudo su -
```

Enter the /opt directory; this is where we'll install the runner agent:
```sh
cd /opt
```

Enable the runner scripts to run as root:
```sh
export RUNNER_ALLOW_RUNASROOT="1"
```

Access the repository Actions section and create a new runner.
Make sure you select the appropriate architecture, which should be Linux and ARM64.
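The Actions UI shows the exact download and configuration commands for the runner; they look roughly like the sketch below, where the version, repository URL, and registration token are placeholders you must take from that page:

```sh
# Download and extract the runner agent for Linux ARM64 (<VERSION> is a placeholder)
curl -o actions-runner-linux-arm64.tar.gz -L \
  "https://github.com/actions/runner/releases/download/v<VERSION>/actions-runner-linux-arm64-<VERSION>.tar.gz"
tar xzf actions-runner-linux-arm64.tar.gz

# Register the runner against your repository (URL and token come from the Actions UI)
./config.sh --url https://github.com/<OWNER>/<REPO> --token <REGISTRATION_TOKEN>

# Run it interactively once to confirm it registers and connects
./run.sh
```

`./run.sh` runs in the foreground; stop it with Ctrl+C before installing the service in the next step.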
Once done, stop the agent and install the runner agent as a service:
```sh
./svc.sh install
./svc.sh start
./svc.sh status
```

This repository contains examples of pipelines in the .github/workflows directory.
Check out the guidelines for deploying Prisma migrations, or adapt them for your preferred migration tool.
Start by running a MySQL instance:
```sh
docker run -d \
  -e MYSQL_DATABASE=mysqldb \
  -e MYSQL_ROOT_PASSWORD=cxvxc2389vcxzv234r \
  -p 3306:3306 \
  --name mysql-prisma-local \
  mysql:8.0
```

Prisma requires special database privileges to create shadow databases.
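The root user in the container above already has every privilege, but if you point Prisma at a non-root user, it needs roughly the following grants for the shadow database (the user name and password below are hypothetical):

```sh
# Create an application user and grant the privileges Prisma Migrate needs
# to create and drop the shadow database ("prisma_user" is a hypothetical name)
docker exec mysql-prisma-local \
  mysql -uroot -pcxvxc2389vcxzv234r -e "
    CREATE USER IF NOT EXISTS 'prisma_user'@'%' IDENTIFIED BY 'changeme';
    GRANT ALL PRIVILEGES ON mysqldb.* TO 'prisma_user'@'%';
    GRANT CREATE, ALTER, DROP, REFERENCES ON *.* TO 'prisma_user'@'%';
  "
```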
Enter the application directory:
```sh
cd app
```

Apply the migrations:
Whenever you update your Prisma schema, you will have to update your database schema using either `prisma migrate dev` or `prisma db push`. This will keep your database schema in sync with your Prisma schema. The commands will also regenerate Prisma Client.
```sh
# This calls generate under the hood
npx prisma migrate dev --name init
```

Run the application locally:
```sh
npm run dev
```

Check if the schema and database connections are working:
```sh
curl localhost:3000/prisma
```

To verify that the Docker image builds and runs correctly:

```sh
docker compose up
```

Add the DATABASE_URL environment variable:
```sh
export DATABASE_URL='mysql://root:cxvxc2389vcxzv234r@localhost:3306/mysqldb'
```

Deploy the migration:

```sh
npx prisma migrate deploy
```
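On the self-hosted runner, the pipeline ultimately runs the same two commands, with DATABASE_URL pointing at the RDS endpoint instead of the local container; the endpoint and credentials below are placeholders and would normally come from a GitHub Actions secret:

```sh
# Hypothetical pipeline step on the self-hosted runner
export DATABASE_URL="mysql://<db_user>:<db_password>@<rds-endpoint>.rds.amazonaws.com:3306/mysqldb"
npx prisma migrate deploy
```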