

@Hexeong Hexeong commented Dec 27, 2025

Related Issue

Work Done

  1. Implemented a script that runs the side infrastructure split out of the server's docker-compose (redis, redis-exporter, alloy) via remote-SSH and docker-compose
  2. Moved the nginx setup out of userdata so it now runs via remote-SSH
  3. The Korean comments in the existing setup script kept getting corrupted during compression/decompression, so I replaced them with English comments!

Notes

Review Requests (Optional)

Summary by CodeRabbit

  • New Features

    • Added Grafana Alloy integration for structured logging and Loki monitoring of application logs
    • Introduced Redis and Redis Exporter services to the application infrastructure
    • Automated SSL/TLS certificate management with Certbot for Nginx
  • Infrastructure Updates

    • Enhanced EC2 provisioning with new setup scripts for Docker, Nginx, and side infrastructure services
    • Added configuration variables for SSH keys, working directories, container versions, and monitoring environment settings


@Hexeong Hexeong force-pushed the feat/6-integrate-side-infra branch from 2065e3a to b003435 on December 27, 2025 at 12:07

coderabbitai bot commented Dec 28, 2025

📝 Walkthrough

This change introduces Terraform-managed side infrastructure components (Redis, Redis Exporter, and Alloy) alongside updated module variables and provisioning scripts. EC2 instances now execute dynamic setup procedures via cloud-init and remote-exec triggers, with monitoring server integration for log aggregation.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Environment Configuration**<br>`environment/prod/main.tf`, `environment/prod/variables.tf`, `environment/stage/main.tf`, `environment/stage/variables.tf` | Added six new Terraform input variables (`ssh_key_path`, `work_dir`, `alloy_env_name`, `redis_version`, `redis_exporter_version`, `alloy_version`) to both the prod and stage environments; the variables are passed through to the module invocations. |
| **Module Variable Declarations**<br>`modules/app_stack/variables.tf` | Added the same six input variables to the `app_stack` module, with descriptions for the SSH key path, working directory, Alloy environment name, and Docker image versions for Redis, Redis Exporter, and Alloy. |
| **EC2 & Orchestration**<br>`modules/app_stack/ec2.tf` | Added a data source for monitoring-server discovery, refactored cloud-init to separate the Docker setup, and introduced two `null_resource` triggers (`update_nginx`, `update_side_infra`) for dynamic script execution with template rendering and monitoring-server IP injection. |
| **Provisioning Scripts**<br>`modules/app_stack/scripts/docker_setup.sh`, `modules/app_stack/scripts/nginx_setup.sh.tftpl`, `modules/app_stack/scripts/side_infra_setup.sh.tftpl` | New/updated shell scripts: Docker installation and configuration; Nginx setup with Let's Encrypt SSL integration via template; a new side-infrastructure provisioning script that configures Redis, Redis Exporter, and Alloy with docker-compose. |
| **Logging & Monitoring Configuration**<br>`config/side-infra/config.alloy.tftpl` | New Alloy configuration template for live debugging and Loki integration; enables Spring backend log collection with static labels and batch processing. |
| **Submodule Update**<br>`config/secrets` | Submodule reference updated to a new commit; metadata pointer change only. |

Sequence Diagram(s)

sequenceDiagram
    participant EC2 as EC2 Instance
    participant CloudInit as Cloud-init
    participant Trigger as Null Resource<br/>(Triggers)
    participant Scripts as Setup Scripts
    participant Services as Services<br/>(Docker)
    participant Monitor as Monitoring Server
    
    EC2->>CloudInit: Initialize instance
    CloudInit->>Scripts: Execute docker_setup.sh (Part 1)
    Scripts->>Services: Install Docker Engine
    Services-->>EC2: Docker ready
    
    Trigger->>Trigger: Detect nginx template change
    Trigger->>Scripts: Execute nginx_setup.sh.tftpl
    Scripts->>Services: Install Nginx + Certbot
    Scripts->>Services: Configure SSL/TLS
    Services-->>EC2: Nginx ready
    
    Trigger->>Trigger: Detect side_infra template change
    EC2->>Monitor: Query monitoring server IP
    Monitor-->>EC2: Return monitoring IP
    Trigger->>Scripts: Execute side_infra_setup.sh.tftpl
    Scripts->>Scripts: Render Alloy config (inject<br/>monitoring server IP)
    Scripts->>Services: Start redis via docker-compose
    Scripts->>Services: Start redis-exporter
    Scripts->>Services: Start alloy (logging agent)
    Services->>Monitor: Alloy pushes logs to Loki
    Monitor-->>Services: ✓ Logs ingested
    Services-->>EC2: Side infrastructure ready

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 Redis hops in, Alloy takes flight,
Nginx stands tall with TLS so bright,
Monitoring logs flow in streams so clean,
Side infra magic—the best ever seen! ✨

Pre-merge checks and finishing touches

✅ Passed checks (5 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The PR title clearly summarizes the main change: integrating side infrastructure (Redis, Redis-exporter, Alloy) from the server repository into the infra repository using Terraform. |
| Description check | ✅ Passed | The PR description includes the required linked issue (#6) and work content describing the three main tasks, but leaves the optional review requirements section empty. |
| Linked Issues check | ✅ Passed | The pull request successfully implements all objectives from issue #6: manages side infrastructure via Terraform, separates components from the server repo, and provides automated deployment scripts. |
| Out of Scope Changes check | ✅ Passed | All changes are directly related to integrating side infrastructure: new scripts, Alloy configuration, variables for deployment, and infrastructure modifications. No out-of-scope changes detected. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (12)
modules/app_stack/variables.tf (1)

118-148: Add validation rules for the new variables.

The new variables lack validation rules, which could lead to runtime errors when invalid values are provided. Consider adding validation for:

  • ssh_key_path: Validate the path exists and is readable
  • work_dir: Validate it's a valid directory path format
  • Version variables (redis_version, redis_exporter_version, alloy_version): Consider validating the format matches expected Docker tag patterns
🔎 Example validation rules
 variable "ssh_key_path" {
   description = "Path to the SSH private key file for remote-exec"
   type        = string
+
+  validation {
+    condition     = can(file(var.ssh_key_path))
+    error_message = "The ssh_key_path must be a valid file path."
+  }
 }

 variable "redis_version" {
   description = "Docker image tag for Redis"
   type        = string
+
+  validation {
+    condition     = can(regex("^[0-9]+\\.[0-9]+\\.[0-9]+$", var.redis_version))
+    error_message = "Redis version must follow semantic versioning (e.g., 7.2.0)."
+  }
 }
config/side-infra/config.alloy.tftpl (1)

11-11: Consider making the log path configurable.

The log path is hardcoded to /var/log/spring/*.log. If different environments or deployments need different log locations, consider making this a template variable.

🔎 Example parameterization
 local.file_match "spring_logs" {
-  path_targets = [{ __path__ = "/var/log/spring/*.log" }]  // service log file path
+  path_targets = [{ __path__ = "${log_path}" }]
 }

Then pass log_path as a template variable when rendering this file.

modules/app_stack/scripts/docker_setup.sh (1)

1-27: Add idempotency checks and error handling.

The script lacks:

  1. Checks for existing Docker installation (will fail if Docker is already installed)
  2. Explicit error handling (set -e or set -euo pipefail)
  3. Retry logic for apt operations

These improvements would make the script more robust for re-runs and handle transient failures.

🔎 Suggested improvements
 #!/bin/bash
+set -euo pipefail
+
+# Check if Docker is already installed
+if command -v docker &> /dev/null; then
+  echo "Docker is already installed. Skipping installation."
+  exit 0
+fi

 # 1. Install required packages
 apt-get update
 apt-get install -y ca-certificates curl gnupg lsb-release
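The retry point (item 3 above) could be handled with a small helper along these lines — a sketch only, not part of this PR; the attempt count and delay are arbitrary:

```shell
#!/bin/bash
# Sketch of a retry helper for transient apt failures.
# Attempt count and 2s delay are illustrative assumptions.
retry() {
  local attempts=$1; shift
  local n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      echo "Command failed after $attempts attempts: $*" >&2
      return 1
    fi
    n=$((n + 1))
    sleep 2
  done
}

# Usage in the setup script, e.g.:
# retry 3 apt-get update
# retry 3 apt-get install -y ca-certificates curl gnupg lsb-release
```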
modules/app_stack/scripts/nginx_setup.sh.tftpl (2)

6-8: Add validation for template variables.

The script doesn't validate that the template variables (domain_name, email, conf_file_name) are non-empty or properly formatted. Invalid values could cause the script to fail in unexpected ways.

🔎 Add input validation
 # --- variables setting ---
 DOMAIN="${domain_name}"
 EMAIL="${email}"
 CONF_NAME="${conf_file_name}"
+
+# Validate required variables
+if [[ -z "$DOMAIN" ]] || [[ -z "$EMAIL" ]] || [[ -z "$CONF_NAME" ]]; then
+  echo "Error: Required variables are not set"
+  exit 1
+fi

22-31: Make certificate issuance idempotent.

The script will fail if the SSL certificate already exists for the domain. Consider checking if the certificate exists before attempting to issue a new one.

🔎 Add certificate existence check
 # 3. Issue SSL certificate (Non-interactive mode)
+
+# Check if certificate already exists
+if [ -d "/etc/letsencrypt/live/$DOMAIN" ]; then
+  echo "Certificate already exists for $DOMAIN. Skipping issuance."
+else
+  systemctl stop nginx
+
+  certbot certonly --standalone \
+    --non-interactive \
+    --agree-tos \
+    --email "$EMAIL" \
+    -d "$DOMAIN"
+
+  echo "Certificate obtained successfully."
+fi
-systemctl stop nginx
-
-certbot certonly --standalone \
-  --non-interactive \
-  --agree-tos \
-  --email "$EMAIL" \
-  -d "$DOMAIN"
-
-echo "Certificate obtained successfully."
modules/app_stack/ec2.tf (1)

87-87: Verify cloud-init completion timeout.

The cloud-init status --wait command will wait indefinitely for cloud-init to complete. If cloud-init hangs or fails, the provisioner will block forever. Consider adding a timeout.

🔎 Add timeout to cloud-init wait
-      "cloud-init status --wait > /dev/null", # wait for Docker installation
+      "timeout 300 cloud-init status --wait > /dev/null || (echo 'cloud-init timeout or failure' && exit 1)", # wait for Docker installation (5-minute timeout)
modules/app_stack/scripts/side_infra_setup.sh.tftpl (4)

1-13: Consider adding set -u for safer error handling.

While set -e is present, adding set -u would catch undefined variable references and prevent silent failures when template variables aren't properly substituted.

🔎 Proposed enhancement
 #!/bin/bash
 
-set -e
+set -eu

30-30: Remove deprecated version field from docker-compose.yml.

The version field in Docker Compose files is deprecated and no longer necessary in recent versions of Docker Compose.

🔎 Proposed fix
 cat <<EOF > "$WORK_DIR/docker-compose.side-infra.yml"
-version: '3.8'
-
 services:

36-36: Verify the necessity of host networking mode.

All three services use network_mode: "host", which bypasses Docker's network isolation and directly exposes services on the host network. While this simplifies networking, it reduces security isolation and can lead to port conflicts.

If host networking is required for performance or specific architectural reasons, ensure this is documented. Otherwise, consider using Docker networks with explicit port mappings for better isolation.

Also applies to: 47-47, 58-58
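If the team opts for isolation, the heredoc in side_infra_setup.sh.tftpl could emit a bridge-network variant along these lines — a sketch of the alternative, not what this PR implements; the pinned image tags are illustrative assumptions:

```shell
#!/bin/bash
# Sketch: bridge networking with explicit port mappings instead of
# network_mode: "host". Image tags here are assumptions for illustration.
WORK_DIR="${WORK_DIR:-/tmp/side-infra-demo}"
mkdir -p "$WORK_DIR"

cat <<'EOF' > "$WORK_DIR/docker-compose.side-infra.yml"
services:
  redis:
    image: redis:7.2.0
    container_name: redis
    ports:
      - "127.0.0.1:6379:6379"   # reachable only from the host itself
    restart: always
  redis-exporter:
    image: oliver006/redis_exporter:v1.55.0
    container_name: redis-exporter
    environment:
      REDIS_ADDR: "redis:6379"  # service-name DNS on the Compose default network
    depends_on:
      - redis
    ports:
      - "9121:9121"
    restart: always
EOF
```

With Compose's default per-project network, the exporter reaches Redis by service name, so Redis no longer needs to listen on the host loopback at all.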


72-75: Health check verification is optional; Docker Compose V2 is already ensured.

The script uses Docker Compose V2 syntax (docker compose with a space), which is correctly installed by docker_setup.sh as a prerequisite before this script runs (via docker-compose-plugin package). Verifying Docker Compose V2 availability is not necessary.

Adding health checks after docker compose up -d would be a nice-to-have enhancement for better visibility, but the current implementation with error handling on cleanup operations (|| true) is sufficient for the script's purpose.
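If that nice-to-have is ever picked up, a post-up readiness poll could look like this — a sketch; the probe command and timeout are assumptions, not part of this PR:

```shell
#!/bin/bash
# Sketch: poll a probe command until it succeeds or a deadline passes.
wait_for() {
  local timeout=$1; shift
  local waited=0
  until "$@" > /dev/null 2>&1; do
    if [ "$waited" -ge "$timeout" ]; then
      echo "Timed out after ${timeout}s waiting for: $*" >&2
      return 1
    fi
    sleep 1
    waited=$((waited + 1))
  done
}

# After `docker compose up -d`, e.g.:
# wait_for 30 docker exec redis redis-cli ping
```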

environment/stage/variables.tf (1)

107-135: Add validation and improve variable documentation.

The new variables lack validation rules and some have vague descriptions:

  1. alloy_env_name (lines 117-120): The description "Alloy Env Name" is not descriptive enough. Clarify what values are expected (e.g., "production", "dev", "staging").

  2. Version variables (lines 122-135): Consider adding validation to ensure Docker image tags follow expected formats.

  3. ssh_key_path (lines 107-110): Could benefit from validation to ensure it's a valid file path.

🔎 Proposed enhancements
 variable "alloy_env_name" {
-  description = "Alloy Env Name"
+  description = "Environment name for Alloy (e.g., 'production', 'dev', 'staging')"
   type        = string
+  validation {
+    condition     = contains(["production", "dev", "staging"], var.alloy_env_name)
+    error_message = "alloy_env_name must be one of: production, dev, staging"
+  }
 }
 
 variable "redis_version" {
   description = "Docker image tag for Redis"
   type        = string
+  validation {
+    condition     = can(regex("^[a-zA-Z0-9._-]+$", var.redis_version))
+    error_message = "redis_version must be a valid Docker image tag"
+  }
 }
 
 variable "redis_exporter_version" {
   description = "Docker image tag for Redis Exporter"
   type        = string
+  validation {
+    condition     = can(regex("^[a-zA-Z0-9._-]+$", var.redis_exporter_version))
+    error_message = "redis_exporter_version must be a valid Docker image tag"
+  }
 }
 
 variable "alloy_version" {
   description = "Docker image tag for Grafana Alloy"
   type        = string
+  validation {
+    condition     = can(regex("^[a-zA-Z0-9._-]+$", var.alloy_version))
+    error_message = "alloy_version must be a valid Docker image tag"
+  }
 }
environment/prod/variables.tf (1)

107-135: Add validation and improve variable documentation (same as stage).

The new variables have the same issues as in environment/stage/variables.tf:

  1. alloy_env_name lacks clear documentation about expected values
  2. Version variables lack validation for Docker tag formats
  3. ssh_key_path could benefit from path validation

Additionally, consider whether these identical variable definitions across stage and prod environments could be DRYed up using a shared variables module or common configuration to reduce duplication and maintenance burden.

Refer to the proposed validation enhancements in environment/stage/variables.tf. The same improvements should be applied here for consistency.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1ea76ec and e7760fc.

📒 Files selected for processing (12)
  • README.md
  • config/secrets
  • config/side-infra/config.alloy.tftpl
  • environment/prod/main.tf
  • environment/prod/variables.tf
  • environment/stage/main.tf
  • environment/stage/variables.tf
  • modules/app_stack/ec2.tf
  • modules/app_stack/scripts/docker_setup.sh
  • modules/app_stack/scripts/nginx_setup.sh.tftpl
  • modules/app_stack/scripts/side_infra_setup.sh.tftpl
  • modules/app_stack/variables.tf
🔇 Additional comments (10)
config/secrets (1)

1-1: The review comment references incorrect commit hashes that do not exist in this repository's history. The old commit hash c1cf69a9de6f6b766750395875cd5bdcb16a0e96 is not present in any branch or tag. The config/secrets submodule was first added in commit df43044 (feat: terraform으로 prod/stage 환경 IaC 구현) and subsequently updated in commit e7760fc (refactor: private IP를 사용하여 접근하도록 수정). The current commit 83f176821a253bb18f5ca36f1f24e8ce2e7c91d7 is the result of these normal commits and is fully visible in the repository history. No hidden or obscured changes are present.

Likely an incorrect or invalid review comment.

modules/app_stack/scripts/nginx_setup.sh.tftpl (1)

52-54: LGTM! Strong TLS configuration.

The TLS configuration uses modern protocols (TLSv1.2+) and strong cipher suites, which aligns with current security best practices.

modules/app_stack/ec2.tf (3)

58-94: Consider null_resource re-execution behavior.

The null_resource uses SHA256 hash of the rendered template as a trigger. This means:

  1. Any change to the script will trigger re-execution (intended)
  2. The script runs via remote-exec every time the hash changes
  3. If the script is not idempotent, this could cause issues

Since the nginx setup script has non-idempotent operations (like cert issuance), repeated executions could fail or cause rate limiting from Let's Encrypt.

Consider whether the scripts are designed to be re-run safely. Based on the earlier review of nginx_setup.sh.tftpl, the certificate issuance step needs idempotency improvements.


97-143: Good approach to side infra provisioning.

Using null_resource with remote-exec for side infrastructure deployment is appropriate here. The trigger based on script hash ensures updates are applied when configuration changes.

The nested templatefile() for Alloy config correctly injects the monitoring server's private IP, ensuring proper integration with the logging backend.


104-106: Verify monitoring server allows inbound traffic on port 3100.

The Alloy configuration sends logs to Loki on port 3100 (http://${loki_ip}:3100/loki/api/v1/push). Both the API server and monitoring server are in the same VPC, so routing is available. However, the monitoring server's security group (not managed in this module) must allow inbound port 3100 traffic from the API server's security group (api_sg). Verify this rule exists in the monitoring server's security group configuration.
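If the rule is missing, it could be added where the monitoring server's security group is managed — a sketch only; the resource and security-group names (`monitoring_sg`, `api_sg`) are assumptions for illustration, not names from this PR:

```hcl
# Sketch (not in this PR): allow Loki ingestion from the API server's SG.
resource "aws_security_group_rule" "allow_loki_push" {
  type                     = "ingress"
  from_port                = 3100
  to_port                  = 3100
  protocol                 = "tcp"
  security_group_id        = aws_security_group.monitoring_sg.id
  source_security_group_id = aws_security_group.api_sg.id
  description              = "Loki push from API servers (Alloy)"
}
```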

README.md (1)

6-42: LGTM! Clear documentation of new structure.

The updated directory structure clearly documents the new side-infra configuration directory and the scripts for deployment. The organization is logical and makes it easy to understand where files are located.

environment/prod/main.tf (1)

49-58: LGTM! Proper variable propagation.

The new variables are correctly passed through to the module. The comments clearly indicate the purpose of each section (SSH key path and Side Infra variables).

environment/stage/main.tf (1)

49-58: LGTM! Consistent with production environment.

The stage environment correctly mirrors the production environment's variable propagation, ensuring consistency across environments.

config/side-infra/config.alloy.tftpl (1)

25-25: No action required. The ALLOY_ENV environment variable is properly configured in the deployment setup. The side_infra_setup.sh.tftpl script correctly passes ALLOY_ENV=$ALLOY_ENV_NAME to the Alloy container via the docker-compose environment section (line 57), ensuring logs receive the environment label as intended.

modules/app_stack/scripts/side_infra_setup.sh.tftpl (1)

23-26: LGTM!

The heredoc approach for creating the Alloy configuration file is clean and appropriate.

Comment on lines +1 to +11
data "aws_instance" "monitoring_server" {
  filter {
    name   = "tag:Name"
    values = ["solid-connection-monitoring"]
  }

  filter {
    name   = "instance-state-name"
    values = ["running"]
  }
}

⚠️ Potential issue | 🟠 Major

Add error handling for missing monitoring server.

The data source will fail the Terraform plan if no monitoring server is found or if multiple servers match the filters. Consider adding validation or documentation about this prerequisite.

🔎 Add validation check

You can add a validation step after the module that uses this data source:

# In the calling module (environment/prod or stage)
locals {
  monitoring_server_id = module.prod_stack.monitoring_server_id
}

# Add output in modules/app_stack/outputs.tf
output "monitoring_server_id" {
  value = data.aws_instance.monitoring_server.id
  description = "ID of the monitoring server"
}

Alternatively, add a precondition check in Terraform 1.2+:

 data "aws_instance" "monitoring_server" {
+  lifecycle {
+    precondition {
+      condition     = length(data.aws_instance.monitoring_server.*.id) == 1
+      error_message = "Exactly one monitoring server must exist with tag Name=solid-connection-monitoring in running state"
+    }
+  }
+
   filter {
     name   = "tag:Name"
     values = ["solid-connection-monitoring"]
   }

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +16 to +21
# 1. Create working and log directories
mkdir -p "$WORK_DIR/config/side-infra"
mkdir -p "$WORK_DIR/logs"
# Set log directory permissions (Shared by App and Alloy)
chmod 777 "$WORK_DIR/logs"
chown -R ubuntu:ubuntu "$WORK_DIR"

⚠️ Potential issue | 🟠 Major

Reduce overly permissive directory permissions.

Setting the log directory to 777 grants read, write, and execute permissions to all users, which poses a security risk. If the Alloy container needs write access, consider using more restrictive permissions (e.g., 775) combined with proper user/group configuration in the container setup.

🔎 Proposed fix
 mkdir -p "$WORK_DIR/config/side-infra"
 mkdir -p "$WORK_DIR/logs"
 # Set log directory permissions (Shared by App and Alloy)
-chmod 777 "$WORK_DIR/logs"
+chmod 775 "$WORK_DIR/logs"
 chown -R ubuntu:ubuntu "$WORK_DIR"
🤖 Prompt for AI Agents
In modules/app_stack/scripts/side_infra_setup.sh.tftpl around lines 16 to 21,
the script sets overly-permissive permissions on the logs directory (chmod 777);
change this to a more restrictive mode such as chmod 775 and ensure group
ownership is set to the group the Alloy container runs as (e.g., chown -R
ubuntu:<alloy-group> "$WORK_DIR" or set the correct group name), so the
container has write access without granting world-writable permissions; update
any container/user docs or deployment config to use that group if needed.
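The group-based approach described above can be sketched as follows — the group name `alloy` is a placeholder assumption; the real group must match the UID/GID the Alloy container actually runs as:

```shell
#!/bin/bash
# Sketch: group-writable logs dir instead of world-writable 777.
# The `alloy` group name is an assumption for illustration.
WORK_DIR="${WORK_DIR:-/tmp/side-infra-demo}"
mkdir -p "$WORK_DIR/logs"
chmod 775 "$WORK_DIR/logs"

# On the real host (requires root):
#   groupadd -f alloy
#   chown -R ubuntu:alloy "$WORK_DIR"
#   chmod g+s "$WORK_DIR/logs"   # new files inherit the group
```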

Comment on lines +33 to +48
  redis:
    image: redis:$REDIS_VERSION
    container_name: redis
    network_mode: "host"
    restart: always
    command: redis-server --bind 127.0.0.1 --protected-mode yes

  redis-exporter:
    image: oliver006/redis_exporter:$REDIS_EXPORTER_VERSION
    container_name: redis-exporter
    environment:
      REDIS_ADDR: "127.0.0.1:6379"
    depends_on:
      - redis
    network_mode: "host"
    restart: always

⚠️ Potential issue | 🟠 Major

Add health checks and consider readiness dependencies.

The redis-exporter service uses depends_on which only ensures Redis starts before the exporter, but doesn't guarantee Redis is ready to accept connections. This could cause the exporter to fail or enter a restart loop during initialization.

🔎 Proposed enhancement with health checks
   redis:
     image: redis:$REDIS_VERSION
     container_name: redis
     network_mode: "host"
     restart: always
     command: redis-server --bind 127.0.0.1 --protected-mode yes
+    healthcheck:
+      test: ["CMD", "redis-cli", "ping"]
+      interval: 5s
+      timeout: 3s
+      retries: 3
 
   redis-exporter:
     image: oliver006/redis_exporter:$REDIS_EXPORTER_VERSION
     container_name: redis-exporter
     environment:
       REDIS_ADDR: "127.0.0.1:6379"
     depends_on:
-      - redis
+      redis:
+        condition: service_healthy
     network_mode: "host"
     restart: always
🤖 Prompt for AI Agents
In modules/app_stack/scripts/side_infra_setup.sh.tftpl around lines 33-48, the
redis-exporter can start before Redis is ready because depends_on only orders
container start, not readiness; add a proper healthcheck to the redis service
(e.g., a redis-cli PING or tcp check with sensible interval/timeout/retries) and
change redis-exporter to wait for Redis healthy (use depends_on with condition:
service_healthy if your compose version supports it) or, if compose version does
not support health conditions, add a small wrapper/startup check for
redis-exporter that polls Redis until it responds before launching the exporter.



Development

Successfully merging this pull request may close these issues.

feat: Manage side infrastructure (Redis, Redis-Exporter, Alloy) with Terraform
