diff --git a/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/_index.md b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/_index.md new file mode 100644 index 0000000000..433d462096 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/_index.md @@ -0,0 +1,56 @@ +--- +title: Deploy ClickHouse on Google Cloud C4A (Arm-based Axion VMs) + +minutes_to_complete: 30 + +who_is_this_for: This Learning Path is intended for software developers deploying and optimizing ClickHouse on Linux/Arm64 environments, specifically using Google Cloud C4A virtual machines powered by Axion processors. + +learning_objectives: + - Provision an Arm-based SUSE SLES virtual machine on Google Cloud (C4A with Axion processors) + - Install ClickHouse on a SUSE Arm64 (C4A) instance + - Verify ClickHouse functionality by starting the server, connecting via the client, and performing baseline data insertion and simple query tests on the Arm64 VM + - Measure ClickHouse query performance (read, aggregation, and concurrent workloads) to evaluate throughput and latency on Arm64 (AArch64) + +prerequisites: + - A [Google Cloud Platform (GCP)](https://cloud.google.com/free) account with billing enabled + - Basic familiarity with [ClickHouse](https://clickhouse.com/) +author: Pareena Verma + +##### Tags +skilllevels: Introductory +subjects: Databases +cloud_service_providers: Google Cloud + +armips: + - Neoverse + +tools_software_languages: + - ClickHouse + - clickhouse-benchmark + +operatingsystems: + - Linux + +# ================================================================================ +# FIXED, DO NOT MODIFY +# ================================================================================ +further_reading: + - resource: + title: Google Cloud documentation + link: https://cloud.google.com/docs + type: documentation + + - resource: + title: ClickHouse documentation + link: https://clickhouse.com/docs/ + type: documentation + + -
resource: + title: ClickHouse benchmark documentation + link: https://clickhouse.com/docs/operations/utilities/clickhouse-benchmark + type: documentation + +weight: 1 +layout: "learningpathall" +learning_path_main_page: "yes" +--- diff --git a/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/_next-steps.md new file mode 100644 index 0000000000..c3db0de5a2 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/_next-steps.md @@ -0,0 +1,8 @@ +--- +# ================================================================================ +# FIXED, DO NOT MODIFY THIS FILE +# ================================================================================ +weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation. +title: "Next Steps" # Always the same, html page title. +layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing. +--- diff --git a/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/background.md b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/background.md new file mode 100644 index 0000000000..3ca33a6fd8 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/background.md @@ -0,0 +1,23 @@ +--- +title: Getting started with ClickHouse on Google Axion C4A (Arm Neoverse-V2) + +weight: 2 + +layout: "learningpathall" +--- + +## Google Axion C4A Arm instances in Google Cloud + +Google Axion C4A is a family of Arm-based virtual machines built on Google’s custom Axion CPU, which is based on Arm Neoverse-V2 cores. Designed for high-performance and energy-efficient computing, these virtual machines offer strong performance for modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications. 
+ +The C4A series provides a cost-effective alternative to x86 virtual machines while leveraging the scalability and performance benefits of the Arm architecture in Google Cloud. + +To learn more about Google Axion, refer to the [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu) blog. + +## ClickHouse + +ClickHouse is an open-source, columnar OLAP database designed for **high-performance analytics** and real-time reporting. It supports **vectorized execution, columnar storage, and distributed deployments** for fast queries on large datasets. It offers a **scalable, fault-tolerant architecture** with support for replication and sharding. + +Ideal for analytics, monitoring, and event processing, ClickHouse runs efficiently on both x86 and Arm-based platforms, including AWS Graviton and GCP Arm VMs. + +Learn more at the [ClickHouse website](https://clickhouse.com/). diff --git a/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/baseline.md b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/baseline.md new file mode 100644 index 0000000000..3ccf16ea38 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/baseline.md @@ -0,0 +1,168 @@ +--- +title: ClickHouse Baseline Testing on Google Axion C4A Arm Virtual Machine +weight: 5 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## ClickHouse Baseline Testing on GCP SUSE VMs +This section validates that ClickHouse is functioning correctly and provides a **basic performance baseline** on a SUSE Linux Arm64 VM. + + +### Verify ClickHouse is running + +```console +sudo systemctl status clickhouse-server +``` + +This confirms that the ClickHouse server is running correctly under systemd and is ready to accept connections.
+ +```output +● clickhouse-server.service - ClickHouse Server + Loaded: loaded (/etc/systemd/system/clickhouse-server.service; enabled; vendor preset: disabled) + Active: active (running) since Thu 2025-11-27 05:07:42 UTC; 18s ago + Main PID: 4229 (ClickHouseWatch) + Tasks: 814 + CPU: 2.629s + CGroup: /system.slice/clickhouse-server.service + ├─ 4229 clickhouse-watchdog server --config=/etc/clickhouse-server/config.xml + └─ 4237 /usr/bin/clickhouse server --config=/etc/clickhouse-server/config.xml +``` + +### Connect to ClickHouse +Connecting with the client confirms that the ClickHouse CLI can communicate with the running server. + +```console +clickhouse client +``` +### Create a test database and table +Database and table creation sets up a dedicated test environment and an analytics-optimized MergeTree table for baseline evaluation. + +```sql +CREATE DATABASE baseline_test; +USE baseline_test; +``` + +You should see output similar to: +```output +CREATE DATABASE baseline_test +Query id: bc615167-ecd5-4470-adb0-918d8ce07caf +Ok. +0 rows in set. Elapsed: 0.012 sec. + + +USE baseline_test +Query id: cd49553a-c0ff-4656-a3e5-f0e9fccd9eba +Ok. +0 rows in set. Elapsed: 0.001 sec. +``` +Create a simple table optimized for analytics: + +```sql +CREATE TABLE events +( + event_time DateTime, + user_id UInt64, + event_type String +) +ENGINE = MergeTree +ORDER BY (event_time, user_id); +``` + +You should see output similar to: +```output +Query id: 62ce9b9c-9a7b-45c8-9a58-fa6302b13a88 + +Ok. + +0 rows in set. Elapsed: 0.011 sec. +``` + +### Insert baseline test data +Data insertion loads a small, controlled dataset to simulate real event data and validate write functionality. +Insert sample data (10,000 rows): + +```sql +INSERT INTO events +SELECT + now() - number, + number, + 'click' +FROM numbers(10000); +``` + +You should see output similar to: +```output +Query id: af860501-d903-4226-9e10-0e34467f7675 + +Ok. + +10000 rows in set. Elapsed: 0.003 sec.
Processed 10.00 thousand rows, 80.00 KB (3.36 million rows/s., 26.86 MB/s.) +Peak memory usage: 3.96 MiB. +``` + +### Verify the row count + +Counting the rows verifies that the inserted data is stored correctly and consistently. + +```sql +SELECT count(*) FROM events; +``` + +You should see output similar to: +```output +Query id: 644f6556-e69b-4f98-98ec-483ee6869d6e + + ┌─count()─┐ +1. │ 10000 │ + └─────────┘ + +1 row in set. Elapsed: 0.002 sec. +``` + +### Baseline read performance test +Baseline read queries measure basic query performance for filtering, aggregation, and grouping, establishing an initial performance reference on the Arm64 VM. + +Run a simple analytical query: + +```sql +SELECT count(*) FROM events WHERE event_type = 'click'; +``` + +You should see output similar to: +```output +Query id: bd609de4-c08e-4f9f-804a-ee0528c94e4d + + ┌─count()─┐ +1. │ 10000 │ + └─────────┘ + +1 row in set. Elapsed: 0.003 sec. Processed 10.00 thousand rows, 130.00 KB (2.98 million rows/s., 38.71 MB/s.) +Peak memory usage: 392.54 KiB. +``` + +Next, run a query that groups events by date and counts how many events occurred on each day, returning a daily summary of total events in chronological order. + +```sql +SELECT + toDate(event_time) AS date, + count(*) AS total_events +FROM events +GROUP BY date +ORDER BY date; +``` + +You should see output similar to: +```output +Query id: b3db69f8-c885-419f-9900-53d258f0b996 + + ┌───────date─┬─total_events─┐ +1. │ 2025-11-27 │ 10000 │ + └────────────┴──────────────┘ + +1 row in set. Elapsed: 0.002 sec. Processed 10.00 thousand rows, 40.00 KB (4.08 million rows/s., 16.33 MB/s.) +Peak memory usage: 785.05 KiB. +``` + +The baseline tests confirm that ClickHouse is stable, functional, and performing efficiently on the Arm64 VM. With core operations validated, the setup is now ready for detailed performance benchmarking.
diff --git a/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/benchmarking.md new file mode 100644 index 0000000000..5c6f1534ff --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/benchmarking.md @@ -0,0 +1,277 @@ +--- +title: ClickHouse Benchmarking +weight: 6 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + + +## ClickHouse Benchmark on GCP SUSE Arm64 VM +ClickHouse provides an official benchmarking utility called **`clickhouse-benchmark`**, which is included **by default** in the ClickHouse installation. +This tool measures **query throughput and latency**. + +### Verify the benchmarking tool exists +Confirm that `clickhouse-benchmark` is installed and available on the system before running performance tests. + +```console +which clickhouse-benchmark +``` +You should see output similar to: + +```output +/usr/bin/clickhouse-benchmark +``` + +### Prepare benchmark database and table +Create a test database and table structure where sample data will be stored for benchmarking. + +```console +clickhouse client +``` + +```sql +CREATE DATABASE IF NOT EXISTS bench; +USE bench; + +CREATE TABLE IF NOT EXISTS hits +( + event_time DateTime, + user_id UInt64, + url String +) +ENGINE = MergeTree +ORDER BY (event_time, user_id); +``` +You should see output similar to: +```output +Query id: 83485bc4-ad93-4dfc-bafe-c0e2a45c1b34 +Ok. +0 rows in set. Elapsed: 0.005 sec. +``` + +Exit the client: + +```console +exit; +``` +### Load benchmark data +Insert 1 million sample records into the table to simulate a realistic workload for testing query performance. + +```console +clickhouse-client --query " +INSERT INTO bench.hits +SELECT + now() - number, + number, + concat('/page/', toString(number % 100)) +FROM numbers(1000000)" +``` + +This inserts 1 million rows.
+ +**Verify:** + +Check that the data load was successful by counting the total number of rows in the table. + +```console +clickhouse-client --query "SELECT count(*) FROM bench.hits" +``` + +You should see output similar to: +```output +1000000 +``` + +### Read query benchmark +This benchmark measures how fast ClickHouse can scan and count rows using a simple filter, showing baseline read performance and latency. + +```console +clickhouse-benchmark \ + --host localhost \ + --port 9000 \ + --iterations 10 \ + --concurrency 1 \ + --query "SELECT count(*) FROM bench.hits WHERE url LIKE '/page/%'" +``` + +You should see output similar to: +```output +Loaded 1 queries. + +Queries executed: 10 (100%). + +localhost:9000, queries: 10, QPS: 63.167, RPS: 63167346.434, MiB/s: 957.833, result RPS: 63.167, result MiB/s: 0.000. + +0% 0.003 sec. +10% 0.003 sec. +20% 0.003 sec. +30% 0.004 sec. +40% 0.004 sec. +50% 0.004 sec. +60% 0.004 sec. +70% 0.004 sec. +80% 0.004 sec. +90% 0.004 sec. +95% 0.005 sec. +99% 0.005 sec. +99.9% 0.005 sec. +99.99% 0.005 sec. +``` + + +### Benchmark aggregation query +This benchmark tests the performance of grouping and aggregation operations, demonstrating analytical query efficiency. + +```console +clickhouse-benchmark \ + --host localhost \ + --port 9000 \ + --iterations 10 \ + --concurrency 2 \ + --query " + SELECT + url, + count(*) AS total + FROM bench.hits + GROUP BY url + " +``` + +You should see output similar to: +```output +Queries executed: 10 (100%). + +localhost:9000, queries: 10, QPS: 67.152, RPS: 67151788.647, MiB/s: 1018.251, result RPS: 6715.179, result MiB/s: 0.153. + +0% 0.005 sec. +10% 0.005 sec. +20% 0.005 sec. +30% 0.007 sec. +40% 0.007 sec. +50% 0.007 sec. +60% 0.007 sec. +70% 0.007 sec. +80% 0.007 sec. +90% 0.007 sec. +95% 0.008 sec. +99% 0.008 sec. +99.9% 0.008 sec. +99.99% 0.008 sec. +``` + +### Benchmark concurrent read workload +Run multiple queries at the same time to evaluate how well ClickHouse handles higher user load and parallel processing.
+ +```console +clickhouse-benchmark \ + --host localhost \ + --port 9000 \ + --iterations 20 \ + --concurrency 8 \ + --query " + SELECT count(*) + FROM bench.hits + WHERE user_id % 10 = 0 + " +``` + +You should see output similar to: +```output +Loaded 1 queries. + +Queries executed: 20 (100%). + +localhost:9000, queries: 20, QPS: 99.723, RPS: 99723096.882, MiB/s: 760.827, result RPS: 99.723, result MiB/s: 0.001. + +0% 0.012 sec. +10% 0.012 sec. +20% 0.013 sec. +30% 0.017 sec. +40% 0.020 sec. +50% 0.029 sec. +60% 0.029 sec. +70% 0.038 sec. +80% 0.051 sec. +90% 0.062 sec. +95% 0.063 sec. +99% 0.078 sec. +99.9% 0.078 sec. +99.99% 0.078 sec. +``` + +### Benchmark insert performance +This benchmark measures bulk data ingestion speed and write latency under concurrent insert operations. + +```console +clickhouse-benchmark \ + --iterations 5 \ + --concurrency 4 \ + --query " + INSERT INTO bench.hits + SELECT + now(), + rand64(), + '/benchmark' + FROM numbers(500000) + " +``` + +You should see output similar to: +```output +Queries executed: 5 (100%). + +localhost:9000, queries: 5, QPS: 20.935, RPS: 10467305.309, MiB/s: 79.859, result RPS: 0.000, result MiB/s: 0.000. + +0% 0.060 sec. +10% 0.060 sec. +20% 0.060 sec. +30% 0.060 sec. +40% 0.068 sec. +50% 0.068 sec. +60% 0.068 sec. +70% 0.069 sec. +80% 0.069 sec. +90% 0.073 sec. +95% 0.073 sec. +99% 0.073 sec. +99.9% 0.073 sec. +99.99% 0.073 sec. +``` +### Benchmark Metrics Explanation + +- **QPS (Queries Per Second):** Indicates how many complete queries ClickHouse can execute per second. Higher QPS reflects stronger overall query execution capacity. +- **RPS (Rows Per Second):** Shows the number of rows processed every second. Very high RPS values demonstrate ClickHouse’s efficiency in scanning large datasets. +- **MiB/s (Throughput):** Represents data processed per second in mebibytes. High throughput highlights effective CPU, memory, and disk utilization during analytics workloads.
+- **Latency Percentiles (p50, p95, p99):** Measure query response times. p50 is the median latency, while p95 and p99 show tail latency under heavier load, which is critical for understanding performance consistency. +- **Iterations:** Number of times the same query is executed. More iterations improve measurement accuracy and stability. +- **Concurrency:** Number of parallel query clients. Higher concurrency tests ClickHouse’s ability to scale under concurrent workloads. +- **Result RPS / Result MiB/s:** Reflects the size and rate of returned query results. Low values are expected for aggregate queries like `COUNT(*)`. +- **Insert Benchmark Metrics:** Write tests measure ingestion speed and stability, where consistent latency indicates reliable bulk insert performance. + +### Benchmark summary on x86_64 +For comparison, the following results were collected by running the same benchmark on a `c4-standard-4` (4 vCPUs, 15 GB memory) x86_64 VM in GCP, running SUSE: + +| Test Category | Test Case | Query / Operation | Iterations | Concurrency | QPS | Rows / sec (RPS) | Throughput (MiB/s) | p50 Latency | p95 Latency | p99 Latency | +| ----------------------- | -------------- | ---------------------------------------------------------------------------------- | ---------- | ----------- | ------ | ---------------- | ------------------ | ----------- | ----------- | ----------- | +| Read | Filtered COUNT | `SELECT COUNT(*) FROM bench.hits WHERE url LIKE '/page/%'` | 10 | 1 | 63.22 | 63.22 M | 958.68 | 4 ms | 6 ms | 6 ms | +| Read / Aggregate | GROUP BY | `SELECT url, COUNT(*) FROM bench.hits GROUP BY url` | 10 | 2 | 67.23 | 67.23 M | 1019.43 | 7 ms | 8 ms | 8 ms | +| Read (High Concurrency) | Filtered COUNT | `SELECT COUNT(*) FROM bench.hits WHERE user_id % 10 = 0` | 20 | 8 | 102.54 | 102.54 M | 782.31 | 23 ms | 54 ms | 60 ms | +| Write | Bulk Insert | `INSERT INTO bench.hits SELECT now(), rand64(), '/benchmark' FROM numbers(500000)` | 5 | 4 | 22.09 | 11.04 M | 84.25 | 80 ms | 81 ms | 81 ms | + +### Benchmark summary on Arm64 +Results from the earlier run on the `c4a-standard-4` (4 vCPU, 16 GB memory) Arm64 VM in GCP (SUSE): + +| Test Category | Test Case | Query / Operation | Iterations | Concurrency | QPS | Rows / sec (RPS) | Throughput (MiB/s) | p50 Latency | p95 Latency | p99 Latency | +| ----------------------- | -------------- | -------------------------------------- | ---------: | ----------: | ----: | ---------------: | -----------------: | ----------: | ----------: | ----------: | +| Read | Filtered COUNT | `COUNT(*) WHERE url LIKE '/page/%'` | 10 | 1 | 63.17 | 63.17 M | 957.83 | 4 ms | 5 ms | 5 ms | +| Read / Aggregate | GROUP BY | `GROUP BY url` | 10 | 2 | 67.15 | 67.15 M | 1018.25 | 7 ms | 8 ms | 8 ms | +| Read (High Concurrency) | Filtered COUNT | `COUNT(*) WHERE user_id % 10 = 0` | 20 | 8 | 99.72 | 99.72 M | 760.83 | 29 ms | 63 ms | 78 ms | +| Write | Bulk Insert | `INSERT SELECT … FROM numbers(500000)` | 5 | 4 | 20.94 | 10.47 M | 79.86 | 68 ms | 73 ms | 73 ms | + +### ClickHouse Benchmark comparison insights + +- **High Read Throughput:** Simple filtered reads and aggregations achieved over **63–67 million rows/sec**, demonstrating strong scan and aggregation performance on Arm64. +- **Scales Under Concurrency:** At higher concurrency (8 clients), the system sustained nearly **100 million rows/sec**, showing efficient parallel execution and CPU utilization. +- **Fast Aggregations:** `GROUP BY` workloads delivered over **1 GiB/s throughput** with low single-digit millisecond latency at moderate concurrency. +- **Stable Write Performance:** Bulk inserts maintained consistent throughput with predictable latency, indicating reliable ingestion performance on C4A Arm cores.
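The percentile rows that `clickhouse-benchmark` prints are simply ranked raw latencies. As an illustration of how p50/p95/p99 fall out of a sample set, the sketch below applies a nearest-rank percentile to hypothetical latency values using `sort` and `awk` (the latency samples are invented for this example, and the exact percentile method `clickhouse-benchmark` uses internally may differ):

```shell
# Hypothetical per-query latencies in seconds (for example, from 10 iterations).
latencies="0.012 0.013 0.017 0.020 0.029 0.038 0.051 0.062 0.063 0.078"

for p in 50 95 99; do
  printf '%s\n' $latencies | sort -n | awk -v p="$p" '
    { v[NR] = $1 }   # collect the sorted samples
    END {
      # Nearest-rank index (1-based) for the requested percentile.
      k = int((p / 100) * (NR - 1) + 0.5) + 1
      printf "p%d: %s sec\n", p, v[k]
    }'
done
```

Viewed this way, it is clear why p99 is far more sensitive to a single slow query than p50, which is why the high-concurrency rows in the tables above show a much wider gap between median and tail latency than the single-client rows.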
diff --git a/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/images/gcp-vm.png b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/images/gcp-vm.png new file mode 100644 index 0000000000..0d1072e20d Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/images/gcp-vm.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/installation.md b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/installation.md new file mode 100644 index 0000000000..68c17a5b2c --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/installation.md @@ -0,0 +1,183 @@ +--- +title: Install ClickHouse +weight: 4 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Install ClickHouse on GCP VM +This section covers installing, configuring, and validating ClickHouse on a GCP SUSE Linux Arm64 VM. It includes system preparation, installing ClickHouse with the official installer, verifying the setup, starting the server, and connecting via the client. It also configures ClickHouse as a systemd service to ensure reliable, automatic startup on Arm-based environments. + +### Install required system packages +Refresh system repositories and install basic utilities needed to download and run ClickHouse. + +```console +sudo zypper refresh +sudo zypper install -y curl tar gzip sudo +``` + +### Download ClickHouse using the official installer +Download the official ClickHouse installation script, which works for both x86 and Arm64 systems. + +```console +curl https://clickhouse.com/ | sh +``` +This command downloads the ClickHouse binary into the current directory. + +### Install ClickHouse components +Run the installer with root privileges to install all ClickHouse components.
+ +```console +sudo ./clickhouse install +``` + +This installs: + +- **ClickHouse Server** – Runs the core database engine and handles all data storage, queries, and processing. +- **ClickHouse Client** – Provides a command-line interface to connect to the server and run SQL queries. +- **ClickHouse Local** – Allows running SQL queries on local files without starting a server. +- **Default configuration files (/etc/clickhouse-server)** – Stores server settings such as ports, users, storage paths, and performance tuning options. + +### Verify the installed version +Confirm that all ClickHouse components are installed correctly by checking their versions. + +```console +clickhouse --version +clickhouse server --version +clickhouse client --version +clickhouse local --version +``` + +You should see output similar to: +```output +ClickHouse local version 25.12.1.168 (official build). +ClickHouse server version 25.12.1.168 (official build). +ClickHouse client version 25.12.1.168 (official build). +``` + +### Create ClickHouse user and directories +Create a dedicated system user and required directories for data, logs, and runtime files. + +```console +sudo useradd -r -s /sbin/nologin clickhouse || true +sudo mkdir -p /var/lib/clickhouse +sudo mkdir -p /var/log/clickhouse-server +sudo mkdir -p /var/run/clickhouse-server +``` +Set proper ownership so ClickHouse can access these directories. + +```console +sudo chown -R clickhouse:clickhouse \ + /var/lib/clickhouse \ + /var/log/clickhouse-server \ + /var/run/clickhouse-server +``` + +### Start ClickHouse Server manually (validation) +Run the ClickHouse server in the foreground to confirm that the configuration is valid. + +```console +sudo clickhouse server --config-file=/etc/clickhouse-server/config.xml +``` +Keep this terminal open while testing. + +### Start the ClickHouse server +Start ClickHouse using the built-in start command for normal operation.
+ +```console +sudo clickhouse start +``` +### Connect using ClickHouse Client +Open a new terminal and connect to the ClickHouse server. + +```console +clickhouse client +``` +Run a test query to confirm connectivity. + +```sql +SELECT version(); +``` +You should see output similar to: +```output +SELECT version() + +Query id: ddd3ff38-c0c6-43c5-8ae1-d9d07af4c372 + + ┌─version()───┐ +1. │ 25.12.1.168 │ + └─────────────┘ + +1 row in set. Elapsed: 0.001 sec. +``` + +{{% notice Note %}} +Recent benchmarks show that ClickHouse (v22.5.1.2079-stable) delivers up to 26% performance improvements on Arm-based platforms, such as AWS Graviton3, compared to other architectures, highlighting the efficiency of its vectorized execution engine on modern Arm CPUs. +For more information, see the blog [Improve ClickHouse performance up to 26% by using AWS Graviton3](https://community.arm.com/arm-community-blogs/b/servers-and-cloud-computing-blog/posts/improve-clickhouse-performance-up-to-26-by-using-aws-graviton3). + +The [Arm Ecosystem Dashboard](https://developer.arm.com/ecosystem-dashboard/) lists ClickHouse v22.5.1.2079-stable as the minimum recommended version on Arm platforms. +{{% /notice %}} + +### Create a systemd service +Set up ClickHouse as a system service so it starts automatically on boot. + +```console +sudo tee /etc/systemd/system/clickhouse-server.service <<'EOF' +[Unit] +Description=ClickHouse Server +After=network.target + +[Service] +Type=simple +User=clickhouse +Group=clickhouse +ExecStart=/usr/bin/clickhouse server --config=/etc/clickhouse-server/config.xml +Restart=always +RestartSec=10 +LimitNOFILE=1048576 + +[Install] +WantedBy=multi-user.target +EOF +``` +**Reload systemd and enable the service:** + +```console +sudo systemctl daemon-reload +sudo systemctl enable clickhouse-server +sudo systemctl start clickhouse-server +``` + +### Verify ClickHouse service +Ensure the ClickHouse server is running correctly as a background service.
+ +```console +sudo systemctl status clickhouse-server +``` + +### Final validation +Reconnect to ClickHouse and confirm it is operational. + +```console +clickhouse client +``` + +```sql +SELECT version(); +``` + +You should see output similar to: +```output +SELECT version() + +Query id: ddd3ff38-c0c6-43c5-8ae1-d9d07af4c372 + + ┌─version()───┐ +1. │ 25.12.1.168 │ + └─────────────┘ + +1 row in set. Elapsed: 0.001 sec. +``` + +ClickHouse is now successfully installed, configured, and running on SUSE Linux Arm64 with automatic startup enabled. diff --git a/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/instance.md b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/instance.md new file mode 100644 index 0000000000..2b93bc950d --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/clickhouse-gcp/instance.md @@ -0,0 +1,31 @@ +--- +title: Create a Google Axion C4A Arm virtual machine on GCP +weight: 3 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Overview + +In this section, you will learn how to provision a Google Axion C4A Arm virtual machine on Google Cloud Platform (GCP) using the `c4a-standard-4` (4 vCPUs, 16 GB memory) machine type in the Google Cloud Console. + +{{% notice Note %}} +For support on GCP setup, see the Learning Path [Getting started with Google Cloud Platform](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/google/). +{{% /notice %}} + +## Provision a Google Axion C4A Arm VM in Google Cloud Console + +To create a virtual machine based on the C4A instance type: +- Navigate to the [Google Cloud Console](https://console.cloud.google.com/). +- Go to **Compute Engine > VM Instances** and select **Create Instance**. +- Under **Machine configuration**: + - Populate fields such as **Instance name**, **Region**, and **Zone**. + - Set **Series** to `C4A`. + - Select `c4a-standard-4` as the machine type.
+ + ![Create a Google Axion C4A Arm virtual machine in the Google Cloud Console with c4a-standard-4 selected alt-text#center](images/gcp-vm.png "Creating a Google Axion C4A Arm virtual machine in Google Cloud Console") + +- Under **OS and Storage**, select **Change**, then choose an Arm64-based OS image. For this Learning Path, use **SUSE Linux Enterprise Server**. Pick your preferred version of the operating system, and ensure you select the **Arm image** variant. Click **Select**. +- Under **Networking**, enable **Allow HTTP traffic**. +- Click **Create** to launch the instance.
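The console steps above can also be scripted with the `gcloud` CLI. The sketch below composes an equivalent create command; the instance name, zone, and the SUSE Arm64 image family and project are assumptions, so adjust them for your project (you can list the available Arm images with `gcloud compute images list`). The command is echoed first and only executed when `gcloud` is installed:

```shell
# Hypothetical values: change the instance name, zone, and image settings
# to match your project before running.
INSTANCE=clickhouse-c4a
ZONE=us-central1-a

# Equivalent of the console flow: C4A machine type, a SUSE Arm64 image
# (assumed family/project), and the http-server tag that the
# "Allow HTTP traffic" checkbox applies.
CMD="gcloud compute instances create ${INSTANCE} \
  --zone=${ZONE} \
  --machine-type=c4a-standard-4 \
  --image-family=sles-15-arm64 \
  --image-project=suse-cloud \
  --tags=http-server"

echo "${CMD}"
# Run it only if the gcloud CLI is available on this machine.
command -v gcloud >/dev/null 2>&1 && eval "${CMD}" || true
```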