Commit e591bf9 (parent 364a82f)

Add more config options and refine

File tree: 1 file changed (+50, -26 lines)

manage-data/migrate/migrate-with-logstash.md
# Migrate {{ech}} data to {{serverless-full}} with {{ls}} [migrate-with-ls]

[{{ls}}](logstash://reference/index.md) is a data collection engine that uses a large ecosystem of [plugins](logstash-docs-md://lsr/index.md) to collect, process, and forward data from a variety of sources to a variety of destinations. Here we focus on using the [Elasticsearch input](logstash-docs-md://lsr/plugins-inputs-elasticsearch.md) plugin to read from your {{ech}} deployment and the [Elasticsearch output](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) plugin to write to your {{serverless-full}} project.
Familiarity with {{ech}}, {{es}}, and {{ls}} is helpful, but not required.

:::{admonition} Basic migration
This guide focuses on a basic data migration scenario for moving static data from an {{ech}} deployment to a {{serverless-full}} project.

Dashboards, visualizations, pipelines, templates, and other {{kib}} assets must be migrated separately using the {{kib}} [export/import APIs](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-saved-objects) or recreated manually.
:::

The Elasticsearch input plugin offers [additional configuration options](#additional-config) that can support more advanced use cases and migrations. More information about those options is available near the end of this topic.

## Prerequisites [migrate-prereqs]

- {{ech}} deployment with data to migrate
* [Verify data migration](#verify-migration)

### Step 1: Configure {{ls}} [configure-ls]

Create a new {{ls}} [pipeline configuration file](logstash://reference/creating-logstash-pipeline.md) (_migration.conf_) using the [Elasticsearch input](logstash-docs-md://lsr/plugins-inputs-elasticsearch.md) and the [Elasticsearch output](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md):

- The **input** reads from your {{ech}} deployment.
- The **output** writes to your {{serverless-full}} project.

#### Input: Read from your {{ech}} deployment [read-from-ech]

```
input {
  elasticsearch {
    cloud_id => "<HOSTED_DEPLOYMENT_CLOUD_ID>" # Connects Logstash to your Elastic Cloud Hosted deployment using its Cloud ID.
    api_key => "<HOSTED_API_KEY>" # API key for authenticating the connection.
    index => "index_pattern*" # The index or index pattern to read (such as logs-*,metrics-*).
    docinfo => true # Includes metadata about each document, such as its original index name or doc ID. This setting preserves index names on the destination cluster.
  }
}
```

:::{tip}
To migrate multiple indexes at the same time, use a wildcard in the index name. For example, `index => "logs-*"` migrates all indices starting with `logs-`.
:::

#### Output: Write to your {{serverless-full}} project [write-to-serverless]

```
output {
  elasticsearch {
    hosts => [ "https://<SERVERLESS_HOST_URL>:443" ] # Your Serverless project URL; set the port to 443.
    # ... additional output settings not shown in this diff
  }
}
```

:::{tip}
When you create an [API key for {{ls}}](logstash://reference/connecting-to-serverless.md#api-key), be sure to select **Logstash** from the **API key** format dropdown. This option formats the API key in the correct `id:api_key` format required by {{ls}}.
:::

### Step 2: Run {{ls}} [run-ls]

Start {{ls}}:

```
bin/logstash -f migration.conf
```
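
Before loading any data, it can be useful to check the pipeline file for syntax errors. A quick sketch using {{ls}}'s built-in config test flag:

```
bin/logstash -f migration.conf --config.test_and_exit
```

{{ls}} parses _migration.conf_, reports any configuration errors, and exits without starting the pipeline.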

### Step 3: Verify data migration [verify-migration]

After running {{ls}}, verify that the data has been migrated successfully:

1. Log in to your {{serverless-full}} project.
2. Navigate to Index Management and select the relevant index.
3. Confirm that the migrated data is visible.
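
As an alternative to checking in the UI, you can compare document counts from the command line. A sketch using the `_cat/indices` API, where `<SERVERLESS_HOST_URL>` and `<SERVERLESS_API_KEY>` are placeholders for your project endpoint and API key:

```
curl -s -H "Authorization: ApiKey <SERVERLESS_API_KEY>" \
  "https://<SERVERLESS_HOST_URL>:443/_cat/indices/index_pattern*?v&h=index,docs.count"
```

Running the same request against the source deployment lets you confirm that the counts match.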

## Additional configuration options [additional-config]

The Elasticsearch input includes more [configuration options](logstash-docs-md://lsr/plugins-inputs-elasticsearch.md#plugins-inputs-elasticsearch-options) that offer greater flexibility and can handle more advanced migrations. Some options that can be particularly relevant for a migration use case are:

- `size` - Controls how many documents are retrieved per scroll. Larger values increase throughput, but use more memory.
- `slices` - Enables parallel reads from the source index.
- `scroll` - Adjusts how long Elasticsearch keeps the scroll context alive.

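For illustration, these options can be added to the input section from Step 1. The values shown here are hypothetical starting points, not recommendations; tune them for your data volume and memory budget:

```
input {
  elasticsearch {
    cloud_id => "<HOSTED_DEPLOYMENT_CLOUD_ID>"
    api_key => "<HOSTED_API_KEY>"
    index => "index_pattern*"
    docinfo => true
    size => 2000    # Retrieve 2000 documents per scroll page instead of the default 1000.
    slices => 4     # Read the source index with 4 parallel slices.
    scroll => "5m"  # Keep each scroll context alive for 5 minutes.
  }
}
```
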
### Field tracking options [field-tracking]
{applies_to}`serverless: preview` {applies_to}`stack: preview`

The {{es}} input plugin supports cursor-like pagination functionality, unlocking more advanced migration features, including the ability to resume migration tasks after a {{ls}} restart and support for ongoing data migration over time. The field tracking options are:

- `tracking_field` - The plugin records the value of this field for the last document retrieved in a run.
- `tracking_field_seed` - Sets the starting value for `tracking_field` if no `last_run_metadata_path` is set.
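
As a sketch only (the field name and path below are illustrative placeholders; check the plugin documentation for the complete pattern, including how the query references the tracked value), field tracking might look like:

```
input {
  elasticsearch {
    cloud_id => "<HOSTED_DEPLOYMENT_CLOUD_ID>"
    api_key => "<HOSTED_API_KEY>"
    index => "index_pattern*"
    docinfo => true
    schedule => "* * * * *"                               # Re-run every minute for ongoing migration.
    tracking_field => "[event][ingested]"                 # Remember the last value of this field per run.
    last_run_metadata_path => "/tmp/ls_migration_cursor"  # Persist the cursor across restarts.
  }
}
```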

Check out the Elasticsearch input plugin documentation for more details and code samples: [Tracking a field's value across runs](logstash-docs-md://lsr/plugins-inputs-elasticsearch.md#plugins-inputs-elasticsearch-cursor).
