# Migrate {{ech}} data to {{serverless-full}} with {{ls}} [migrate-with-ls]

[{{ls}}](logstash://reference/index.md) is a data collection engine that uses a large ecosystem of [plugins](logstash-docs-md://lsr/index.md) to collect, process, and forward data from a variety of sources to a variety of destinations. Here we focus on using the [Elasticsearch input](logstash-docs-md://lsr/plugins-inputs-elasticsearch.md) plugin to read from your {{ech}} deployment, and the [Elasticsearch output](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) plugin to write to your {{serverless-full}} project.

Familiarity with {{ech}}, {{es}}, and {{ls}} is helpful, but not required.

:::{admonition} Basic migration
This guide focuses on a basic data migration scenario for moving static data from an {{ech}} deployment to a {{serverless-full}} project.

Dashboards, visualizations, pipelines, templates, and other {{kib}} assets must be migrated separately using the {{kib}} [export/import APIs](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-saved-objects) or recreated manually.
:::

The Elasticsearch input plugin offers [additional configuration options](#additional-config) that can support more advanced use cases and migrations. More information about those options is available near the end of this topic.
## Prerequisites [migrate-prereqs]
- {{ech}} deployment with data to migrate

This guide walks you through these steps:

* [Configure {{ls}}](#configure-ls)
* [Run {{ls}}](#run-ls)
* [Verify data migration](#verify-migration)

### Step 1: Configure {{ls}} [configure-ls]

Create a new {{ls}} [pipeline configuration file](logstash://reference/creating-logstash-pipeline.md) (_migration.conf_) using the [Elasticsearch input](logstash-docs-md://lsr/plugins-inputs-elasticsearch.md) and the [Elasticsearch output](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md):

- The **input** reads from your {{ech}} deployment.
- The **output** writes to your {{serverless-full}} project.

#### Input: Read from your {{ech}} deployment [read-from-ech]
```
input {
  elasticsearch {
    cloud_id => "<HOSTED_DEPLOYMENT_CLOUD_ID>" # Connects Logstash to your Elastic Cloud Hosted deployment using its Cloud ID.
    api_key => "<HOSTED_API_KEY>" # API key for authenticating the connection.
    index => "index_pattern*" # The index or index pattern (such as logs-*,metrics-*).
    docinfo => true # Includes metadata about each document, such as its original index name or doc ID. This setting preserves index names on the destination cluster.
  }
}
```

:::{tip}
To migrate multiple indexes at the same time, use a wildcard in the index name. For example, `index => "logs-*"` migrates all indices starting with `logs-`.
:::
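
If you only want to migrate a subset of documents, the input also accepts a `query` option. A minimal sketch, assuming a `@timestamp` field and a 30-day window (both illustrative):

```
input {
  elasticsearch {
    cloud_id => "<HOSTED_DEPLOYMENT_CLOUD_ID>"
    api_key => "<HOSTED_API_KEY>"
    index => "logs-*"
    docinfo => true
    # Illustrative filter: only migrate documents from the last 30 days.
    query => '{ "query": { "range": { "@timestamp": { "gte": "now-30d/d" } } } }'
  }
}
```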

#### Output: Write to your {{serverless-full}} project [write-to-serverless]

```
output {
  elasticsearch {
    hosts => [ "https://<SERVERLESS_HOST_URL>:443" ] # Your Serverless project URL; set the port to 443.
    api_key => "<SERVERLESS_API_KEY>" # API key for your Serverless project, in the id:api_key format (see the tip below).
    index => "%{[@metadata][input][elasticsearch][_index]}" # Write each document to the index it came from, preserving index names.
  }
}
```

:::{tip}
When you create an [API key for {{ls}}](logstash://reference/connecting-to-serverless.md#api-key), be sure to select **Logstash** from the **API key** format dropdown. This option formats the API key in the correct `id:api_key` format required by {{ls}}.
:::

### Step 2: Run {{ls}} [run-ls]

Start {{ls}}:

```
bin/logstash -f migration.conf
```
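
To catch configuration errors before any data starts moving, you can first validate the pipeline file and exit, using a standard {{ls}} flag:

```
bin/logstash -f migration.conf --config.test_and_exit
```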

### Step 3: Verify data migration [verify-migration]

After running {{ls}}, verify that the data has been migrated successfully:
1. Log in to your {{serverless-full}} project.
2. Navigate to Index Management and select the relevant index.
3. Verify that the migrated data is visible.
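
As an extra check, you can compare document counts between the source deployment and the Serverless project, for example with the count API in each console (the index pattern is illustrative):

```
GET logs-*/_count
```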

## Additional configuration options [additional-config]

The Elasticsearch input includes more [configuration options](logstash-docs-md://lsr/plugins-inputs-elasticsearch.md#plugins-inputs-elasticsearch-options) that offer greater flexibility and can handle more advanced migrations.
Some options that can be particularly relevant for a migration use case are:

- `size` - Controls how many documents are retrieved per scroll. Larger values increase throughput, but use more memory.
- `slices` - Enables parallel reads from the source index.
- `scroll` - Adjusts how long Elasticsearch keeps the scroll context alive.
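
For example, a tuned input for a large migration might look like the following sketch (the values are illustrative starting points, not recommendations):

```
input {
  elasticsearch {
    cloud_id => "<HOSTED_DEPLOYMENT_CLOUD_ID>"
    api_key => "<HOSTED_API_KEY>"
    index => "logs-*"
    docinfo => true
    size => 2000   # Documents per scroll page (the default is 1000).
    slices => 4    # Read the source index with four parallel sliced scrolls.
    scroll => "5m" # Keep each scroll context alive for five minutes.
  }
}
```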

The {{es}} input plugin also supports cursor-like pagination functionality, unlocking more advanced migration features, including the ability to resume migration tasks after a {{ls}} restart and support for ongoing data migration over time. The tracking field options are:

- `tracking_field` - The plugin records the value of this field for the last document retrieved in a run.
- `tracking_field_seed` - Sets the starting value for `tracking_field` if no `last_run_metadata_path` is set.
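
A minimal sketch of an ongoing migration that tracks ingest time across runs, assuming an `event.ingested` field and a five-minute schedule (both illustrative); on each run the plugin substitutes the stored tracking value for the `:last_value` placeholder in the query:

```
input {
  elasticsearch {
    cloud_id => "<HOSTED_DEPLOYMENT_CLOUD_ID>"
    api_key => "<HOSTED_API_KEY>"
    index => "logs-*"
    docinfo => true
    tracking_field => "[event][ingested]" # Remember the last value seen in each run.
    schedule => "*/5 * * * *"             # Re-run the query every five minutes.
    query => '{ "query": { "range": { "event.ingested": { "gt": ":last_value" } } }, "sort": [ { "event.ingested": "asc" } ] }'
  }
}
```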

Check out the Elasticsearch input plugin documentation for more details and code samples: [Tracking a field's value across runs](logstash-docs-md://lsr/plugins-inputs-elasticsearch.md#plugins-inputs-elasticsearch-cursor).