Migrating ES Data with Logstash 7.17

Increase the Logstash bulk batch size so that each bulk write carries roughly 5-15 MB of data; this speeds up cluster-to-cluster migration. For example, raise pipeline.batch.size from the default 125 to 5000.

vi config/pipelines.yml
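
A minimal pipelines.yml sketch; the pipeline id and config path follow the ConfigMap shown later in this post, and the values here are only an example:

- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"
  pipeline.batch.size: 5000   # raised from the default 125
  pipeline.batch.delay: 50    # default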

The following error was reported:

[2023-08-08T06:07:13,418][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@10.233.198.134:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://10.233.198.134:9200/][Manticore::UnknownException] Certificate for <10.233.198.134> doesn't match any of the subject alternative names: [quickstart-es-http.elastic-system-bak.es.local, quickstart-es-http, quickstart-es-http.elastic-system-bak.svc, quickstart-es-http.elastic-system-bak, *.quickstart-es-master-nodes.elastic-system-bak.svc, *.quickstart-es-data-nodes.elastic-system-bak.svc]"}
[2023-08-08T06:07:16,516][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2023-08-08T06:07:19,014][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}

So Logstash needs to connect using the domain name instead of the IP address, so that the host matches one of the certificate's subject alternative names. That name, however, then fails to resolve:

[2023-08-08T06:09:39,350][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@quickstart-es-http.elastic-system-bak.es.local:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://quickstart-es-http.elastic-system-bak.es.local:9200/][Manticore::ResolutionFailure] quickstart-es-http.elastic-system-bak.es.local: Name or service not known"}
[2023-08-08T06:09:44,356][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"quickstart-es-http.elastic-system-bak.es.local", :exception=>Manticore::ResolutionFailure, :cause=>#<Java::JavaNet::UnknownHostException: quickstart-es-http.elastic-system-bak.es.local>}
[2023-08-08T06:09:44,357][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@quickstart-es-http.elastic-system-bak.es.local:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://quickstart-es-http.elastic-system-bak.es.local:9200/][Manticore::ResolutionFailure] quickstart-es-http.elastic-system-bak.es.local"}
[2023-08-08T06:09:45,989][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
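
The in-cluster FQDN only resolves inside the destination cluster. One option (a sketch, not what the original setup used) is a hostAliases entry in the Logstash Deployment so the certificate hostname resolves to the ES service IP seen in the logs above; the ConfigMap below instead uses the in-cluster service name and disables certificate verification.

spec:
  template:
    spec:
      hostAliases:
        - ip: "10.233.198.134"   # ES HTTP service IP from the error log above
          hostnames:
            - "quickstart-es-http.elastic-system-bak.es.local"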

For cross-cluster migration, the config-map.yaml used in k8s is shown below for reference:

---
apiVersion: v1
data:
  logstash.conf: |-
    input {
      elasticsearch {
        hosts =>  ["elastic-es-http.logging.es.local"]
        user  => "elastic"
        password => "123456"
        index => "out_carry_statistics"
        docinfo=>true
        slices => 5
        size => 5000
        ssl  => true
        ca_file => "/etc/es1/ca.crt"

      }
    }
    filter {
    }
    output {
      elasticsearch {
        hosts => ["https://quickstart-es-http.elastic-cluster-bak2:9200"]
        user  => "elastic"
        password => "123456"
        ilm_enabled => false
        manage_template => false
        index => "out_carry_statistics"
        ssl  => true
        ssl_certificate_verification => false
        cacert => "/etc/es2/ca.crt"
      }
      stdout { codec => rubydebug { metadata => true } }
    }
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: logging
  resourceVersion: '156197493'
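
The CA files referenced above (/etc/es1/ca.crt and /etc/es2/ca.crt) must be mounted into the Logstash pod. A sketch of the relevant pod-spec fragment, assuming the ECK-generated *-es-http-certs-public secrets hold the CAs (the secret names are assumptions for this environment):

      containers:
        - name: logstash
          volumeMounts:
            - name: es1-ca
              mountPath: /etc/es1
            - name: es2-ca
              mountPath: /etc/es2
      volumes:
        - name: es1-ca
          secret:
            secretName: elastic-es-http-certs-public      # source cluster CA (assumed)
        - name: es2-ca
          secret:
            secretName: quickstart-es-http-certs-public   # destination cluster CA (assumed)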

Start the Logstash full-migration task in the background:

nohup bin/logstash -f config/es2es_all.conf >/dev/null 2>&1 &
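
To check progress, compare document counts between source and destination; a sketch using the index name from the ConfigMap above:

GET _cat/indices/out_carry_statistics?v&h=index,docs.count,store.size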

4. Incremental Data Migration

Incremental migration with Logstash requires an incremental marker in the data; write the corresponding ES DSL query so that only the incremental data is selected.

Enabling the Logstash schedule then triggers the incremental migration periodically.

The incremental migration configuration, es2es_kibana_sample_data_logs.conf, for reference:

input{
    elasticsearch{
        # Source ES address
        hosts =>  ["http://localhost:9200"]
        # Username and password for a secured cluster
        user => "xxxxxx"
        password => "xxxxxx"
        # Comma-separated list of indices to migrate
        index => "kibana_sample_data_logs"
        # Query incremental data by time range; this example fetches the last 5 minutes
        query => '{"query":{"range":{"@timestamp":{"gte":"now-5m","lte":"now/m"}}}}'
        # Scheduled task, runs once per minute
        schedule => "* * * * *"
        scroll => "5m"
        docinfo=>true
        size => 5000
    }
}

filter {
  # Remove fields that Logstash adds on its own
  mutate {
    remove_field => ["@timestamp", "@version"]
  }
}


output{
    elasticsearch{
        # Destination ES address
        hosts => ["http://new-cluster-xxxxxx:9200"]
        # Username and password for a secured cluster
        user => "elastic"
        password => "xxxxxx"
        # Destination index name; this keeps it identical to the source
        index => "%{[@metadata][_index]}"
        # Destination index type; this keeps it identical to the source
        document_type => "%{[@metadata][_type]}"
        # _id for destination documents; if the original _id need not be preserved, remove this line for better performance
        document_id => "%{[@metadata][_id]}"
        ilm_enabled => false
        manage_template => false
    }

    # Debug output; remove for the actual migration
    #stdout { codec => rubydebug { metadata => true }}
}

Start the Logstash incremental-migration task in the background:

nohup bin/logstash -f config/es2es_kibana_sample_data_logs.conf >/dev/null 2>&1 &

Query the most recently updated records in Kibana to verify that the incremental data has been synced.

In the example query, the index name kibana_sample_data_logs, the timestamp field @timestamp, and the 5-minute time range can all be adjusted to match the actual setup.

GET kibana_sample_data_logs/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-5m",
        "lte": "now/m"
      }
    }
  },
  "sort": [
    {
      "@timestamp": {
        "order": "desc"
      }
    }
  ]
}

Note the error mentioning elasticsearch:9200: it is most likely caused by Logstash's default settings. config/logstash.yml ships with the following, which logstash.licensechecker.licensereader tries to reach:

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
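
If monitoring data does not need to be shipped to that default host, either point xpack.monitoring.elasticsearch.hosts at a reachable cluster or disable monitoring, as the ConfigMap later in this post does:

http.host: "0.0.0.0"
xpack.monitoring.enabled: false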

Logstash deployment manifests:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
data:
  logstash.conf: |-
    input {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      }
    }
    filter {
      grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
      geoip {
        source => "clientip"
      }
    }
    output {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      }
    }

ds-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image:  www.harbor.mobi/bcs_dev/logstash:7.12.1
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/pipeline/
          ports:
            - containerPort: 5044
          resources:
            limits:
              memory: 2Gi
            requests:
              memory: 1Gi
      volumes:
        - name: config-volume
          configMap:
            name: logstash-config

For reference, the official documentation is authoritative:

https://www.elastic.co/guide/en/logstash/7.17/plugins-outputs-elasticsearch.html

Generating a Java heap dump:

jmap -dump:format=b,file=test.dump 4849
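
The last argument (4849 above) is the Logstash JVM PID; it can be looked up with jps, for example:

jps -l | grep -i logstash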

When running Logstash 8 on k8s, note that the configuration must be adjusted to resolve the following error:

[logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}

The fix is in the Logstash configuration; the full ConfigMap is shown below.
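
The relevant pieces, extracted from the ConfigMap below, silence the license-reader logger and disable X-Pack monitoring:

# log4j2.file.properties
logger.licensereader.name = logstash.licensechecker.licensereader
logger.licensereader.level = error

# logstash.yml
xpack.monitoring.enabled: false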

---
apiVersion: v1
data:
  jvm.options: |-
    -Xms7g
    -Xmx10g
    11-13:-XX:+UseConcMarkSweepGC
    11-13:-XX:CMSInitiatingOccupancyFraction=75
    11-13:-XX:+UseCMSInitiatingOccupancyOnly
    -Djava.awt.headless=true
    -Dfile.encoding=UTF-8
    -Djruby.compile.invokedynamic=true
    -XX:+HeapDumpOnOutOfMemoryError
    -Djava.security.egd=file:/dev/urandom
    -Dlog4j2.isThreadContextMapInheritable=true
  log4j2.file.properties: >
    status = error

    name = LogstashPropertiesConfig


    appender.console.type = Console

    appender.console.name = plain_console

    appender.console.layout.type = PatternLayout

    appender.console.layout.pattern =
    [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]}
    %m%n


    appender.json_console.type = Console

    appender.json_console.name = json_console

    appender.json_console.layout.type = JSONLayout

    appender.json_console.layout.compact = true

    appender.json_console.layout.eventEol = true


    appender.rolling.type = RollingFile

    appender.rolling.name = plain_rolling

    appender.rolling.fileName = ${sys:ls.logs}/logstash-plain.log

    appender.rolling.filePattern =
    ${sys:ls.logs}/logstash-plain-%d{yyyy-MM-dd}-%i.log.gz

    appender.rolling.policies.type = Policies

    appender.rolling.policies.time.type = TimeBasedTriggeringPolicy

    appender.rolling.policies.time.interval = 1

    appender.rolling.policies.time.modulate = true

    appender.rolling.layout.type = PatternLayout

    appender.rolling.layout.pattern =
    [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]}
    %m%n

    appender.rolling.policies.size.type = SizeBasedTriggeringPolicy

    appender.rolling.policies.size.size = 100MB

    appender.rolling.strategy.type = DefaultRolloverStrategy

    appender.rolling.strategy.max = 30

    appender.rolling.avoid_pipelined_filter.type = PipelineRoutingFilter


    appender.json_rolling.type = RollingFile

    appender.json_rolling.name = json_rolling

    appender.json_rolling.fileName = ${sys:ls.logs}/logstash-json.log

    appender.json_rolling.filePattern =
    ${sys:ls.logs}/logstash-json-%d{yyyy-MM-dd}-%i.log.gz

    appender.json_rolling.policies.type = Policies

    appender.json_rolling.policies.time.type = TimeBasedTriggeringPolicy

    appender.json_rolling.policies.time.interval = 1

    appender.json_rolling.policies.time.modulate = true

    appender.json_rolling.layout.type = JSONLayout

    appender.json_rolling.layout.compact = true

    appender.json_rolling.layout.eventEol = true

    appender.json_rolling.policies.size.type = SizeBasedTriggeringPolicy

    appender.json_rolling.policies.size.size = 100MB

    appender.json_rolling.strategy.type = DefaultRolloverStrategy

    appender.json_rolling.strategy.max = 30

    appender.json_rolling.avoid_pipelined_filter.type = PipelineRoutingFilter


    appender.routing.type = PipelineRouting

    appender.routing.name = pipeline_routing_appender

    appender.routing.pipeline.type = RollingFile

    appender.routing.pipeline.name = appender-${ctx:pipeline.id}

    appender.routing.pipeline.fileName =
    ${sys:ls.logs}/pipeline_${ctx:pipeline.id}.log

    appender.routing.pipeline.filePattern =
    ${sys:ls.logs}/pipeline_${ctx:pipeline.id}.%i.log.gz

    appender.routing.pipeline.layout.type = PatternLayout

    appender.routing.pipeline.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n

    appender.routing.pipeline.policy.type = SizeBasedTriggeringPolicy

    appender.routing.pipeline.policy.size = 100MB

    appender.routing.pipeline.strategy.type = DefaultRolloverStrategy

    appender.routing.pipeline.strategy.max = 30


    rootLogger.level = ${sys:ls.log.level}

    rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console

    rootLogger.appenderRef.rolling.ref = ${sys:ls.log.format}_rolling

    rootLogger.appenderRef.routing.ref = pipeline_routing_appender


    # Slowlog


    appender.console_slowlog.type = Console

    appender.console_slowlog.name = plain_console_slowlog

    appender.console_slowlog.layout.type = PatternLayout

    appender.console_slowlog.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n


    appender.json_console_slowlog.type = Console

    appender.json_console_slowlog.name = json_console_slowlog

    appender.json_console_slowlog.layout.type = JSONLayout

    appender.json_console_slowlog.layout.compact = true

    appender.json_console_slowlog.layout.eventEol = true


    appender.rolling_slowlog.type = RollingFile

    appender.rolling_slowlog.name = plain_rolling_slowlog

    appender.rolling_slowlog.fileName =
    ${sys:ls.logs}/logstash-slowlog-plain.log

    appender.rolling_slowlog.filePattern =
    ${sys:ls.logs}/logstash-slowlog-plain-%d{yyyy-MM-dd}-%i.log.gz

    appender.rolling_slowlog.policies.type = Policies

    appender.rolling_slowlog.policies.time.type = TimeBasedTriggeringPolicy

    appender.rolling_slowlog.policies.time.interval = 1

    appender.rolling_slowlog.policies.time.modulate = true

    appender.rolling_slowlog.layout.type = PatternLayout

    appender.rolling_slowlog.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n

    appender.rolling_slowlog.policies.size.type = SizeBasedTriggeringPolicy

    appender.rolling_slowlog.policies.size.size = 100MB

    appender.rolling_slowlog.strategy.type = DefaultRolloverStrategy

    appender.rolling_slowlog.strategy.max = 30


    appender.json_rolling_slowlog.type = RollingFile

    appender.json_rolling_slowlog.name = json_rolling_slowlog

    appender.json_rolling_slowlog.fileName =
    ${sys:ls.logs}/logstash-slowlog-json.log

    appender.json_rolling_slowlog.filePattern =
    ${sys:ls.logs}/logstash-slowlog-json-%d{yyyy-MM-dd}-%i.log.gz

    appender.json_rolling_slowlog.policies.type = Policies

    appender.json_rolling_slowlog.policies.time.type = TimeBasedTriggeringPolicy

    appender.json_rolling_slowlog.policies.time.interval = 1

    appender.json_rolling_slowlog.policies.time.modulate = true

    appender.json_rolling_slowlog.layout.type = JSONLayout

    appender.json_rolling_slowlog.layout.compact = true

    appender.json_rolling_slowlog.layout.eventEol = true

    appender.json_rolling_slowlog.policies.size.type = SizeBasedTriggeringPolicy

    appender.json_rolling_slowlog.policies.size.size = 100MB

    appender.json_rolling_slowlog.strategy.type = DefaultRolloverStrategy

    appender.json_rolling_slowlog.strategy.max = 30


    logger.slowlog.name = slowlog

    logger.slowlog.level = trace

    logger.slowlog.appenderRef.console_slowlog.ref =
    ${sys:ls.log.format}_console_slowlog

    logger.slowlog.appenderRef.rolling_slowlog.ref =
    ${sys:ls.log.format}_rolling_slowlog

    logger.slowlog.additivity = false


    logger.licensereader.name = logstash.licensechecker.licensereader

    logger.licensereader.level = error


    # Silence http-client by default

    logger.apache_http_client.name = org.apache.http

    logger.apache_http_client.level = fatal


    # Deprecation log

    appender.deprecation_rolling.type = RollingFile

    appender.deprecation_rolling.name = deprecation_plain_rolling

    appender.deprecation_rolling.fileName =
    ${sys:ls.logs}/logstash-deprecation.log

    appender.deprecation_rolling.filePattern =
    ${sys:ls.logs}/logstash-deprecation-%d{yyyy-MM-dd}-%i.log.gz

    appender.deprecation_rolling.policies.type = Policies

    appender.deprecation_rolling.policies.time.type = TimeBasedTriggeringPolicy

    appender.deprecation_rolling.policies.time.interval = 1

    appender.deprecation_rolling.policies.time.modulate = true

    appender.deprecation_rolling.layout.type = PatternLayout

    appender.deprecation_rolling.layout.pattern =
    [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]}
    %m%n

    appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy

    appender.deprecation_rolling.policies.size.size = 100MB

    appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy

    appender.deprecation_rolling.strategy.max = 30


    logger.deprecation.name = org.logstash.deprecation, deprecation

    logger.deprecation.level = WARN

    logger.deprecation.appenderRef.deprecation_rolling.ref =
    deprecation_plain_rolling

    logger.deprecation.additivity = false


    logger.deprecation_root.name = deprecation

    logger.deprecation_root.level = WARN

    logger.deprecation_root.appenderRef.deprecation_rolling.ref =
    deprecation_plain_rolling

    logger.deprecation_root.additivity = false
  log4j2.properties: >
    status = error

    name = LogstashPropertiesConfig


    appender.console.type = Console

    appender.console.name = plain_console

    appender.console.layout.type = PatternLayout

    appender.console.layout.pattern =
    [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]}
    %m%n


    appender.json_console.type = Console

    appender.json_console.name = json_console

    appender.json_console.layout.type = JSONLayout

    appender.json_console.layout.compact = true

    appender.json_console.layout.eventEol = true


    rootLogger.level = ${sys:ls.log.level}

    rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
  logstash-sample.conf: |
    # Sample Logstash configuration for creating a simple
    # Beats -> Logstash -> Elasticsearch pipeline.

    input {
      beats {
        port => 5044
      }
    }

    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        #user => "elastic"
        #password => "changeme"
      }
    }
  logstash.yml: |-
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
    xpack.monitoring.enabled: false
  pipelines.yml: |
    # This file is where you define your pipelines. You can define multiple.
    # For more information on multiple pipelines, see the documentation:
    #   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

    - pipeline.id: main
      path.config: "/usr/share/logstash/pipeline"
  startup.options: >
    ################################################################################

    # These settings are ONLY used by $LS_HOME/bin/system-install to create a
    custom

    # startup script for Logstash and is not used by Logstash itself. It should

    # automagically use the init system (systemd, upstart, sysv, etc.) that your

    # Linux distribution uses.

    #

    # After changing anything here, you need to re-run
    $LS_HOME/bin/system-install

    # as root to push the changes to the init script.

    ################################################################################


    # Override Java location

    #JAVACMD=/usr/bin/java


    # Set a home directory

    LS_HOME=/usr/share/logstash


    # logstash settings directory, the path which contains logstash.yml

    LS_SETTINGS_DIR=/etc/logstash


    # Arguments to pass to logstash

    LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"


    # Arguments to pass to java

    LS_JAVA_OPTS=""


    # pidfiles aren't used the same way for upstart and systemd; this is for
    sysv users.

    LS_PIDFILE=/var/run/logstash.pid


    # user and group id to be invoked as

    LS_USER=logstash

    LS_GROUP=logstash


    # Enable GC logging by uncommenting the appropriate lines in the GC logging

    # section in jvm.options

    LS_GC_LOG_FILE=/var/log/logstash/gc.log


    # Open file limit

    LS_OPEN_FILES=16384


    # Nice level

    LS_NICE=19


    # Change these to have the init script named and described differently

    # This is useful when running multiple instances of Logstash on the same

    # physical box or vm

    SERVICE_NAME="logstash"

    SERVICE_DESCRIPTION="logstash"


    # If you need to run a command or script before launching Logstash, put it

    # between the lines beginning with `read` and `EOM`, and uncomment those
    lines.

    ###

    ## read -r -d '' PRESTART << EOM

    ## EOM
kind: ConfigMap
metadata:
  name: log-config
  namespace: elastic-system-bak
  resourceVersion: '47725627'

logstash.conf

---
apiVersion: v1
data:
  logstash.conf: |-
    input {
      elasticsearch {
        hosts =>  ["https://quickstart-es-http.elastic-system:9200"]
        user  => "elastic"
        password => "Y0qo5SUP206M1Vdl1h73ak7p"
        index => "rrc_ue_statistics"
        docinfo=>true
        size => 1000
        ssl  => true
        ca_file => "/etc/es1/ca.crt"
      }
    }
    filter {
    }
    output {
      elasticsearch {
        hosts => ["https://quickstart-es-http.elastic-system-bak:9200"]
        user  => "elastic"
        password => "zl354Noo0F4U7K9W4JM31NRJ"
        manage_template => false
        index => "rrc_ue_statistics"
        ssl  => true
        cacert => "/etc/es2/ca.crt"
        document_id => "%{[@metadata][_id]}"
        ilm_enabled => false
      }
      stdout { codec => rubydebug { metadata => true } }
    }
kind: ConfigMap
metadata:
  name: logstash-config2all
  namespace: elastic-system-bak
  resourceVersion: '47456208'
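
Apply the ConfigMap and restart the Logstash Deployment so the new pipeline is picked up; the file name and Deployment name below are assumptions for this environment:

kubectl apply -f logstash-config2all.yaml
kubectl -n elastic-system-bak rollout restart deployment logstash-deployment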

Operation log

helm install elasticsearch \
  -n logging-system \
  --set name="logging" \
  --set global.coordinating.name="coordinating-only" \
  --set global.storageClass="cephfs-rook-pv" \
  --set security.enabled=true \
  --set security.tls.autoGenerated=true \
  --set security.elasticPassword="PASSWORD" \
  -f ./values.yaml ./elasticsearch
PUT /phy/_settings?pretty
{
  "settings": {
    "index.blocks.write": true
  }
}
# Perform the split
POST /phy/_split/phy_back?pretty
{
  "settings": {
    "index.number_of_shards": 10,
    "index.number_of_replicas": 0
  }
}

PUT /remote_statistics/_settings
{
  "settings": {
    "index.blocks.write": null
  }
}
GET /remote_statistics/_search
{
  "query": {
    "match_all": {}
  }
}
POST /remote_statistics/_forcemerge
PUT _snapshot/my_backup1
{
  "type": "s3",
  "settings": {
    "bucket": "esbackup1",
    "protocol": "http",
    "disable_chunked_encoding": "true",
    "endpoint": "10.246.131.15:32000",
    "client": "default"
  }
}
GET _snapshot/my_backup1/_all?pretty=true
POST /_nodes/reload_secure_settings
GET /_cat/shards?v&h=index,shard,docs,store
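
# (Not in the original log) The snapshot restored below would have been created
# on the source cluster beforehand, roughly like this:
PUT _snapshot/my_backup1/snapshot_dwd?wait_for_completion=false
{
  "indices": "dwd_remote_statics"
}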
POST _snapshot/my_backup1/snapshot_dwd/_restore
{
  "indices": "dwd_remote_statics",
  "index_settings": {
    "index.number_of_replicas": 0  // number of replicas
  },
  "rename_pattern": "dwd_remote_statics",
  "rename_replacement": "dwd_remote_statics_s3"
}

GET _snapshot/my_backup/_all?pretty=true

PUT /dwd_remote_statics_s3/_settings?pretty
{
  "settings": {
    "index.blocks.write": true
  }
}
POST /dwd_remote_statics_s3/_split/dwd_remote_statics_s2?pretty
{
  "settings": {
    "index.number_of_shards": 5,
    "index.number_of_replicas": 0
  }
}
POST /dwd_remote_statics_s2/_forcemerge
POST /dwd_remote_statics_s2/_split/dwd_remote_statics?pretty
{
  "settings": {
    "index.number_of_shards": 10,
    "index.number_of_replicas": 0
  }
}
# Delete the restored intermediate indices
DELETE /dwd_remote_statics_s2
DELETE /dwd_remote_statics_s3
# Force merge to compact segments
POST /phy_back/_forcemerge


PUT _snapshot/my_backup2
{
  "type": "s3",
  "settings": {
    "bucket": "esbackup2",
    "protocol": "http",
    "disable_chunked_encoding": "true",
    "endpoint": "10.246.131.15:32000",
    "client": "default"
  }
}
POST _snapshot/my_backup2/snapshot_shenyu/_restore
GET /_recovery/
GET /_cat/shards?v&h=index,shard,docs,store


PUT phy
{
  "settings": {
    "index": {
      "refresh_interval": "40s",
      "number_of_shards": "30",
      "translog": {
        "flush_threshold_size": "1024mb",
        "sync_interval": "120s",
        "durability": "async"
      },
      "number_of_replicas": "0",
      "merge": {
        "scheduler": {
          "max_thread_count": "1"
        }
      }
    }
  },
  "mappings": {
        "properties": {
        "_class": {
          "type": "keyword",
          "index": false,
          "doc_values": false
        },
        "createAt": {
          "type": "long"
        },
        "ddm_conn": {
          "type": "long"
        },
        "ddm_id": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "dis_ok": {
          "type": "long"
        },
        "dvb_enable": {
          "type": "long"
        },
        "dvb_sn_err": {
          "type": "long"
        },
        "full": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "line_board_config_software_type": {
          "type": "long"
        },
        "line_board_dvb_bandwidth_hz": {
          "type": "long"
        },
        "line_board_dvb_freq_hz": {
          "type": "long"
        },
        "line_board_dvb_speed_bps": {
          "type": "long"
        },
        "line_board_heartbeat": {
          "type": "long"
        },
        "line_board_local_ip": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "line_board_ul_bandwidth_hz": {
          "type": "long"
        },
        "line_board_ul_freq_hz": {
          "type": "long"
        },
        "line_board_ul_speed_bps": {
          "type": "long"
        },
        "mdb": {
          "type": "long"
        },
        "more_inc": {
          "type": "long"
        },
        "netId": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "reportTime": {
          "type": "long"
        },
        "status": {
          "type": "long"
        },
        "ul_crc_err": {
          "type": "long"
        },
        "version": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        }
      }
  },
  "aliases": {}
}