I've set up an ELK stack using Logstash (on EC2) and the Amazon Elasticsearch Service. The logs come from CloudWatch. I'm using Curator 5.8.1 to clean up old indices.
The config:
---
client:
  hosts:
    - vpc-elasticsearch-xxx.eu-xxx-x.es.amazonaws.com
  port: 443
  use_ssl: True
  ssl_no_validate: False
  timeout: 300

logging:
  loglevel: DEBUG
The action.yml:
---
actions:
  1:
    action: delete_indices
    description: "Delete cloudwatch logs older than 7 days"
    options:
      timeout_override: 300
      continue_if_exception: False
      ignore_empty_list: True
      allow_ilm_indices: True
    filters:
      - filtertype: kibana
        exclude: True
      - filtertype: pattern
        kind: regex
        value: '^(cw-*).*$'
        exclude: True
      - filtertype: age
        source: creation_date
        direction: older
        unit: days
        unit_count: 7
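For completeness, this is how I invoke Curator with these two files (the name config.yml for the client config is just what I use; action.yml is as above):

curator --config config.yml action.yml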
The indices from CloudWatch appear in my cluster like this:
yellow open cw-xxx-log-2020.07.13 B4NAbdsjQxuVLw0rxxxxx 5 1 751950 0 1.3gb 1.3gb
yellow open cw-xx-xx-log-2020.07.16 YecRAK3hRGGYgwxQxxxx 5 1 584031 0 1gb 1gb
With the current configuration I want to remove them after one week, but as you can see, the indices above are still present in my cluster even though they are more than two weeks old.
What is wrong here?
Your action configuration is wrong: you have exclude: True on the pattern filtertype. This causes every index that matches the pattern to be excluded from the actionable list, which is the list of target indices that the action (in this case, the delete action) will be applied to. Since your cw-* indices match the pattern, they are excluded from the list and never deleted. Try the following config, based on the elastic example.
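Here is a sketch of the corrected action file, modeled on the delete_indices example from the Curator documentation. The pattern filter uses the simpler prefix kind instead of your regex, and exclude is left at its default of False, so the cw- indices stay in the actionable list and the age filter can then select the ones older than 7 days:

---
actions:
  1:
    action: delete_indices
    description: "Delete cloudwatch logs older than 7 days"
    options:
      timeout_override: 300
      continue_if_exception: False
      ignore_empty_list: True
      allow_ilm_indices: True
    filters:
      - filtertype: kibana
        exclude: True
      - filtertype: pattern
        kind: prefix
        value: cw-
      - filtertype: age
        source: creation_date
        direction: older
        unit: days
        unit_count: 7

You can verify which indices Curator would select, without deleting anything, by running it with the --dry-run flag first.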