Have time-series indices in Elasticsearch? This is the tool for you!
Install dependencies:
pip install -r requirements.txt
See python curator.py --help for usage specifics.
The default values are:
--host localhost
--port 9200
-t (or --timeout) 30
-C (or --curation-style) time
-T (or --time-unit) days
-p (or --prefix) logstash-
-s (or --separator) .
--max_num_segments 2
If your values match these defaults, you do not need to include them. The prefix
should be everything in the index name before the date string.
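For example, if your indices follow a hypothetical naming scheme such as my-logs-2014.05.01, the prefix is my-logs- and you would pass it explicitly:
python curator.py --host my-elasticsearch -p my-logs- -d 30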
Close indices older than 14 days and delete indices older than 30 days (see elastic#1):
python curator.py --host my-elasticsearch -d 30 -c 14
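To confirm which indices remain afterward, you can list them directly from Elasticsearch (the _cat API used here requires Elasticsearch 1.0 or later):
curl 'http://my-elasticsearch:9200/_cat/indices?v'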
Keep 14 days of logs in Elasticsearch:
python curator.py --host my-elasticsearch -d 14
Disable the bloom filter for indices older than 2 days, close indices older than 14 days, and delete indices older than 30 days:
python curator.py --host my-elasticsearch -b 2 -c 14 -d 30
Optimize (Lucene forceMerge) indices older than 2 days to 1 segment per shard:
python curator.py --host my-elasticsearch -t 3600 -o 2 --max_num_segments 1
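To verify the result, you can inspect per-shard segment counts for an index (the index name below is only an example; _cat/segments also requires Elasticsearch 1.0 or later):
curl 'http://my-elasticsearch:9200/_cat/segments/logstash-2014.05.01?v'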
Keep 1 TB of data in Elasticsearch and show debug output:
python curator.py --host my-elasticsearch -C space -g 1024 -D
Dry run of the above:
python curator.py --host my-elasticsearch -C space -g 1024 -D -n
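Before settling on a space threshold, it can help to see how much disk each index currently uses (again assuming Elasticsearch 1.0 or later for the _cat API):
curl 'http://my-elasticsearch:9200/_cat/indices?v&h=index,store.size'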
If you need to close and delete based on different criteria, please use separate command lines, e.g.:
python curator.py --host my-elasticsearch -C space -g 1024
python curator.py --host my-elasticsearch -c 15
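For example, you might schedule the two runs separately with cron (the schedule and script path here are purely illustrative):
# crontab entries: space-based deletion nightly at 00:30, closing old indices at 01:00
30 0 * * * python /path/to/curator.py --host my-elasticsearch -C space -g 1024
0 1 * * * python /path/to/curator.py --host my-elasticsearch -c 15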
When using optimize, the current behavior is to wait until the optimize operation is complete before continuing. With large indices, this can exceed the default timeout of 30 seconds. It is recommended that you increase the timeout (-t) to at least 3600 seconds, if not more.
To contribute:
- fork the repo
- make changes in your fork
- send a pull request!