Managing High Cardinality
This document explains how Last9 provides visibility, superior defaults, and control levers to tame high cardinality.
Last9 Limits
| Quota Type | Limit | Reset Period | Possible Actions |
| --- | --- | --- | --- |
| **Writes** | | | |
| Per Time Series Cardinality | 1M | Per Hour | Can be raised on request |
| Per Time Series Cardinality | 20M | Per Day | Can be raised on request |
| Streaming Aggregation Cardinality | 3M | Per Hour | Can be raised on request |
| Ingestion Concurrency | 20K | Per Second | Can be raised on request |
| Number of Metrics Aggregated in One Pipeline | 1 Metric | Per Query | Cannot be changed for now |
| **Reads** | | | |
| Time Series Scanned Per Query (Blaze Tier) | 5M | Per Query | Cannot be changed |
| Time Series Scanned Per Query (Hot Tier) | 10M | Per Query | Cannot be changed |
| Samples Scanned Per Query | 100M | Per Query | Cannot be changed |
| Query Time Range | 35 Days | Per Query | Can be raised on request |
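To see how a metric measures against these limits, you can count its active series with standard PromQL. The queries below are a minimal sketch; the metric name `http_requests_total` and the label `pod` are placeholders, not metrics Last9 ships by default.

```promql
# Number of distinct time series for a metric
# (compare against the per-metric cardinality limits above).
count(http_requests_total)

# Number of distinct values of a single label, useful for spotting
# which label is driving the series count.
count(count by (pod) (http_requests_total))
```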
Even though Last9's ingest pipeline can scale to very high cardinality, large per-metric cardinalities adversely affect read response times.
Beyond a daily cardinality of 3M time series per metric, query response times for that metric start to degrade.
We recommend keeping daily per-metric cardinality within this limit by using Last9's streaming aggregation pipeline, as sketched below.
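As an illustration, a streaming aggregation can pre-compute a lower-cardinality series by dropping the labels that inflate the series count. The expression below is a generic PromQL sketch, not Last9's exact aggregation rule syntax; the metric and label names are hypothetical.

```promql
# Aggregate away high-cardinality labels (here: pod, instance) so the
# resulting series count is bounded by the remaining label combinations.
sum without (pod, instance) (http_requests_total)
```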
Read more about using the Cardinality Explorer to identify impacted metrics and labels, and how to use PromQL-powered Streaming Aggregations to reduce high-cardinality metrics data.
Troubleshooting
Please get in touch with us on Discord or via email if you have any questions.