This repository has been archived by the owner on Feb 9, 2022. It is now read-only.

prometheus: switch from time-based to size-based retention #452

Open
anguslees opened this issue Mar 20, 2019 · 4 comments
Labels
enhancement (New feature or request) · good first issue (Good for newcomers)

Comments

@anguslees
Contributor

We should switch to a size-based retention policy (--storage.tsdb.retention.size), since that most accurately reflects our PVC storage constraints.
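For illustration, a minimal sketch of what the switch might look like in the Prometheus server container args. The retention flags are the upstream Prometheus ones, but the surrounding Deployment fields and the 7GB figure are placeholders, not values taken from this repo:

```yaml
# Hypothetical excerpt from a Prometheus server Deployment spec.
# Swaps the time-based flag for the size-based one; 7GB is an assumed
# value chosen to sit below an 8Gi PVC, not a tested default.
containers:
  - name: prometheus
    image: prom/prometheus
    args:
      - --config.file=/etc/prometheus/prometheus.yml
      - --storage.tsdb.path=/prometheus
      # was: --storage.tsdb.retention=183d
      - --storage.tsdb.retention.size=7GB
```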

@anguslees added the enhancement and good first issue labels on Mar 20, 2019
@EamonKeane

EamonKeane commented Jun 4, 2019

Agreed, --storage.tsdb.retention=183d ends up overloading the default 8Gi PV within 90 days depending on the cluster.

@0xshipthecode

It seems that --storage.tsdb.retention.size is still marked experimental in the Prometheus docs. A second issue is that apparently the WAL size is not taken into account, so the size parameter does not actually limit the storage requirements. If both of these have been considered and this still seems like the way to go, I can add it in.

@javsalgar
Contributor

Hi,

I just saw this PR: prometheus/prometheus#5886. It seems that the WAL size might be taken into account, am I wrong?

@optimus-kart

I also had an issue in production where the Prometheus pod crashed after the storage limit was reached, even though storage.retention was configured on the server. Can anyone confirm whether this is working as expected?
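If the WAL is not counted against the retention size (as discussed above), one possible hedge, sketched below with assumed numbers, is to set the size limit comfortably below the PVC capacity so the WAL and compaction overhead have headroom. The 6GB margin against an 8Gi volume is only illustrative, and the key names are not tied to this chart's actual values schema:

```yaml
# Illustrative only: keep retention.size well below the PVC request so the
# WAL and temporary compaction blocks do not fill the volume.
args:
  - --storage.tsdb.retention.size=6GB   # below the 8Gi PVC to leave WAL headroom
  - --storage.tsdb.retention.time=30d   # optional additional time cap
```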
