A different and often better way to downsample your Prometheus metrics

  • At some point somebody "invents" circular buffers with multiple data resolutions, i.e. what RRDtool already was, and maybe we get compact and fast time series storage and reporting again.
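
    A toy sketch of that idea in plain Python, purely illustrative (the class and parameter names are made up here, not anything RRDtool or Promscale actually ship): one fixed-size ring per resolution, with every N samples from a finer ring averaged into the next coarser one, so old data survives at progressively lower detail.

      from collections import deque

      class RoundRobinArchive:
          """One fixed-size ring per resolution, RRDtool-style (illustrative only)."""

          def __init__(self, ring_sizes=(60, 60, 24), rollup_factor=10):
              # e.g. 60 raw samples, 60 averages-of-10, 24 averages-of-100
              self.rings = [deque(maxlen=size) for size in ring_sizes]
              self.factor = rollup_factor
              self.pending = [[] for _ in ring_sizes]  # samples awaiting roll-up

          def insert(self, value, level=0):
              self.rings[level].append(value)
              if level + 1 >= len(self.rings):
                  return
              self.pending[level].append(value)
              if len(self.pending[level]) == self.factor:
                  rolled = sum(self.pending[level]) / self.factor
                  self.pending[level].clear()
                  self.insert(rolled, level + 1)

      # A day of per-minute samples: the raw ring only keeps the last hour,
      # but the coarser rings still cover the whole day at lower resolution.
      rra = RoundRobinArchive()
      for minute in range(24 * 60):
          rra.insert(float(minute % 100))
      print([len(r) for r in rra.rings])  # -> [60, 60, 14]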

  • I'm curious how this can both avoid the average-of-averages problem (presumably by using the original full-rate data to compute multiple aggregates) and also support backfilling. Is there a danger of the full-rate data expiring and backfills behaving differently past that horizon? Or am I wholly misunderstanding both these features?
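
    For anyone else wondering what the average-of-averages problem refers to, here's the pitfall in miniature (plain Python, nothing from Promscale itself): averaging per-bucket averages goes wrong as soon as buckets hold different numbers of samples, whereas carrying (sum, count) per bucket, or recomputing from the full-rate data, stays exact.

      bucket_a = [1.0] * 9   # 9 samples at 1.0
      bucket_b = [10.0]      # 1 sample at 10.0

      true_avg = sum(bucket_a + bucket_b) / (len(bucket_a) + len(bucket_b))  # 1.9
      avg_of_avgs = (sum(bucket_a) / len(bucket_a)
                     + sum(bucket_b) / len(bucket_b)) / 2                    # 5.5

      # Storing (sum, count) per bucket lets a later roll-up recover the
      # exact mean, no matter how the buckets are sized:
      rollups = [(sum(b), len(b)) for b in (bucket_a, bucket_b)]
      recovered = sum(s for s, _ in rollups) / sum(c for _, c in rollups)    # 1.9

      print(true_avg, avg_of_avgs, recovered)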

  • Just one more note. Timescale is hiring, including for roles working on Promscale.

    https://www.timescale.com/careers

    Promscale roles are listed in the "Observability" section.

  • Congrats to Timescale on being #1 on the front page 3 days in a row!

  • This sounds awesome! But is it the right approach if I'm just running a simple Prometheus instance on my home NAS? I've wondered for a while how I can persist my Prometheus timeseries; I guess I could use Promscale for this, but maybe it's overkill for something this simple. Advice appreciated :)

  • Years ago I had a Graphite installation where I configured retention policies, and I did the same for InfluxDB, if memory serves.

    The downsampling feature at first glance seems to serve a different use case than the one Prometheus was built for, which I think is observability and alerting over a relatively short time period. It totally makes sense for systems that need to work with years of data, but I don't think Prometheus is used in those cases.

    Since this feature has been built for a reason, however, I could be wrong.