OpenData Timeseries: Prometheus-compatible metrics on object storage

apurvamehta 13 points 15 comments April 16, 2026
www.opendata.dev

Discussion Highlights (5 comments)

mdwaud

The "why should I care" is about 3/4 of the way down the page: > None of these numbers are exact, but the structural gap is clear: a handful of nodes costing roughly $560/month versus $10,000-20,000/month for a managed service at the same scale. As we explained earlier, it’s practical to operate OpenData Timeseries yourself and fully realize these massive cost savings since it isn’t a traditional distributed database that manages partitioned and replicated state. It doesn't look 100% turn-key, but those are compelling numbers.

davistreybig

Wow this is so, so much cheaper than alternatives

hagen1778

Comparing self-hosted prices with managed solutions isn't exactly apples to apples. But if you do compare, VictoriaMetrics Cloud for 3M active series and twice the ingestion rate (100K samples/s, i.e. a 30s scrape interval) will cost you ~$1k/month plus storage costs. See https://victoriametrics.cloud/#estimate-cost
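The figures in this comment follow from standard Prometheus scrape arithmetic: at steady state, every active series produces one sample per scrape interval. A quick sketch of that relationship (the function name is mine, not from the comment):

```python
def ingest_rate(active_series: int, scrape_interval_s: float) -> float:
    """Steady-state ingestion rate in samples per second:
    each active series contributes one sample per scrape interval."""
    return active_series / scrape_interval_s

# 3M active series scraped every 30s -> 100K samples/s, matching the comment.
rate = ingest_rate(3_000_000, 30)
print(f"{rate:,.0f} samples/s")  # 100,000 samples/s
```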

valyala

Interesting solution! According to the numbers in the "query latency" section, a query over cold data that selects samples for 497 time series over a 6-hour range takes 15 seconds when the queried data isn't in the cache. This means typical queries over historical data will take an eternity to execute ;(

hagen1778

I am curious to see more tests on the read path. The article mentions matching 500 series over a 6h window with a 1m step - and it takes 2s with warmed caches. That doesn't sound good at all, especially nowadays, when k8s metrics ramp churn rates up to hundreds of thousands or millions of series.
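To put the numbers in this comment in perspective: a range query returns one point per series per step, so the workload being described is small. A back-of-envelope sketch (function name is mine):

```python
def range_query_points(series: int, window_s: int, step_s: int) -> int:
    """Points returned by a range query: one point per series per step."""
    return series * (window_s // step_s)

# 500 series over a 6h window at a 1m step -> 180,000 points.
pts = range_query_points(500, 6 * 3600, 60)
print(pts)  # 180000
```

At 2 s warm-cache latency that works out to roughly 90,000 points per second, which is the throughput the commenter finds underwhelming.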
