Cyanite is always two datapoints behind Graphite #264
Comments
What's your metric throughput?
Around 1 million per minute, spread across 6 Cyanite nodes.
Sorry, just to make sure: that would be around 16K per second, split between 6 Cyanite nodes? Do you also see the gap growing over time?
Yes, 16K per second across 6 Cyanite nodes, just for testing purposes. The gap doesn't grow over time.
The load doesn't seem too high. My suspicion (given that Cyanite doesn't start lagging further and further behind) is that this is somehow related to the flush timer being faster than ingestion.
Well, it looks like I didn't understand how exactly Cyanite works. It buffers incoming metric values for the whole datapoint period, calculates max/mean/min/sum, and then flushes the result to Cassandra. So it is not possible to read data for the current period directly from Cyanite's cache (and that is why I always get those nulls, I think).
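To make sure I've got the model right, here it is as a minimal Python sketch (purely illustrative; Cyanite itself is written in Clojure, and names like `PeriodBuffer` and `flush_fn` are made up):

```python
import time
from collections import defaultdict

class PeriodBuffer:
    """Buffer raw points per (path, period) and flush aggregates once
    a period closes. Illustrative sketch, not Cyanite's internals."""

    def __init__(self, resolution_s=60, flush_fn=print):
        self.resolution = resolution_s
        self.flush_fn = flush_fn          # e.g. a Cassandra writer
        self.buckets = defaultdict(list)  # (path, period) -> [values]

    def ingest(self, path, value, ts=None):
        ts = time.time() if ts is None else ts
        period = int(ts) // self.resolution
        self.buckets[(path, period)].append(value)

    def flush_closed(self, now=None):
        """Write out every bucket whose period has fully elapsed; the
        current (open) period stays in memory and is invisible to
        reads that only hit Cassandra."""
        now = time.time() if now is None else now
        current = int(now) // self.resolution
        for (path, period) in list(self.buckets):
            if period < current:
                values = self.buckets.pop((path, period))
                self.flush_fn({
                    "path": path,
                    "time": period * self.resolution,
                    "min": min(values),
                    "max": max(values),
                    "mean": sum(values) / len(values),
                    "sum": sum(values),
                })
```

If that model is right, then with a 60-second rule the point for minute T is only written once minute T has closed (plus whatever the flush interval adds), so a read that only hits Cassandra can miss both the open period and the one still waiting to flush, which would look exactly like being one or two datapoints behind.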
I think I'm starting to understand. Do you have a configuration file at hand? I'm wondering what rollups you have defined. Do I understand it correctly that you would like to have a datapoint for the current period, even if it hasn't been flushed yet (i.e. is still in memory)? I'm still uncertain why it returns nulls; usually they should be filtered out, though I haven't had a chance to test yet.
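For reference, rollups are the engine rules in the YAML configuration, something along these lines (illustrative values only, not your actual setup):

```yaml
engine:
  rules:
    # pattern: [ "resolution:retention", ... ]
    default: [ "60s:1d", "5m:30d" ]
```

Each entry maps a metric-name pattern to one or more resolution:retention pairs, much like the retentions in Graphite's storage-schemas.conf.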
Hi there,
my metric relay clones traffic to both Graphite and Cyanite. Unfortunately, Cyanite is always two datapoints behind Graphite. Is there any way to speed this up?
Engine rule:
Graphite results:
Cyanite results:
and after a while:
This testing metric is sent once every minute.