Problem with scraping metrics from prometheus sink. #1004
Replies: 2 comments 3 replies
Hello. I do not see any bug here. The metric interval is 60 seconds and the scrape interval is 14 seconds. What are you supposed to have in between? :-) You cannot send the same measurement again; that would cause an error in Prometheus. It is easy to solve, though: just increase the aggregation interval in your dashboards and you will not notice the gaps anymore.
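For illustration (not part of the original comment): when a series only receives a sample about once a minute, a range function over a window at least as long as the metric interval carries the last written value forward and hides the gaps. The 2m window below is an assumption; adjust it to your actual metric interval. `last_over_time` is available in recent Prometheus versions and in VictoriaMetrics MetricsQL.

```promql
# Show the most recent sample seen within the last 2 minutes instead of the raw series.
last_over_time(pgwatch_db_stats_numbackends[2m])
```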
I do not expect to have anything in between the metric intervals. I just expect to have values in "/metrics" every minute, but as you can see in the log of my "manual scrape" attempts, there are many intervals that do not contain the metric at all.
I use pgwatch in Docker with the Prometheus sink.
In Grafana I see gaps in the metrics; I will illustrate it with one metric, pgwatch_db_stats_numbackends, and one source database.
In the pgwatch logs I can see that the metric is collected every 60 seconds, as configured.
But in Grafana I see this image:

Our VictoriaMetrics scrapes metrics every 15 seconds. I tried checking what is available on the /metrics endpoint at the same interval: I ran the following script, which saves our metric for our source to a file every 15 seconds, over the same period of time as covered by the logs and the Grafana screenshot.
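A minimal sketch of such a check (the original script is not reproduced in this thread); the sink URL, port, and output file name are assumptions and need to be adjusted to your setup:

```python
# Poll the pgwatch Prometheus sink every 15 seconds and record whether the
# metric is present. Stop with Ctrl-C.
import time
import urllib.request

SINK_URL = "http://localhost:9187/metrics"  # assumed sink address, adjust as needed
METRIC = "pgwatch_db_stats_numbackends"     # metric from the screenshot
OUT_FILE = "scrape_check.log"               # arbitrary output file

with open(OUT_FILE, "a") as out:
    while True:
        ts = time.strftime("%Y-%m-%d %H:%M:%S")
        try:
            body = urllib.request.urlopen(SINK_URL, timeout=5).read().decode()
            # Keep only the sample lines for the metric we care about (may be empty).
            lines = [l for l in body.splitlines() if l.startswith(METRIC)]
            out.write(f"{ts} {lines or 'NO METRIC'}\n")
        except OSError as exc:
            out.write(f"{ts} ERROR: {exc}\n")
        out.flush()
        time.sleep(15)
```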
The script's output shows the same result as Grafana: most of the time the metric is not present on the /metrics endpoint (cache).
I hid part of the empty iterations in the log to keep it readable.
So, this is how I see the problem:
I am not sure whether this is a bug; maybe our configuration is wrong, so I need help with it.