Columns: question (string, lengths 11–28.2k), answer (string, lengths 26–27.7k), tag (string, 130 classes), question_id (int64, 935–78.4M), score (int64, 10–5.49k)
I'm trying to figure out how to add a label to a Prometheus collector. Any ideas what I'm missing here? I have two files: main.go and collector.go. I used the following link as a guide: https://rsmitty.github.io/Prometheus-Exporters/

I mocked up this example so I could post it here. I'm ultimately not going to pull "date +%s" for the command; I just can't figure out where to add labels. For the label I'm trying to add a hostname, so I have a result like:

```
# HELP cmd_result Shows the cmd result
# TYPE cmd_result gauge
cmd_result{host="my_device_hostname"} 1.919256141363144e-76
```

I'm also really new to Golang, so there is a good chance I'm going about this all wrong! I'm ultimately trying to get Prometheus to pull the cmd result on each scrape.

main.go

```go
package main

import (
	"net/http"

	log "github.com/Sirupsen/logrus"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Create a new instance of the collector and
	// register it with the prometheus client.
	cmd := newCollector()
	prometheus.MustRegister(cmd)

	// This section will start the HTTP server and expose
	// any metrics on the /metrics endpoint.
	http.Handle("/metrics", promhttp.Handler())
	log.Info("Beginning to serve on port :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

collector.go

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
	"os/exec"
	"strings"

	"github.com/prometheus/client_golang/prometheus"
)

type cmdCollector struct {
	cmdMetric *prometheus.Desc
}

func newCollector() *cmdCollector {
	return &cmdCollector{
		cmdMetric: prometheus.NewDesc("cmd_result",
			"Shows the cmd result",
			nil, nil,
		),
	}
}

func (collector *cmdCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- collector.cmdMetric
}

func (collector *cmdCollector) Collect(ch chan<- prometheus.Metric) {
	var metricValue float64
	command := string("date +%s")
	cmdResult := exeCmd(command)
	metricValue = cmdResult
	ch <- prometheus.MustNewConstMetric(collector.cmdMetric, prometheus.GaugeValue, metricValue)
}

func exeCmd(cmd string) float64 {
	parts := strings.Fields(cmd)
	out, err := exec.Command(parts[0], parts[1]).Output()
	if err != nil {
		fmt.Println("error occurred")
		fmt.Printf("%s", err)
	}
	cmdProcessResult := Float64frombytes(out)
	return cmdProcessResult
}

func Float64frombytes(bytes []byte) float64 {
	bits := binary.LittleEndian.Uint64(bytes)
	float := math.Float64frombits(bits)
	return float
}
```
I figured it out. I had to declare the label where I was calling the NewDesc method and then pass the value within the MustNewConstMetric method.

Here is my new newCollector with the "hostname" label:

```go
func newCollector() *cmdCollector {
	return &cmdCollector{
		cmdMetric: prometheus.NewDesc("cmd_result",
			"Shows the cmd result",
			[]string{"hostname"}, nil,
		),
	}
}
```

It's worth noting that I'm only adding "variable labels" here. That last nil is for constant labels. You can add any number of items like so:

```go
[]string{"hostname", "another_label", "and_another_label"}
```

This is covered here: https://godoc.org/github.com/prometheus/client_golang/prometheus#NewDesc

Next I can add those values when calling the MustNewConstMetric method:

```go
ch <- prometheus.MustNewConstMetric(collector.cmdMetric, prometheus.GaugeValue, metricValue, hostname)
```

The whole block:

```go
func (collector *cmdCollector) Collect(ch chan<- prometheus.Metric) {
	var metricValue float64
	command := string("date +%s")
	cmdResult := exeCmd(command)
	metricValue = cmdResult
	ch <- prometheus.MustNewConstMetric(collector.cmdMetric, prometheus.GaugeValue, metricValue, hostname)
}
```

If I were passing in multiple labels, such as in my example above, it would look more like this:

```go
ch <- prometheus.MustNewConstMetric(collector.cmdMetric, prometheus.GaugeValue, metricValue, hostname, anotherLabel, andAnotherLabel)
```

This is covered here: https://godoc.org/github.com/prometheus/client_golang/prometheus#MustNewConstMetric
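As an aside (not part of the accepted fix): the strange value like 1.919256141363144e-76 in the question comes from reinterpreting the ASCII output of `date +%s` as raw IEEE-754 bytes in Float64frombytes. A small sketch using only the standard library, assuming the command prints a decimal number followed by a newline, shows the usual way to parse such output numerically:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCmdOutput converts textual command output like "1700000000\n"
// into a float64, instead of reinterpreting its raw bytes as a float.
func parseCmdOutput(out []byte) (float64, error) {
	return strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
}

func main() {
	// Simulated output of `date +%s` (a hypothetical Unix timestamp).
	v, err := parseCmdOutput([]byte("1700000000\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // prints 1.7e+09
}
```

With this in place, the exported gauge carries the actual timestamp rather than byte noise.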
Prometheus
53,017,421
10
In Grafana I have a drop-down for the variable $topic with values "topic_A" and "topic_B". "topic_A" is selected, so $topic = "topic_A".

I want to query Prometheus using function{topic=$topic} and that works fine.

How would I implement function{topic="$topic" + "_ERROR"} (this fails), where what I want to query would be "topic_A_ERROR" if "topic_A" is selected?

How do I combine the variable $topic and the string "_ERROR" in the query?
UPDATE 2020-08-17: There is a new syntax for Grafana variables; the new format uses curly braces after the dollar sign:

```
function{topic=~"${topic}_ERROR"}
```

The double-bracket syntax is deprecated and will be removed soon. You can now also define the format of the variable, which may help solve some special-character issues. Example: ${topic:raw}

Docs: https://grafana.com/docs/grafana/latest/variables/syntax/

Original answer: if you want to include text in the middle, you need to use a different syntax:

```
function{topic=~"[[topic]]_ERROR"}
```

Note not only the double brackets but also the change from = to =~. It is documented in the link at the end of my answer, which basically says:

> When the Multi-value or Include all value options are enabled, Grafana converts the labels from plain text to a regex compatible string.

This means you have to use =~ instead of =. You can check the official explanation here: https://grafana.com/docs/grafana/latest/features/datasources/prometheus/#using-variables-in-queries
Prometheus
59,792,809
10
Prometheus allows me to dynamically load targets with file_sd_config from a .json file like this:

```yaml
# prometheus.yaml
- job_name: 'kube-metrics'
  file_sd_configs:
    - files:
        - 'targets.json'
```

```json
[
  {
    "labels": {
      "job": "kube-metrics"
    },
    "targets": [
      "http://node1:8080",
      "http://node2:8080"
    ]
  }
]
```

However, my targets differ in the metrics_path and not the host (I want to scrape metrics for every Kubernetes node on <kube-api-server>/api/v1/nodes/<node-name>/proxy/metrics/cadvisor), but I can only set the metrics_path at the job level and not per target. Is this even achievable with Prometheus, or do I have to write my own code to scrape all these metrics and export them at a single target? Also, I couldn't find a list of all supported auto-discovery mechanisms; did I miss something in the docs?
You can use relabel_configs in the Prometheus config to change the __metrics_path__ label. The principle is to provide the metrics path in your targets in the form host:port/path/of/metrics (note: drop the http://; it belongs in the scheme parameter of the scrape config):

```json
[
  {
    "targets": [
      "node1:8080/first-metrics",
      "node2:8080/second-metrics"
    ]
  }
]
```

And then replace the related meta-labels with the parts:

```yaml
- job_name: 'kube-metrics'
  file_sd_configs:
    - files:
        - 'targets.json'
  relabel_configs:
    - source_labels: [__address__]
      regex: '[^/]+(/.*)'            # capture '/...' part
      target_label: __metrics_path__ # change metrics path
    - source_labels: [__address__]
      regex: '([^/]+)/.*'            # capture host:port
      target_label: __address__      # change target
```

You can reuse this method on any label known at configuration time to modify the config of the scrape.

On Prometheus, use the service discovery page to check that your config has been correctly modified.

The official list of service discovery mechanisms is in the configuration documentation: look for the *_sd_config entries in the index.
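The two capture regexes above can be sanity-checked outside Prometheus. A quick Go sketch (standard regexp package only, with a hypothetical target string) shows how `[^/]+(/.*)` and `([^/]+)/.*` split the combined address:

```go
package main

import (
	"fmt"
	"regexp"
)

// The same regexes as in the relabel_configs above.
var (
	pathRe = regexp.MustCompile(`[^/]+(/.*)`) // capture the '/...' part
	addrRe = regexp.MustCompile(`([^/]+)/.*`) // capture host:port
)

// splitTarget mimics what the two relabel rules do to __address__:
// the first capture becomes __metrics_path__, the second __address__.
func splitTarget(address string) (hostPort, metricsPath string) {
	return addrRe.FindStringSubmatch(address)[1], pathRe.FindStringSubmatch(address)[1]
}

func main() {
	host, path := splitTarget("node1:8080/first-metrics") // hypothetical target
	fmt.Println(host) // node1:8080
	fmt.Println(path) // /first-metrics
}
```

This is only a demonstration of the regex behavior; Prometheus itself applies these rules during relabeling, before the scrape.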
Prometheus
59,866,342
10
I am trying to create a table/chart in Grafana showing the total number of unique users who have logged in to a given application over a given time range (e.g. last 24 hours).

I have a metric, app_request_count, which records the number of requests hitting a specific path per minute:

```
app_request_count{app="my-app", path="/login"}
```

This gives me the following:

```
app_request_count{app="my-app",path="/login",status="200",username="username1"}
app_request_count{app="my-app",path="/login",status="200",username="username2"}
```

Now I want to count the number of unique usernames, so I run:

```
count_values("username", app_request_count{app="my_app", path="/login"})
```

and I get:

```
{username="0"}
{username="1"}
{username="2"}
{username="3"}
{username="4"}
{username="5"}
```

What am I missing / what am I doing wrong? Ideally I'd like to get a single scalar value that displays the total number of unique usernames who have logged in in the past 24 hours. Many thanks.
```
count without (username)(app_request_count)
```

count_values is for metric values; count is for time series.

It's also not advised to have something like usernames as label values, as they tend to be high cardinality. They may also be PII, which could have legal implications.
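For the 24-hour window specifically, one common pattern (a sketch only; the exact form depends on how the metric is exposed) is to count the distinct username series seen anywhere in the range, using max_over_time to pull in series that existed at any point during the window:

```
count(
  count by (username) (
    max_over_time(app_request_count{app="my-app", path="/login"}[24h])
  )
)
```

The inner aggregation collapses each username to one series; the outer count yields the single scalar-like value the question asks for.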
Prometheus
59,935,902
10
I use Micrometer gauges in a Spring Boot 2 application to track the statuses of objects. On status change, the statusArrived() method is called. This method should update the gauge related to that object. Here is my current implementation:

```java
public class PrometheusStatusLogger {

    private int currentStatus;

    public void statusArrived(String id, int value) {
        currentStatus = value;
        Tags tags = Tags.of("product_id", id);
        Gauge.builder("product_status", this::returnStatus)
             .tags(tags)
             .strongReference(true)
             .register(Metrics.globalRegistry);
    }

    private int returnStatus() {
        return currentStatus;
    }
}
```

This works quite well, but the problem is that when this method is called, all gauge values are updated. I would like only the gauge with the given product_id to be updated.

Input:

```java
statusArrived(1, 2);
statusArrived(2, 3);
```

Current output:

```
product_status{product_id=1} 3
product_status{product_id=2} 3
```

All gauges are updated.

Desired output:

```
product_status{product_id=1} 2
product_status{product_id=2} 3
```

Only the gauge with the given product_id tag is updated. How can I achieve that?
Since all your gauges reference the same currentStatus field, when a new value comes in, every gauge's source changes. Instead, use a map to track the current status of each id:

```java
public class PrometheusStatusLogger {

    private Map<String, Integer> currentStatuses = new HashMap<>();

    public void statusArrived(String id, int value) {
        // Register the gauge only once per id; its value function
        // reads this id's entry from the map on every scrape.
        if (!currentStatuses.containsKey(id)) {
            Tags tags = Tags.of("product_id", id);
            Gauge.builder("product_status", currentStatuses, map -> map.get(id))
                 .tags(tags)
                 .register(Metrics.globalRegistry);
        }
        currentStatuses.put(id, value);
    }
}
```
Prometheus
60,171,522
10
I would like a Grafana variable that contains all the Prometheus metric names with a given prefix. I would like to do this so I can control which graphs are displayed with a drop-down menu. I'd like to be able to display all the metrics matching the prefix without having to create a query for each one.

In the Grafana documentation under the Prometheus data source I see:

> metrics(metric) Returns a list of metrics matching the specified metric regex.

-- Using Prometheus in Grafana

I tried creating a variable in Grafana using this metrics function, but it didn't work. See the screenshot for the variable settings I have.

As you can see, the "Preview of values" only shows "None".
In PromQL, you can select metrics by name using the internal __name__ label:

```
{__name__=~"mysql_.*"}
```

You can then reuse this to extract the metric names using the query function label_values():

```
label_values({__name__=~"mysql_.*"}, __name__)
```

This will populate your variable with metric names starting with mysql_.

You should get the same result using metrics(); I don't know why it doesn't work for you (it should also work with a prefix):

```
metrics(mysql_)
```
Prometheus
60,874,653
10
Let's take this processor as an example: a CPU with 2 cores and 4 threads (2 threads per core). From what I've read, such a CPU has 2 physical cores but can process 4 threads simultaneously through hyper-threading. But in reality, one physical core can only truly run one thread at a time; with hyper-threading, the CPU exploits idle stages in the pipeline to process another thread.

Now, here is Kubernetes with Prometheus and Grafana and their CPU resource unit of measurement: the millicore/millicpu. So they virtually slice a core into 1000 millicores. Taking hyper-threading into account, I can't understand how they calculate those millicores under the hood.

How can a process, for example, use 100 millicores (a 10th of a core)? How is this technically possible?

PS: I accidentally found a really descriptive explanation here: Multi threading with Millicores in Kubernetes
This gets very complicated. Kubernetes doesn't actually manage this; it just provides a layer on top of the underlying container runtime (Docker, containerd, etc.). When you configure a container to use 100 millicores, Kubernetes hands that down to the underlying container runtime, and the runtime deals with it.

Once you start going to this level, you have to look at the Linux kernel and how it does CPU scheduling with cgroups, which becomes incredibly interesting and complicated.

In a nutshell, though: Linux CFS bandwidth control is what manages how much CPU a process (container) can use. By setting the quota and period parameters for the scheduler, you control how much CPU is used, by controlling how long a process can run before being paused and how often it runs.

As you correctly identify, you can't use only a 10th of a core. But you can use a 10th of the time, and by doing that you use only a 10th of the core over time. For example:

- If I set quota to 250ms and period to 250ms, that tells the kernel that this cgroup can use 250ms of CPU cycle time every 250ms, which means it can use 100% of one CPU.
- If I set quota to 500ms and keep the period at 250ms, that tells the kernel that this cgroup can use 500ms of CPU cycle time every 250ms, which means it can use 200% of the CPU (2 cores).
- If I set quota to 125ms and keep the period at 250ms, that tells the kernel that this cgroup can use 125ms of CPU cycle time every 250ms, which means it can use 50% of the CPU.

This is a very brief explanation. Here is some further reading:

https://blog.krybot.com/a?ID=00750-cfae57ed-c7dd-45a2-9dfa-09d42b7bd2d7
https://www.kernel.org/doc/html/latest/scheduler/sched-bwc.html
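The millicore-to-CFS translation is just arithmetic. A small Go sketch of the conversion, assuming the common 100ms default CFS period (the quota and period values themselves are illustrative, not read from a real cgroup):

```go
package main

import "fmt"

// cfsQuotaMicros converts a CPU limit in millicores into a CFS quota in
// microseconds for a given period: quota = millicores/1000 * period.
// 1000 millicores = one full core's worth of runtime per period.
func cfsQuotaMicros(millicores, periodMicros int64) int64 {
	return millicores * periodMicros / 1000
}

func main() {
	const period = 100000 // assumed 100ms default CFS period, in microseconds

	fmt.Println(cfsQuotaMicros(100, period))  // 10000  -> 10% of one core
	fmt.Println(cfsQuotaMicros(1000, period)) // 100000 -> one full core
	fmt.Println(cfsQuotaMicros(2000, period)) // 200000 -> two cores' worth
}
```

So a 100-millicore container is allowed roughly 10ms of runtime per 100ms window, which is exactly the "10th of the time" described above.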
Prometheus
71,944,390
10