Dataset schema: question (string, length 11–28.2k), answer (string, length 26–27.7k), tag (string, 130 classes), question_id (int64, 935–78.4M), score (int64, 10–5.49k).
I have the gauge metric metric_awesome from two different instances. What I want to do is subtract instance one from instance two, like so: metric_awesome{instance="one"} - metric_awesome{instance="two"}. Unfortunately the result set is empty. Has anyone experienced this?
The issue here is that the labels don't match. What you want is: metric_awesome{instance="one"} - ignoring(instance) metric_awesome{instance="two"}
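The same pattern generalizes when the series differ in more than one label: either list every mismatched label in ignoring(), or use on() to name only the labels the two sides share. A minimal sketch, assuming both series carry a common job label (hypothetical here):

  metric_awesome{instance="one"} - on(job) metric_awesome{instance="two"}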
Prometheus
45,005,524
27
Because Prometheus topk returns more results than expected, and because https://github.com/prometheus/prometheus/issues/586 requires client-side processing that has not yet been made available via https://github.com/grafana/grafana/issues/7664, I'm trying to pursue a different near-term work-around to my similar problem. In my particular case most of the metric values that I want to graph will be zero most of the time. Only when they are above zero are they interesting. I can find ways to write prometheus queries to filter data points based on the value of a label, but I haven't yet been able to find a way to tell prometheus to return time series data points only if the value of the metric meets a certain condition. In my case, I want to filter for a value greater than zero. Can I add a condition to a prometheus query that filters data points based on the metric value? If so, where can I find an example of the syntax to do that?
If you're confused by Brian's answer: the result of filtering with a comparison operator is not a boolean, but the filtered series themselves. E.g. min(flink_rocksdb_actual_delayed_write_rate > 0) will show the minimum value above 0. In case you actually want a boolean (or rather 0 or 1), use something like sum(flink_rocksdb_actual_delayed_write_rate >bool 0), which will give you the greater-than-zero count.
Prometheus
46,697,754
26
I am unable to identify the exact issue with the permissions in my setup, shown below. I've looked into all the similar Q&As but am still unable to solve the issue. The aim is to deploy Prometheus and let it scrape the /metrics endpoints that my other applications in the cluster expose fine.

  Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:default:default" cannot list resource "endpoints" in API group "" at the cluster scope"
  Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" at the cluster scope"
  Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:default:default" cannot list resource "services" in API group "" at the cluster scope"
  ...
  ...

The command below returns no for all services, nodes, pods etc.

  kubectl auth can-i get services --as=system:serviceaccount:default:default -n default

Minikube

  $ minikube start --vm-driver=virtualbox --extra-config=apiserver.Authorization.Mode=RBAC
  😄 minikube v1.14.2 on Darwin 11.2
  ✨ Using the virtualbox driver based on existing profile
  👍 Starting control plane node minikube in cluster minikube
  🔄 Restarting existing virtualbox VM for "minikube" ...
  🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
    ▪ apiserver.Authorization.Mode=RBAC
  🔎 Verifying Kubernetes components...
  🌟 Enabled addons: storage-provisioner, default-storageclass, dashboard
  🏄 Done! kubectl is now configured to use "minikube" by default

Roles

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: monitoring-cluster-role
  rules:
    - apiGroups: [""]
      resources: ["nodes", "services", "pods", "endpoints"]
      verbs: ["get", "list", "watch"]
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["get"]
    - apiGroups: ["extensions"]
      resources: ["deployments"]
      verbs: ["get", "list", "watch"]
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: monitoring-service-account
    namespace: default
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: monitoring-cluster-role-binding
  roleRef:
    kind: ClusterRole
    name: monitoring-cluster-role
    apiGroup: rbac.authorization.k8s.io
  subjects:
    - kind: ServiceAccount
      name: monitoring-service-account
      namespace: default

Prometheus

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: prometheus-config-map
    namespace: default
  data:
    prometheus.yml: |
      global:
        scrape_interval: 15s
      scrape_configs:
        - job_name: 'kubernetes-service-endpoints'
          kubernetes_sd_configs:
            - role: endpoints
          relabel_configs:
            - action: labelmap
              regex: __meta_kubernetes_service_label_(.+)
            - source_labels: [__meta_kubernetes_namespace]
              action: replace
              target_label: kubernetes_namespace
            - source_labels: [__meta_kubernetes_service_name]
              action: replace
              target_label: kubernetes_name
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: prometheus-deployment
    namespace: default
    labels:
      app: prometheus
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: prometheus
    template:
      metadata:
        labels:
          app: prometheus
      spec:
        containers:
          - name: prometheus
            image: prom/prometheus:latest
            ports:
              - name: http
                protocol: TCP
                containerPort: 9090
            volumeMounts:
              - name: config
                mountPath: /etc/prometheus/
              - name: storage
                mountPath: /prometheus/
        volumes:
          - name: config
            configMap:
              name: prometheus-config-map
          - name: storage
            emptyDir: {}
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: prometheus-service
    namespace: default
  spec:
    type: NodePort
    selector:
      app: prometheus
    ports:
      - name: http
        protocol: TCP
        port: 80
        targetPort: 9090
User "system:serviceaccount:default:default" cannot list resource "endpoints" in API group "" at the cluster scope" User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" at the cluster scope" User "system:serviceaccount:default:default" cannot list resource "services" in API group "" at the cluster scope" Something running with ServiceAccount default in namespace default is doing things it does not have permissions for. apiVersion: v1 kind: ServiceAccount metadata: name: monitoring-service-account Here you create a specific ServiceAccount. You also give it some Cluster-wide permissions. apiVersion: apps/v1 kind: Deployment metadata: name: prometheus-deployment namespace: default You run Prometheus in namespace default but do not specify a specific ServiceAccount, so it will run with ServiceAccount default. I think your problem is that you are supposed to set the ServiceAccount that you create in the Deployment-manifest for Prometheus.
Prometheus
67,151,953
25
I am trying out Prometheus on Mac OS X. I looked at the downloads page and there is no direct indication of which version is for Mac. I tried Docker to run Prometheus on Mac, but I just want to run it directly on Mac without Docker. Does anyone know which version to pick? There were a few BSDs to pick from, and I know Mac is also BSD-based, but I'm not sure which one matches, or whether it matters as long as it is BSD. Other than those binaries, I think brew install should do the job.
The downloads page has a build for Darwin on amd64. To quote the Wikipedia page: "Darwin forms the core set of components upon which macOS (previously OS X and Mac OS X), iOS, watchOS, and tvOS are based." This is the official binary for OS X. Other methods (such as brew install prometheus) are also available.
Prometheus
48,433,869
24
How do you export and import data in Prometheus? How do you make sure the data is backed up if the instance goes down? It does not seem that there is such a feature yet, so what do you do then?
There is no export and especially no import feature for Prometheus. If you need to keep data collected by Prometheus for some reason, consider using the remote write interface to write it somewhere suitable for archival, such as InfluxDB. Prometheus isn't long-term storage: if the database is lost, the user is expected to shrug, mumble "oh well", and restart Prometheus. Credits and many thanks to amorken from IRC #prometheus.
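A minimal sketch of that remote write setup in prometheus.yml, assuming an InfluxDB 1.x instance reachable at influxdb:8086 with a database named prometheus already created (both the host and database name here are placeholders):

  remote_write:
    - url: "http://influxdb:8086/api/v1/prom/write?db=prometheus"

Other remote storage adapters plug into the same remote_write block in the same way.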
Prometheus
46,348,758
24
How can I find the overall average of a metric over a time interval? avg(metric) gives the overall average value, but avg_over_time(metric[interval]) gives the average value per label set. avg(avg_over_time(metric[scrape interval])) won't be the same as avg(metric) when the data is not continuous and the denominator differs. Given that, what is a possible way to find the overall average over a time period? E.g.: find the average response time now, and find the overall average response time of all requests triggered in the last hour. The number would help flag a performance issue with the latest upgrades.
You need to calculate the average a bit more manually: sum(sum_over_time(metric[interval])) / sum(count_over_time(metric[interval])) Note that this is for data in a gauge, you'd need a different approach for data from a counter or summary.
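For the one-hour example from the question this becomes (response_time_seconds is a hypothetical gauge name):

  sum(sum_over_time(response_time_seconds[1h])) / sum(count_over_time(response_time_seconds[1h]))

This weights every sample equally regardless of how many series contributed at each instant, which is exactly what avg(avg_over_time(...)) gets wrong when series come and go.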
Prometheus
51,859,464
23
I have read that Spark does not have Prometheus as one of the pre-packaged sinks, so I found this post on how to monitor Apache Spark with Prometheus. But I found it difficult to understand and to get working, because I am a beginner and this is my first time working with Apache Spark. The first thing I do not get is what I need to do. Do I need to change metrics.properties? Should I add some code in the app, or what? I do not get the steps to make it work. What I have done so far is change the properties as in the link and add this flag: --conf spark.metrics.conf=<path_to_the_file>/metrics.properties. What else do I need to do to see metrics from Apache Spark? I also found these links: Monitoring Apache Spark with Prometheus, https://argus-sec.com/monitoring-spark-prometheus/. But I could not make it work with those either. I have read that there is a way to get metrics into Graphite and then export them to Prometheus, but I could not find useful documentation.
There are a few ways of monitoring Apache Spark with Prometheus. One of them is via JmxSink + jmx-exporter. Preparations: uncomment *.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink in spark/conf/metrics.properties, download jmx-exporter by following the link on prometheus/jmx_exporter, and download the example Prometheus config file. Use it in spark-shell or spark-submit. In the following command, the jmx_prometheus_javaagent-0.3.1.jar file and the spark.yml were downloaded in the previous steps; the paths may need to be changed accordingly.

  bin/spark-shell --conf "spark.driver.extraJavaOptions=-javaagent:jmx_prometheus_javaagent-0.3.1.jar=8080:spark.yml"

Access it: after running, the metrics are available at localhost:8080/metrics. Next: configure Prometheus to scrape the metrics from jmx-exporter. NOTE: the discovery part has to be handled properly if it's running in a cluster environment.
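A minimal sketch of that last scrape step in prometheus.yml, assuming the driver's jmx-exporter agent listens on localhost:8080 as in the command above (the host and job name are placeholders):

  scrape_configs:
    - job_name: 'spark-driver'
      static_configs:
        - targets: ['localhost:8080']

In a real cluster you would swap the static_configs block for one of Prometheus' service discovery mechanisms so executors are found automatically.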
Prometheus
49,488,956
23
How can I create a receiver configuration with multiple email addresses in the "to" field?
You can put comma-separated email addresses in the to field: to: 'user1@example.com, user2@example.com'
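In context, a receiver in alertmanager.yml would look like this (the receiver name is a placeholder; SMTP settings are elided):

  receivers:
    - name: team-email
      email_configs:
        - to: 'user1@example.com, user2@example.com'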
Prometheus
47,921,028
23
I have container_fs_usage_bytes in Prometheus to monitor the container root fs, but it seems that there are no metrics for other volumes in cAdvisor.
I confirmed that Kubernetes 1.8 exposes volume metrics for Prometheus:
  kubelet_volume_stats_available_bytes
  kubelet_volume_stats_capacity_bytes
  kubelet_volume_stats_inodes
  kubelet_volume_stats_inodes_free
  kubelet_volume_stats_inodes_used
  kubelet_volume_stats_used_bytes
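These are reported by the kubelet per PersistentVolumeClaim, so a per-volume usage percentage can be derived with a query like this (a sketch; exact labels depend on your kubelet version):

  100 * kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes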
Prometheus
44,718,268
23
Failed to create clusterroles. <> is already assigned the roles of "container engine admin" & "container engine cluster admin".

  Error from server (Forbidden): error when creating "prometheus-operator/prometheus-operator-cluster-role.yaml": clusterroles.rbac.authorization.k8s.io "prometheus-operator" is forbidden: attempt to grant extra privileges: [{[create] [extensions] [thirdpartyresources] [] []} {[*] [monitoring.coreos.com] [alertmanagers] [] []} {[*] [monitoring.coreos.com] [prometheuses] [] []} {[*] [monitoring.coreos.com] [servicemonitors] [] []} {[*] [apps] [statefulsets] [] []} {[*] [] [configmaps] [] []} {[*] [] [secrets] [] []} {[list] [] [pods] [] []} {[delete] [] [pods] [] []} {[get] [] [services] [] []} {[create] [] [services] [] []} {[update] [] [services] [] []} {[get] [] [endpoints] [] []} {[create] [] [endpoints] [] []} {[update] [] [endpoints] [] []} {[list] [] [nodes] [] []} {[watch] [] [nodes] [] []}] user=&{<<my_account>>@gmail.com [system:authenticated] map[]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /swaggerapi /swaggerapi/* /version]}] ruleResolutionErrors=[]
I've got the same problem on Google Kubernetes Engine. Following the answer of enj and the comment of ccyang2005, here is the snippet that solved my problem :)

Step 1: get your identity

  gcloud info | grep Account

This will output something like: Account: [myname@example.org]

Step 2: grant cluster-admin to your current identity

  kubectl create clusterrolebinding myname-cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user=myname@example.org

This will output something like: clusterrolebinding "myname-cluster-admin-binding" created

After that, you'll be able to create ClusterRoles.
Prometheus
44,349,987
23
I run a v1.9.2 custom setup of Kubernetes and scrape various metrics with Prometheus v2.1.0. Among others, I scrape the kubelet and cAdvisor metrics. I want to answer the question: "How much of the CPU resources defined by requests and limits in my deployment are actually used by a pod (and its containers) in terms of (milli)cores?" There are a lot of scraped metrics available, but nothing like that. Maybe it could be calculated from the CPU usage time in seconds, but I don't know how. I was considering it's not possible, until a friend told me she runs Heapster in her cluster, which has a graph in the built-in Grafana that tells exactly that: it shows the individual CPU usage of a pod and its containers in (milli)cores. Since Heapster also uses kubelet and cAdvisor metrics, I wonder: how can I calculate the same? The metric in InfluxDB is named cpu/usage_rate, but even with Heapster's code, I couldn't figure out how they calculate it. Any help is appreciated, thanks!
We're using the container_cpu_usage_seconds_total metric to calculate Pod CPU usage. This metric contains the total amount of CPU seconds consumed per container, per core (this is important, as a Pod may consist of multiple containers, each of which can be scheduled across multiple cores; however, the metric has a pod_name annotation that we can use for aggregation). Of special interest is the change rate of that metric (which can be calculated with PromQL's rate() function): if it increases by 1 within one second, the Pod consumes 1 CPU core (or 1000 millicores) in that second. The following PromQL query does just that, computing the CPU usage of all Pods (using the sum(...) by (pod_name) operation) over a five-minute average:

  sum(rate(container_cpu_usage_seconds_total[5m])) by (pod_name)
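To relate that usage to the requests defined in a deployment, one option is to divide by the requested cores. This is a sketch that assumes kube-state-metrics is installed and exposes kube_pod_container_resource_requests_cpu_cores (metric and label names vary across versions); label_replace aligns kube-state-metrics' pod label with cAdvisor's pod_name:

  sum(rate(container_cpu_usage_seconds_total[5m])) by (pod_name)
    /
  sum(label_replace(kube_pod_container_resource_requests_cpu_cores, "pod_name", "$1", "pod", "(.*)")) by (pod_name)

A result of 0.5 would mean the pod is using half of what its containers requested.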
Prometheus
48,872,042
22
Background: I have installed Prometheus on my Kubernetes cluster (hosted on Google Container Engine) using the Helm chart for Prometheus. The problem: I cannot figure out how to add scrape targets to the Prometheus server. The prometheus.io site describes how I can mount a prometheus.yml file (which contains a list of scrape targets) to a Prometheus Docker container; I have done this locally and it works. However, I don't know how to specify scrape targets for a Prometheus setup installed via Kubernetes Helm. Do I need to add a volume to the Prometheus server pod that contains the scrape targets, and therefore update the YAML files generated by Helm? I am also not clear on how to expose metrics in a Kubernetes pod; do I need to forward a particular port?
You need to add annotations to the service you want to monitor:

  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      prometheus.io/scrape: 'true'

From the prometheus.yml in the chart:
  prometheus.io/scrape: only scrape services that have a value of true
  prometheus.io/scheme: http or https
  prometheus.io/path: override if the metrics path is not /metrics
  prometheus.io/port: if the metrics are exposed on a different port

And yes, you need to expose the port with metrics on the service so Prometheus can access it.
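Putting it together, an annotated Service might look like this (a sketch; the app name and port are placeholders):

  apiVersion: v1
  kind: Service
  metadata:
    name: my-app
    annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/port: '8080'
      prometheus.io/path: '/metrics'
  spec:
    selector:
      app: my-app
    ports:
      - port: 8080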
Prometheus
45,613,660
22
I have articles, and for each article I want a read count:

  # TYPE news_read_counter2 counter
  news_read_counter2{id="2000"} 168

The counters on the servers are saved in Redis/memcached, so they can get reset from time to time. After the Redis machine is restarted, the server doesn't have the last news_read_counter number and I start from zero again:

  # TYPE news_read_counter2 counter
  news_read_counter2{id="2000"} 2

Now, looking at the news_read_counter2{id="2000"} graph, I see that the counter drops to 2, while the docs say: "A counter is a cumulative metric that represents a single numerical value that only ever goes up." So to keep track of news_read_counter it seems I need to save the data into a DB, and I'm back at the starting point where I need MySQL to handle my data. Here is an image of the counter after Redis was restarted:
Counters are allowed to reset to 0, so there's no need to do anything special here to handle it. See http://www.robustperception.io/how-does-a-prometheus-counter-work/ for more detail. It's recommended to use a client library, which will handle all of this for you. Also, by convention you should suffix counters with _total, so that metric should be news_reads_total.
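Concretely, functions like rate() and increase() detect the reset and compensate for it, so a query such as the following still reports the true read rate across the Redis restart (using the conventional name from above):

  rate(news_reads_total[5m])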
Prometheus
37,548,412
22
On my ActiveMQ I have some queues whose names end with .error. On a Grafana dashboard I want to list all queues without these .error queues. Example:

  some.domain.one
  some.domain.one.error
  some.domain.two
  some.domain.two.error

To list all queues I use this query:

  org_apache_activemq_localhost_QueueSize{Type="Queue",Destination=~"some.domain.*",}

How do I exclude all .error queues?
You can use a negative regex matcher:

  org_apache_activemq_localhost_QueueSize{Type="Queue",Destination=~"some.domain.*",Destination!~".*\\.error"}

Note the doubled backslash: inside a double-quoted PromQL string the regex dot has to be escaped as \\. for the matcher to target a literal dot.
Prometheus
40,277,612
20
We graph fast counters with sum(rate(my_counter_total[1m])) or with sum(irate(my_counter_total[20s])), where the second one is preferable if you can always expect changes within the last couple of seconds. But how do you graph slow counters where you only have some increments every couple of minutes or even hours? Having values like 0.0013232/s is not very human-friendly. Let's say I want to graph how many users sign up to our service (we expect a couple of signups per hour). What's a reasonable query? We currently use the following to graph that in Grafana: query 3600 * sum(rate(signup_total[1h])), step 3600s, resolution 1/1. Is this reasonable? I'm still trying to understand how all those parameters play together to draw a graph. Can someone explain how the range selector ([10m]), the rate() and irate() functions, and the Step and Resolution settings in Grafana influence each other?
That's a correct way to do it. You can also use increase(), which is syntactic sugar for using rate() that way. "Can someone explain how the range selector...": this is only used by Prometheus, and indicates what data to work over. "The Step and Resolution settings in grafana influence each other?": these are used on the Grafana side; they affect how many time slices it'll request from Prometheus. These settings do not directly influence each other. However, the resolution should work out to be smaller than the range, or you'll be undersampling and missing information.
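With increase() the signup example yields a per-hour count directly, avoiding the manual multiplication by 3600 (same metric as in the question):

  sum(increase(signup_total[1h]))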
Prometheus
38,659,784
20
How do I write a query that outputs average memory usage for instances over the past 24 hours? The following query displays the current memory usage:

  100 * (1 - ((node_memory_MemFree + node_memory_Cached + node_memory_Buffers) / node_memory_MemTotal))

For CPU, I was able to use irate:

  100 * (1 - avg(irate(node_cpu[24h])) BY (instance))

How do I use irate and avg for memory?
"average memory usage for instances over the past 24 hours": you can use avg_over_time:

  100 * (1 - ((avg_over_time(node_memory_MemFree[24h]) + avg_over_time(node_memory_Cached[24h]) + avg_over_time(node_memory_Buffers[24h])) / avg_over_time(node_memory_MemTotal[24h])))

"For CPU, I was able to use irate": irate only looks at the last two samples, and that query is the inverse of how many modes you have and will be constant (it's always 0.1 on my kernel). You want:

  100 - (avg by (instance) (rate(node_cpu{job="node",mode="idle"}[5m])) * 100)

Note that this is a 5-minute moving average; you can change [5m] to whatever period of time you are looking for, such as [24h].
Prometheus
48,835,035
20
I'm trying to apply Prometheus metrics using the micrometer @Timed annotations. I found out that they only work on controller endpoints and not on "simple" public and private methods. Given this example:

  @RestController
  public class TestController {

      @GetMapping("/test")
      @Timed("test-endpoint") // does create prometheus metrics
      public String test() {
          privateMethod();
          publicMethod();
          return "test";
      }

      @Timed("test-private") // does NOT create prometheus metrics
      private void privateMethod() { System.out.println("private stuff"); }

      @Timed("test-public") // does NOT create prometheus metrics
      public void publicMethod() { System.out.println("public stuff"); }
  }

creates the following metrics:

  ...
  # HELP test_endpoint_seconds
  # TYPE test_endpoint_seconds summary
  test_endpoint_seconds_count{class="com.example.micrometerannotationexample.TestController",exception="none",method="test",} 1.0
  test_endpoint_seconds_sum{class="com.example.micrometerannotationexample.TestController",exception="none",method="test",} 0.0076286
  # HELP test_endpoint_seconds_max
  # TYPE test_endpoint_seconds_max gauge
  test_endpoint_seconds_max{class="com.example.micrometerannotationexample.TestController",exception="none",method="test",} 0.0076286
  ...

No metrics are found for @Timed("test-private") and @Timed("test-public"); why is that? Note: I've read on this GitHub thread that Spring Boot does not recognize @Timed annotations on arbitrary methods and that you need to manually configure a TimedAspect bean for it to work. I've tried that, but it still yields no results.

  @Configuration
  @EnableAspectJAutoProxy
  public class MetricsConfig {

      @Bean
      public TimedAspect timedAspect(MeterRegistry registry) {
          return new TimedAspect(registry);
      }
  }

To try this locally, see the necessary gist here.
@Timed works only on public methods called from another class. Spring Boot annotations like @Timed / @Transactional need so-called proxying, which happens only between invocations of public methods on different beans. A good explanation is this one: https://stackoverflow.com/a/3429757/2468241
Prometheus
71,587,610
19
We are getting to grips with alerting, so from time to time we need to clear out old alerts, which we did by calling the HTTP API to remove the pseudo time series where the alerts were stored, e.g.:

  DELETE https://prometheus/api/v1/series?match[]={__name__="ALERTS"}

We have recently upgraded our Prometheus server from 1.8 to 2.2.1. Calling this endpoint now gives:

  {
    "status": "error",
    "errorType": "internal",
    "error": "not implemented"
  }

I have done some research and found a solution in various locations, which I will summarise in an answer below in case it's of use to my fellow StackOverflowers.
Firstly, the admin API is not enabled by default in Prometheus 2. It must be made active by starting the server with the option --web.enable-admin-api. There is a new endpoint in v2 at https://prometheus/api/v2/admin/tsdb/delete_series. This takes a POST specifying the search criteria. For example, for a time series with the name ALERTS where the alert name is MyTestAlert, post the following application/json to the delete_series endpoint from the tool of your choice (tested with Postman 6 on Mac):

  {
    "matchers": [{
      "type": "EQ",
      "name": "__name__",
      "value": "ALERTS"
    }, {
      "type": "EQ",
      "name": "alertname",
      "value": "MyTestAlert"
    }]
  }

For completeness, and to free the disk space where the alerts were persisted, POST an empty payload to https://prometheus/api/v2/admin/tsdb/clean_tombstones. Answer aggregated from:
https://prometheus.io/docs/prometheus/latest/querying/api/
https://github.com/prometheus/prometheus/issues/3584
https://groups.google.com/forum/#!msg/prometheus-users/ToMKsb9fYp8/az6afuX3CgAJ
Prometheus
49,859,360
19
For a particular job in Prometheus, it seems like the typical config is something like this:

  static_configs:
    - targets: ['localhost:9090']

But in the case where I want a dynamic list of hosts, what would be the approach there? I was looking at scrape_config but that doesn't seem to accomplish what I'm after (unless I'm misreading?). Thank you in advance!
There are several ways of providing dynamic targets to your Prometheus. Please refer to the link here. Some of them are:
  azure_sd_config (for Azure VM metrics)
  consul_sd_config (for Consul's Catalog API)
  dns_sd_config (DNS-based service discovery)
  ec2_sd_config (for AWS EC2 instances)
  openstack_sd_config (for OpenStack Nova instances)
  file_sd_config (file-based service discovery)
I think what you require is file_sd_config. file_sd_config is a more generic way to configure static targets; you may provide the targets in YAML or JSON format. Please follow the link for detailed information.
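A minimal sketch of that setup (the file paths and job name are placeholders): prometheus.yml points at a targets file, which Prometheus re-reads on change without a restart:

  scrape_configs:
    - job_name: 'dynamic-hosts'
      file_sd_configs:
        - files:
            - /etc/prometheus/targets.yml

and /etc/prometheus/targets.yml lists the current hosts:

  - targets: ['10.0.0.1:9100', '10.0.0.2:9100']
    labels:
      env: production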
Prometheus
46,916,328
19
I am new to Prometheus and Grafana. My primary goal is to get the response time per request. For me it seemed to be a simple thing, but whatever I do I do not get the results I require. I need to be able to analyse the service latency in the last minutes/hours/days. The current implementation I found was a simple SUMMARY (without definition of quantiles) which is scraped every 15s. Is it possible to get the average request latency of the last minute from my Prometheus SUMMARY? If yes: how? If no: what should I do? Currently I am using the following query:

  rate(http_response_time_sum{application="myapp",handler="myHandler", status="200"}[1m]) / rate(http_response_time_count{application="myapp",handler="myHandler", status="200"}[1m])

I am getting two "datasets". The value of the first is "NaN"; I suppose this is the result of a division by zero. (I am using spring-client.)
Your query is correct. The result will be NaN if there have been no queries in the past minute.
Prometheus
47,305,424
18
Small question regarding Spring Boot, some of the useful default metrics, and how to properly use them in Grafana, please. Currently, with Spring Boot 2.5.1+ (question applicable to 2.x.x) with the Actuator + Micrometer + Prometheus dependencies, there are lots of very handy default metrics that come out of the box. I am seeing many of them with the pattern _max _count _sum. Example, just to take a few:

  spring_data_repository_invocations_seconds_max
  spring_data_repository_invocations_seconds_count
  spring_data_repository_invocations_seconds_sum
  reactor_netty_http_client_data_received_bytes_max
  reactor_netty_http_client_data_received_bytes_count
  reactor_netty_http_client_data_received_bytes_sum
  http_server_requests_seconds_max
  http_server_requests_seconds_count
  http_server_requests_seconds_sum

Unfortunately, I am not sure what to do with them or how to correctly use them, and I feel like my ignorance makes me miss out on some great application insights. Searching the web, I see some using them like this, to compute what seems to be an average with Grafana:

  irate(http_server_requests_seconds::sum{exception="None", uri!~".*actuator.*"}[5m]) / irate(http_server_requests_seconds::count{exception="None", uri!~".*actuator.*"}[5m])

But I am not sure if that is the correct way to use those. May I ask what sort of queries are possible and usually used when dealing with metrics of type _max _count _sum, please? Thank you.
UPD 2022/11: Recently I've had a chance to work with these metrics myself, and I made a dashboard with everything I say in this answer and more. It's available on GitHub or Grafana.com. I hope this will be a good example of how you can use these metrics.

Original answer: count and sum are generally used to calculate an average. count accumulates the number of times sum was increased, while sum holds the total value of something. Let's take http_server_requests_seconds for example:

  http_server_requests_seconds_sum 10
  http_server_requests_seconds_count 5

With the example above one can say that there were 5 HTTP requests and their combined duration was 10 seconds. If you divide sum by count you'll get the average request duration of 2 seconds. Having these you can create at least two useful panels: average request duration (= average latency) and request rate.

Request rate. Using the rate() or irate() function you can get how many requests there were per second:

  rate(http_server_requests_seconds_count[5m])

rate() works in the following way: Prometheus takes samples from the given interval ([5m] in this example) and calculates the difference between the current timepoint (not necessarily now) and [5m] ago. The obtained value is then divided by the amount of seconds in the interval. A short interval will make the graph look like a saw (every fluctuation will be noticeable); a long interval will make the line smoother and slower in displaying changes.

Average request duration. You could proceed with

  http_server_requests_seconds_sum / http_server_requests_seconds_count

but it is highly likely that you will only see a straight line on the graph. This is because values of those metrics grow too big with time and a really drastic change must occur for this query to show any difference. Because of this nature, it is better to calculate the average on interval samples of the data. Using the increase() function you can get an approximate value of how the metric changed during the interval. Thus:

  increase(http_server_requests_seconds_sum[5m]) / increase(http_server_requests_seconds_count[5m])

The value is approximate because under the hood increase() is rate() multiplied by [interval]. The error is insignificant for fast-moving counters (such as the request rate), just be ready that there can be an increase of 2.5 requests.

Aggregation and filtering. If you have already run one of the queries above, you will have noticed that there is not one line, but many. This is due to labels; each unique set of labels that the metric has is considered a separate time series. This can be fixed by using an aggregation function (like sum()). For example, you can aggregate the request rate by instance:

  sum by(instance) (rate(http_server_requests_seconds_count[5m]))

This will show you a line for each unique instance label. Now if you want to see only some and not all instances, you can do that with a filter. For example, to calculate a value just for the nodeA instance:

  sum by(instance) (rate(http_server_requests_seconds_count{instance="nodeA"}[5m]))

Read more about selectors here. With labels you can create any number of useful panels. Perhaps you'd like to calculate the percentage of exceptions, or their rate of occurrence, or perhaps a request rate by status code, you name it.

Note on max. From what I found on the web, max shows the maximum recorded value during some interval set in settings (the default is 2 minutes, if the source is to be trusted). This is a somewhat uncommon metric, and whether it is useful is up to you. Since it is a gauge (unlike sum and count it can go both up and down) you don't need extra functions (such as rate()) to see dynamics. Thus

  http_server_requests_seconds_max

will show you the maximum request duration. You can augment this with aggregation functions (avg(), sum(), etc.) and label filters to make it more useful.
Prometheus
67,964,176
18
I have a Grafana dashboard with template variables for services and instances. When I select a service how can I make it filter the second template variable list based on the first?
You can reference the first variable in the second variable's query. I'm not certain if there is a way using the label_values helper, though.

First variable:
  query: up
  regex: /.*app="([^"]*).*/

Second variable:
  query: up{app="$app"}
  regex: /.*instance="([^"]*).*/
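With the Prometheus datasource's label_values helper, an equivalent chained setup would look like this (a sketch; it assumes your metrics carry app and instance labels as above):

  First variable:  label_values(up, app)
  Second variable: label_values(up{app="$app"}, instance)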
Prometheus
41,773,162
18
I have a new server running Prometheus in docker-compose. I want to be able to reload the configuration file (prometheus.yml) without having to stop and start the container. Of course, since I persist the storage of Prometheus in a volume, the stop and start isn't really a problem, but it seems like overkill, especially since Prometheus itself has such a handy API to reload configs. I see other people with similar questions (e.g. here) but I have been unable to get those solutions to work for me. Maybe I'm overlooking something there.

docker-compose.yml:

  version: "3"
  services:
    grafana:
      restart: always
      container_name: grafana
      image: grafana/grafana:6.2.1
      ports:
        - 3000:3000
      volumes:
        - grafanadata:/var/lib/grafana
    prometheus:
      restart: always
      container_name: prometheus
      image: prom/prometheus:v2.10.0
      privileged: true
      volumes:
        - ./configuration/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
        - prometheusdata:/prometheus
      command:
        - '--config.file=/etc/prometheus/prometheus.yml'
        - '--web.enable-admin-api'
        - '--web.enable-lifecycle'
      ports:
        - 9090:9090
    node:
      restart: always
      container_name: node
      image: prom/node-exporter:v0.18.0
      ports:
        - 9100:9100
  volumes:
    grafanadata:
    prometheusdata:

Alas, my results. When I run curl -X POST http://localhost:9090/-/reload, the docker-compose logs give:

  prometheus | level=info ts=2019-06-17T15:33:02.690Z caller=main.go:730 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
  prometheus | level=info ts=2019-06-17T15:33:02.691Z caller=main.go:758 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml

So Prometheus' end is working fine; all good so far. However, when I edit ./configuration/prometheus/prometheus.yml, the changes don't propagate to the container. Furthermore, when I try to edit /etc/prometheus/prometheus.yml in the container, I see that it is read-only (and, as an aside, the container does not have a sudo command). Is there a Docker-native way to hot/live reload these config files into the container directory? As stated, the down/start option works for now, but I'm curious if there is a more elegant solution.
docker-compose kill -s SIGHUP prometheus does the trick, so Vishrant was certainly on to something there.
Prometheus
56,585,607
18
I know that CPU utilization is given by the percentage of non-idle time over the total time of the CPU. In Prometheus, the rate and irate functions calculate the rate of change in a vector array. People often calculate CPU utilisation with the following PromQL expression:

  (100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[1m])) * 100))

I don't understand how calculating the per-second change of non-idle time is equivalent to calculating the CPU usage. Can somebody explain mathematically why this makes sense?
There are a couple of things to unwrap here.

First, rate vs irate. Neither the linked question nor the blog post address this (but Eitan's answer does touch on it). The difference is that rate estimates the average rate over the requested range (1 minute, in your case) while irate computes the rate based on the last 2 samples only. Leaving aside the "estimate" part (see this answer if you're curious), the practical difference between the 2 is that rate will smooth out the result, whereas irate will return a sampling of CPU usage, which is more likely to show extremes in CPU usage but also more prone to aliasing.

E.g. if you look at Prometheus' CPU usage, you'll notice that it's at a somewhat constant baseline, with a spike every time a large rule group is evaluated. Given a time range that was at least as long as Prometheus' evaluation interval, if you used rate you'd get a more or less constant CPU usage over time (i.e. a flat line). With irate (assuming a scrape interval of 5s) you'd get one of 2 things: if your resolution (i.e. step) was not aligned with Prometheus' evaluation interval (e.g. the resolution was 1m and the evaluation interval was 13s) you'd get a random sampling of CPU usage and would hopefully see values close to both the highest and lowest CPU usage over time on a graph; if your resolution was aligned with Prometheus' evaluation interval (e.g. 1m resolution and 15s evaluation interval) then you'd either see the baseline CPU usage everywhere (because you happen to look at 5s intervals set 1 minute apart, when no rule evaluation happens) or the peak CPU usage everywhere (because you happen to look at 5s intervals 1 minute apart that each cover a rule evaluation).

Regarding the second point, the apparent confusion over what the node_cpu_seconds_total metric represents: it is a counter, meaning it's a number that increments continuously and essentially measures the amount of time the CPU was idle since the exporter started. The absolute value is not all that useful (as it depends on when the exporter started and will drop to 0 on every restart). What's interesting about it is by how much it increased over a period of time: from that you can compute, for a given period of time, a rate of increase per second (average, with rate; instant, with irate) or an absolute increase (with increase). So both rate(node_cpu_seconds_total{mode="idle"}[1m]) and irate(node_cpu_seconds_total{mode="idle"}[1m]) will give you a ratio (between 0.0 and 1.0) of how much the CPU was idle (over the past minute, and respectively between the last 2 samples).
Prometheus
55,556,051
18
The application was working correctly with version 2.2.6, but after upgrading to the latest Spring Boot 2.3.0 it stopped working and fails during startup.

  2020-05-20T08:43:04.408+01:00 [APP/PROC/WEB/0] [OUT] 2020-05-20 07:43:04.407 ERROR 15 --- [ main] o.s.b.web.embedded.tomcat.TomcatStarter : Error starting Tomcat context. Exception: org.springframework.beans.factory.UnsatisfiedDependencyException. Message: Error creating bean with name 'webMvcMetricsFilter' defined in class path resource [org/springframework/boot/actuate/autoconfigure/metrics/web/servlet/WebMvcMetricsAutoConfiguration.class]: Unsatisfied dependency expressed through method 'webMvcMetricsFilter' parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'compositeMeterRegistry' defined in class path resource [org/springframework/boot/actuate/autoconfigure/metrics/CompositeMeterRegistryConfiguration.class]: Unsatisfied dependency expressed through method 'compositeMeterRegistry' parameter 1; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'prometheusMeterRegistry' defined in class path resource [org/springframework/boot/actuate/autoconfigure/metrics/export/prometheus/PrometheusMetricsExportAutoConfiguration.class]: Unsatisfied dependency expressed through method 'prometheusMeterRegistry' parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'prometheusConfig' defined in class path resource [org/springframework/boot/actuate/autoconfigure/metrics/export/prometheus/PrometheusMetricsExportAutoConfiguration.class]: Unsatisfied dependency expressed through method 'prometheusConfig' parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'management.metrics.export.prometheus-org.springframework.boot.actuate.autoconfigure.metrics.export.prometheus.PrometheusProperties': Lookup method resolution failed; nested exception is java.lang.IllegalStateException: Failed to introspect Class [org.springframework.boot.actuate.autoconfigure.metrics.export.prometheus.PrometheusProperties] from ClassLoader [org.springframework.boot.loader.LaunchedURLClassLoader@4f3f5b24]

A ClassNotFoundException can be seen at the bottom of the startup trace.
In your particular case, the micrometer-registry-prometheus version must be in line with Spring Boot's micrometer-core version. It is 1.5.1 because Spring Boot (Actuator) 2.3.0 pulls that dependency in at that version.

  dependencies {
      implementation group: 'io.micrometer', name: 'micrometer-registry-prometheus', version: '1.5.1'
  }
Prometheus
61,908,419
17
Hi everyone. I have deployed a Kubernetes cluster based on kubeadm and, for the purpose of performing HorizontalPodAutoscaling based on custom metrics, I have deployed prometheus-adapter through Helm. Now I want to edit the configuration for prometheus-adapter, and because I am new to Helm, I don't know how to do this. Could you guide me on how to edit deployed Helm charts?
I guess helm upgrade is what you are looking for. This command upgrades a release to a specified version of a chart and/or updates chart values. So since you have deployed prometheus-adapter, you can use the helm fetch command to download the chart from a repository and (optionally) unpack it in a local directory. You will have all the YAMLs; you can edit them and upgrade your currently deployed chart via helm upgrade. I found an example which should explain it to you more precisely.
Prometheus
58,256,871
17
I'm using the official stable/prometheus-operator chart to deploy Prometheus with Helm. It's working well so far, except for the annoying CPUThrottlingHigh alert that is firing for many pods (including Prometheus' own config-reloader containers). This alert is currently under discussion, and I want to silence its notifications for now. The Alertmanager has a silence feature, but it is web-based: "Silences are a straightforward way to simply mute alerts for a given time. Silences are configured in the web interface of the Alertmanager." Is there a way to mute notifications from CPUThrottlingHigh using a config file?
One option is to route alerts you want silenced to a "null" receiver. In alertmanager.yaml:

  route:
    # Other settings...
    group_wait: 0s
    group_interval: 1m
    repeat_interval: 1h
    # Default receiver.
    receiver: "null"
    routes:
      # continue defaults to false, so the first match will end routing.
      - match:
          # This was previously named DeadMansSwitch
          alertname: Watchdog
        receiver: "null"
      - match:
          alertname: CPUThrottlingHigh
        receiver: "null"
      - receiver: "regular_alert_receiver"
  receivers:
    - name: "null"
    - name: regular_alert_receiver
      <snip>
Prometheus
54,806,336
17
We are about to set up Prometheus for monitoring and alerting for our cloud services, including a continuous integration & deployment pipeline for the Prometheus service and configuration like alerting rules/thresholds. For that I am thinking about these categories I want to write automated tests for:
  1. Basic syntax checks for configuration during deployment (we already do this with promtool and amtool)
  2. Tests for alert rules (what leads to alerts) during deployment
  3. Tests for alert routing (who gets alerted about what) during deployment
  4. Recurring checks that the alerting system is working properly in production
The most important part to me right now is testing the alert rules, but I have found no tooling to do that. I could imagine setting up a Prometheus instance during deployment, feeding it with some metric samples (though I worry how I would do that with the pull architecture of Prometheus) and then running queries against it. The only thing I found so far is a blog post about monitoring the Prometheus Alertmanager chain as a whole, related to the last category. Has anyone done something like that, or is there anything I missed?
A new version of Prometheus (2.5) allows you to write tests for alerts; here is a link. You can check points 1 and 2. You have to define the data and the expected output (for example in test.yml):

  rule_files:
    - alerts.yml

  evaluation_interval: 1m

  tests:
    # Test 1.
    - interval: 1m
      # Series data.
      input_series:
        - series: 'up{job="prometheus", instance="localhost:9090"}'
          values: '0 0 0 0 0 0 0 0 0 0 0 0 0 0 0'
        - series: 'up{job="node_exporter", instance="localhost:9100"}'
          values: '1+0x6 0 0 0 0 0 0 0 0'
          # 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0
      # Unit test for alerting rules.
      alert_rule_test:
        # Unit test 1.
        - eval_time: 10m
          alertname: InstanceDown
          exp_alerts:
            # Alert 1.
            - exp_labels:
                severity: page
                instance: localhost:9090
                job: prometheus
              exp_annotations:
                summary: "Instance localhost:9090 down"
                description: "localhost:9090 of job prometheus has been down for more than 5 minutes."

You can run the tests using Docker:

  docker run \
    -v $PROJECT/testing:/tmp \
    --entrypoint "/bin/promtool" prom/prometheus:v2.5.0 \
    test rules /tmp/test.yml

promtool will validate whether your alert InstanceDown from the file alerts.yml was active. The advantage of this approach is that you don't have to start Prometheus.
Prometheus
46,589,949
17
Because Prometheus only supports text metrics and many tools return metrics in JSON (like Finatra, Spring Boot), I created a simple proxy which translates the JSON into text. Because I want to use it for multiple sources, the target from which the actual metrics are to be retrieved is set via a query param. The metrics URL looks like this:

  /metrics?prefix=finatra&url=http://<ip>:9990/admin/metrics.json

This works fine in a browser or curl. However, in Prometheus the '?' gets encoded to '%3F' and therefore the request fails:

  /metrics%3Fprefix=finatra&url=http://<ip>:9990/admin/metrics.json

How can I prevent Prometheus from encoding the '?'? Is this a bug in Prometheus? I already tried escaping with % or \, using unicode etc., but still no luck.
This behaviour is correct, as the metrics path is a path, not an arbitrary suffix on the protocol, host and port. You're looking for the params configuration option:

  scrape_configs:
    - job_name: 'somename'
      params:
        prefix: ['finatra']
        url: ['http://:9090/admin/metrics.json']
Prometheus
40,172,415
17
I have configured a Grafana dashboard to monitor Prometheus metrics for some of my Spring Boot services. I have a single panel and a Prometheus query for every service on it. Now I want to add alerts for each one of those queries, but I couldn't find a way to add multiple alerts on a single panel; I could add one only for one of the queries. Is there a way to do it, or would I need to split the panel into multiple panels?
You can specify the query that the alert threshold evaluates within the 'conditions', but it will still be just one alert. As such, your alert message won't include anything to distinguish which specific query condition triggered the alert; it's just whatever text is in the box (AFAIK there's currently no way to add variables to the alert message). I've ended up with a separate dashboard which isn't generally viewed, just for alerts, with multiple panels, one for each alert. You can quickly duplicate them by using the panel JSON and a search/replace for the node name, service name, etc.
Prometheus
63,245,770
16
I have a Grafana dashboard panel configured to render the results of a Prometheus query. There are a large number of series returned by the query, with the legend displayed to the right. If the user is looking for a specific series, they have to potentially scroll through all of them, and it's easy to miss the one they're looking for. So I'd like to sort the legend by series name, but I can't find any way to do that. My series name is a concatenation of two labels, so if I could sort the instant vector returned from the PromQL query by label value, I think Grafana would use that order in the legend. But I don't see any way to do that in Prometheus. There is a sort() function, but it sorts by sample value. And I don't see any way to sort the legend in Grafana.
As far as I know, you can only use the sort() function to sort metrics by value. According to this PR, Prometheus did not (at the time) intend to provide a sort_by_label() function. According to this issue, Grafana displays the query results from Prometheus without sorting. According to this issue, Grafana supports sorting by value when displaying the legend. In Grafana 7, Prometheus metrics can be transformed from time series format to table format using the Transform module, so that you can sort the metrics by any label or value. In December 2023, Prometheus v2.49 finally added sort_by_label() and sort_by_label_desc().
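Those newer functions sort instant-vector results by the values of the given label, e.g. (a sketch; at the time of their introduction they were experimental and may require starting Prometheus with --enable-feature=promql-experimental-functions):

  sort_by_label(up, "instance")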
Prometheus
64,395,442
16
I am currently trying to migrate our Prometheus lib to Spring Boot 2.0.3.RELEASE. We use a custom path for Prometheus, and so far we use a workaround to ensure this, since there is the possibility of a custom path for the info and health endpoints via management.endpoint.<health/info>.path. I tried to specify management.endpoint.prometheus.path, but it was still only accessible under /actuator/prometheus. How can I use a custom path for Prometheus? We enable Prometheus using the following libs (snippet of our build.gradle):

  compile "org.springframework.boot:spring-boot-starter-actuator:2.0.3.RELEASE"
  compile "io.micrometer:micrometer-core:2.0.5"
  compile "io.micrometer:micrometer-registry-prometheus:2.0.5"

We also use the import of the class PrometheusMetricsExportAutoConfiguration. Your help is highly appreciated :)
From the reference documentation: By default, endpoints are exposed over HTTP under the /actuator path by using the ID of the endpoint. For example, the beans endpoint is exposed under /actuator/beans. If you want to map endpoints to a different path, you can use the management.endpoints.web.path-mapping property. Also, if you want to change the base path, you can use management.endpoints.web.base-path. The following example remaps /actuator/health to /healthcheck in application.properties:

  management.endpoints.web.base-path=/
  management.endpoints.web.path-mapping.health=healthcheck

So, to remap the prometheus endpoint to a different path beneath /actuator you can use the following property:

  management.endpoints.web.path-mapping.prometheus=whatever-you-want

The above will make the Prometheus endpoint available at /actuator/whatever-you-want. If you want the Prometheus endpoint to be available at the root, you'll have to move all the endpoints there and remap it:

  management.endpoints.web.base-path=/
  management.endpoints.web.path-mapping.prometheus=whatever-you-want

The above will make the Prometheus endpoint available at /whatever-you-want, but with the side effect of also moving any other enabled endpoints up to / rather than them being beneath /actuator.
Prometheus
51,195,237
16
I'm trying to calculate Easter Sunday in PromQL using Gauss's Easter algorithm (I need to ignore some alert rules on public holidays). I can calculate the day, but I'm having a problem with the month, as I need something like an if/else expression. My recording rule easter_sunday_in_april returns 1 if Easter is in April and 0 if it is in March. (How) can I express the following in PromQL?

  if (easter_sunday_in_april > 0) return 4 else return 3

For the sake of completeness, I attach my recording rules here:

  - record: a
    expr: year(europe_time) % 4
  - record: b
    expr: year(europe_time) % 7
  - record: c
    expr: year(europe_time) % 19
  - record: d
    expr: (19*c + 24) % 30
  - record: e
    expr: (2*a + 4*b + 6*d + 5) % 7
  - record: f
    expr: floor((c + 11*d + 22*e)/451)
  - record: easter_sunday_day_of_month_temp
    expr: 22 + d + e - (7*f)
  - record: easter_sunday_day_of_month_in_april
    expr: easter_sunday_day_of_month_temp > bool 31
  - record: easter_sunday_day_of_month
    expr: easter_sunday_day_of_month_temp % 31
The if(easter_sunday_in_april > 0) return 4 else return 3 can be expressed as the following PromQL query:

  (vector(4) and on() (easter_sunday_in_april > 0)) or on() vector(3)

It uses the and and or logical operators and the on() modifier. P.S. This query can be expressed in a more easy-to-understand form with MetricsQL via the if and default operators:

  (4 if (easter_sunday_in_april > 0)) default 3

MetricsQL is the PromQL-like query language provided by VictoriaMetrics, the project I work on.
Prometheus
64,204,913
16
Consider a Prometheus metric foo_total that counts the total number of occurrences of an event foo, i.e. the metric will only increase as long as the providing service isn't restarted. Is there any way to get the timespan (e.g. number of seconds) since the last increase of that metric? I know that due to the scrape period the value won't be exact, but an accuracy of a couple of minutes is sufficient for me. Background: I want to use that kind of query in Grafana to have an overview of whether some services are used regularly and whether some jobs are done within a defined grace period. I don't have any influence on the metric itself.
Below is the JSON for a Singlestat panel that will display the time of the last update to the up{job="prometheus"} metric. This is not exactly what you asked for: it's the last time rather than the timespan since; it's only useful as a Singlestat panel (i.e. you can't take the value and graph it, since it's not a single value); and it will only display changes covered by the dashboard's time range.

The underlying query is timestamp(changes(up{job="prometheus"}[$__interval]) > 0) * 1000, so the query will basically return all timestamps where there have been any changes during the last $__interval seconds (determined dynamically by the time range and the size of the Singlestat panel in pixels). The Singlestat panel will then display the last value, if there is any. (The * 1000 is there because Grafana expects timestamps in milliseconds.)

  {
    "type": "singlestat",
    "title": "Last Change",
    "gridPos": { "x": 0, "y": 0, "w": 12, "h": 9 },
    "id": 8,
    "targets": [
      {
        "expr": "timestamp(changes(up{job=\"prometheus\"}[$__interval]) > 0) * 1000",
        "intervalFactor": 1,
        "format": "time_series",
        "refId": "A",
        "interval": "10s"
      }
    ],
    "links": [],
    "maxDataPoints": 100,
    "interval": null,
    "cacheTimeout": null,
    "format": "dateTimeAsIso",
    "prefix": "",
    "postfix": "",
    "nullText": null,
    "valueMaps": [ { "value": "null", "op": "=", "text": "N/A" } ],
    "mappingTypes": [
      { "name": "value to text", "value": 1 },
      { "name": "range to text", "value": 2 }
    ],
    "rangeMaps": [ { "from": "null", "to": "null", "text": "N/A" } ],
    "mappingType": 1,
    "nullPointMode": "connected",
    "valueName": "current",
    "prefixFontSize": "50%",
    "valueFontSize": "80%",
    "postfixFontSize": "50%",
    "thresholds": "",
    "colorBackground": false,
    "colorValue": false,
    "colors": [ "#299c46", "rgba(237, 129, 40, 0.89)", "#d44a3a" ],
    "sparkline": { "show": false, "full": false, "lineColor": "rgb(31, 120, 193)", "fillColor": "rgba(31, 118, 189, 0.18)" },
    "gauge": { "show": false, "minValue": 0, "maxValue": 100, "thresholdMarkers": true, "thresholdLabels": false },
    "tableColumn": ""
  }

If you wanted this to be more reliable, you could define a Prometheus recording rule with a value equal to the current timestamp if there have been any changes in the last few seconds/minutes (depending on how frequently Prometheus collects the metric), or the rule's previous value otherwise. E.g. (not tested):

  groups:
    - name: last-update
      rules:
        - record: last-update
          expr: |
            timestamp(changes(up{job="prometheus"}[1m]) > 0)
              or
            last-update

Replace up{job="prometheus"} with your metric selector and 1m with an interval that is at least as long as your collection interval, and ideally quite a bit longer, in order to cover any collection interval jitter or missed scrapes. Then you would use an expression like time() - last-update in Grafana to get the timespan since the last change. And you could use it in any sort of panel, without having to rely on the panel picking the last value for you.

Edit: One of the new features expected in the 2.7.0 release of Prometheus (which is due in about 2-3 weeks, if they keep to their 6 week release schedule) is subquery support, meaning that you should be able to implement the latter, "more reliable" solution without the help of a recording rule. If I understand this correctly, the query should look something like this:

  time() - max_over_time(timestamp(changes(up{job="prometheus"}[5m]) > 0)[24h:1m])

But, just as before, this will not be a particularly efficient query, particularly over large numbers of series. You may also want to subtract 5 minutes from that and limit it using clamp_min to a non-negative value, to adjust for the 5 minute range.
Prometheus
54,148,451
16
I would like to install Prometheus on port 8080 instead of 9090 (its normal default). To this end I have edited /etc/systemd/system/prometheus.service to contain this line:

  ExecStart=/usr/local/bin/prometheus \
    --config.file=/etc/prometheus.yaml --web.enable-admin-api \
    --web.listen-address=":8080"

I.e., I am using the option --web.listen-address to specify the non-default port. However, when I start Prometheus (2.0 beta) with systemctl start prometheus, I receive this error message:

  parse external URL "": invalid external URL "http://<myhost>:8080\"/"

So how can I configure Prometheus such that I can reach its web UI at http://<myhost>:8080/ (instead of http://<myhost>:9090)?
The quotes were superfluous. This line will work:

  ExecStart=/usr/local/bin/prometheus \
    --config.file=/etc/prometheus.yaml --web.enable-admin-api \
    --web.listen-address=:8080
Prometheus
47,414,593
16
I have a multi-container pod in my kubernetes deployment:

- java
- redis
- nginx

For every one of those containers, there's a companion container with a Prometheus exporter as well. The question is how can I expose those ports to Prometheus if the annotations section supports only one port per pod?

annotations:
  prometheus.io/scrape: 'true'
  prometheus.io/port: 'xxxx'

but I need something like this:

annotations:
  prometheus.io/scrape: 'true'
  prometheus.io/port_1: 'xxxx'
  prometheus.io/port_2: 'yyyy'
  prometheus.io/port_3: 'zzzz'

Maybe there's some other method to scrape all metrics from my multi-container pods? Thanks in advance for any kind of help.
Here's the approach: use a scrape job that selects target ports by name rather than by a port annotation, and put it in your own config (a sketch is shown below). Next, add:

annotations:
  prometheus.io/scrape: 'true'

to your pod metadata. And on every container which exposes /metrics to Prometheus, create an appropriately named port called metrics. That's it. Prometheus will scrape only those ports, and you avoid situations like your Redis instance receiving HTTP scrape requests on its 6379 port.
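A minimal sketch of such a job (the job name and the exact relabel rules are my illustration of the approach, not taken from the original answer):

- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Keep only pods that opted in via the annotation.
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # Keep only container ports named "metrics".
    - source_labels: [__meta_kubernetes_pod_container_port_name]
      action: keep
      regex: metrics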
Prometheus
43,121,132
16
Is there a way to send logs to Loki directly, without having to use one of its agents? For example, if I have an API, is it possible to send request/response logs directly to Loki from that API, without the interference of, for example, Promtail?
Loki HTTP API

The Loki HTTP API allows pushing messages directly to a Grafana Loki server:

POST /loki/api/v1/push

/loki/api/v1/push is the endpoint used to send log entries to Loki. The default behavior is for the POST body to be a snappy-compressed protobuf message:

- Protobuf definition
- Go client library

Alternatively, if the Content-Type header is set to application/json, a JSON post body can be sent in the following format:

{
  "streams": [
    {
      "stream": {
        "label": "value"
      },
      "values": [
        [ "<unix epoch in nanoseconds>", "<log line>" ],
        [ "<unix epoch in nanoseconds>", "<log line>" ]
      ]
    }
  ]
}

You can set the Content-Encoding: gzip request header and post gzipped JSON.

Example:

curl -v -H "Content-Type: application/json" -XPOST -s "http://localhost:3100/loki/api/v1/push" --data-raw \
  '{"streams": [{ "stream": { "foo": "bar2" }, "values": [ [ "1570818238000000000", "fizzbuzz" ] ] }]}'

So it is easy to create a JSON-formatted string with logs and send it to Grafana Loki.

Libraries

There are some libraries implementing several Grafana Loki protocols. There is also (my) zero-dependency library in pure Java 1.8, which implements pushing logs in JSON format to Grafana Loki and works on the Java SE and Android platforms: https://github.com/mjfryc/mjaron-tinyloki-java

Security

The above API doesn't support any access restrictions, as written here; when using it over a public network, consider e.g. configuring an Nginx proxy with HTTPS from Certbot and Basic Authentication.
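If you'd rather push from code than from curl, here is a minimal Python sketch of the same JSON push (the URL, label set and log line are placeholders):

import time
import requests

payload = {
    "streams": [
        {
            "stream": {"app": "my-api"},  # hypothetical label set
            "values": [
                # Loki expects the timestamp as a string of nanoseconds.
                [str(int(time.time() * 1e9)), "request handled in 12ms"]
            ],
        }
    ]
}

# requests sets Content-Type: application/json for the json= argument.
resp = requests.post("http://localhost:3100/loki/api/v1/push", json=payload)
resp.raise_for_status()  # expect 204 No Content on success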
Prometheus
67,316,535
15
I am trying to generate Prometheus metrics using Micrometer.io with Spring Boot 2.0.0.RELEASE. When I try to expose the size of a List as a Gauge, it keeps displaying NaN. In the documentation it says that:

It is your responsibility to hold a strong reference to the state object that you are measuring with a Gauge.

I have tried some different ways but I could not solve the problem. Here is my code with some trials.

import io.micrometer.core.instrument.*;
import io.swagger.backend.model.Product;
import io.swagger.backend.service.ProductService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

@RestController
@RequestMapping("metrics")
public class ExampleController {

    private AtomicInteger atomicInteger = new AtomicInteger();
    private ProductService productService;
    private Gauge productGauge;

    @Autowired
    public ExampleController(ProductService productService,
                             MeterRegistry registry) {
        this.productService = productService;
        createGauge("product_gauge", productService.getProducts(), registry);
    }

    private void createGauge(String metricName, List<Product> products,
                             MeterRegistry registry) {
        // #1
        // this displays product_gauge as NaN
        AtomicInteger n = registry.gauge("product_gauge", new AtomicInteger(0));
        n.set(1);
        n.set(2);

        // #2
        // this also displays product_gauge as NaN
        Gauge
            .builder("product_gauge", products, List::size)
            .register(registry);

        // #3
        // this displays also NaN
        List<Integer> testListReference = Arrays.asList(1, 2);
        Gauge
            .builder("random_gauge", testListReference, List::size)
            .register(registry);

        // #4
        // this also displays NaN
        AtomicInteger currentHttpRequests = registry.gauge("current.http.requests", new AtomicInteger(0));
    }

    @GetMapping(path = "/product/decrement")
    public Counter decrementAndGetProductCounter() {
        // decrement the gauge by one
    }
}

Is there anyone who can help with this issue? Any help would be appreciated.
In all cases, you must hold a strong reference to the observed instance. When your createGauge() method exits, all function-stack-allocated references become eligible for garbage collection.

For #1, pass your atomicInteger field like this: registry.gauge("my_ai", atomicInteger);. Then increment/decrement as you wish. Whenever Micrometer needs to query it, it will, as long as it finds the reference.

For #2, pass your productService field and a lambda. Basically, whenever the gauge is queried, it will call that lambda with the provided object:

registry.gauge("product_gauge", productService, productService -> productService.getProducts().size());

(No guarantee regarding syntax errors.)
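Putting that together, a minimal sketch of a working version of the controller from the question (assuming the same ProductService; class and metric names follow the question):

@RestController
@RequestMapping("metrics")
public class ExampleController {

    private final ProductService productService;

    public ExampleController(ProductService productService, MeterRegistry registry) {
        // 'this.productService' is a strong reference held by a Spring
        // singleton, so the gauge's target cannot be garbage collected.
        this.productService = productService;
        Gauge.builder("product_gauge", productService,
                      service -> service.getProducts().size())
             .register(registry);
    }
}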
Prometheus
50,821,924
15
I am a little unclear on when to exactly use increase and when to use sum_over_time in order to calculate a periodic collection of data in Grafana. I want to calculate the total percentage of availability of my system. Thanks.
The "increase" function calculates how much a counter increased in the specified interval. The "sum_over_time" function calculates the sum of all values in the specified interval. Suppose you have the following data series in the specified interval: 5, 5, 5, 5, 6, 6, 7, 8, 8, 8 Then you would get: increase = 8-5 = 3 sum_over_time = 5+5+5+5+6+6+7+8+8+8 = 63 If your goal is to calculate the total percentage of availability I think it's better to use the "avg_over_time" function.
Prometheus
63,289,864
15
First off, I'm a little new to using helm... So I'm struggling to get the helm deployment of this: https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack to work the way I would like in my kubernetes cluster. I like what it has done so far, but how can I make it scrape a custom endpoint? I have seen this: https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus under the section titled "Scraping Pod Metrics via Annotations". I have added the following annotations to the pod deployment (and then the node port service) in kubernetes:

annotations = {
  "prometheus.io/scrape" = "true"
  "prometheus.io/path"   = "/appMetrics/prometheusMetrics"
  "prometheus.io/port"   = "443"
}

However, when I look in the targets page of prometheus I don't see it there. I also don't see it in the configuration file. So that makes me think this helm chart isn't deploying the same prometheus chart. So now the question is, how can I set up a custom scrape endpoint using the helm chart kube-prometheus-stack? From my reading this is the one I should be using, right?
Try the below in your custom_values.yaml and apply it.

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
    - job_name: your_job_name
      scrape_interval: 15s
      kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
          - your_namespace
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      - source_labels: [__address__]
        action: replace
        regex: ([^:]+)(?::\d+)?
        replacement: ${1}:your_port
        target_label: __address__
      - source_labels: [__meta_kubernetes_pod_label_app]
        action: keep
        regex: your_pod_name

You need to replace your_job_name, your_namespace, your_port, and your_pod_name to match your deployment. After I applied the above and reinstalled Prometheus via the Helm chart, I could see the target, and the metrics were exposed.
Prometheus
64,452,966
15
In a prometheus alert rule, how do I check for a value to be in a certain range? for eg., (x > 80 && x <= 100); when x is a complex expression it feels unnecessary to evaluate it twice. is there another way to represent this expression?
You can do x < 100 > 80 to chain them.
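For example, with a more complex expression the chained form only evaluates it once (the metric and threshold here are illustrative):

sum by (instance) (rate(http_requests_total[5m])) < 100 > 80

This keeps only the series whose value lies between 80 and 100, which is what you would put in the alert rule's expr.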
Prometheus
45,766,533
15
I wish to push a multi-labeled metric into Prometheus using the Pushgateway. The documentation offers a curl example but I need it sent via Python. In addition, I'd like to embed multiple labels into the metric.
First step: install the client:

pip install prometheus_client

Second step: paste the following into a Python interpreter:

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
g = Gauge('job_last_success_unixtime', 'Last time a batch job successfully finished', registry=registry)
g.set_to_current_time()
push_to_gateway('localhost:9091', job='batchA', registry=registry)
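Since the question also asks about embedding multiple labels: declare the label names on the metric and set values per sample. A sketch (the label names and values here are arbitrary):

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
g = Gauge('job_last_success_unixtime',
          'Last time a batch job successfully finished',
          labelnames=['environment', 'region'],  # labels embedded in the metric
          registry=registry)
g.labels(environment='prod', region='eu-west').set_to_current_time()
push_to_gateway('localhost:9091', job='batchA', registry=registry)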
Prometheus
40,989,737
15
I used Prometheus to measure business metrics like:

# HELP items_waiting_total Total number of items in a queue
# TYPE items_waiting_total gauge
items_waiting_total 149

I would like to keep this data for the very long term (5 years retention) and I don't need high frequency in scrape_interval, so I set up scrape_interval: "900s". When I check the graph in Prometheus with 60s resolution, it shows flapping, but that is not true. The question is: what is the maximum (recommended) scrape_interval in Prometheus?
It's not advisable to go above about 2 minutes. This is because staleness is 5 minutes by default (which is what's causing the gaps), and you want to allow for a failed scrape.
Prometheus
40,230,057
15
I have been able to obtain the metrics by sending an HTTP GET as follows:

# TYPE net_conntrack_dialer_conn_attempted_total untyped
net_conntrack_dialer_conn_attempted_total{dialer_name="federate",instance="localhost:9090",job="prometheus"} 1 1608520832877

Now I need to parse this data and obtain control over every piece of data so that I can convert and format it, e.g. as JSON. I have been looking into the ebnf package in Go: ebnf package

Can somebody point me in the right direction to parse the above data?
There's a nice package already available to do that, and it's by the Prometheus authors themselves. They have written a bunch of Go libraries that are shared across Prometheus components and libraries. They are considered internal to Prometheus, but you can use them.

Refer: github.com/prometheus/common doc. There's a package called expfmt that can decode and encode the Prometheus Exposition Format (Link). Yes, it follows the EBNF syntax, so the ebnf package could also be used, but you're getting expfmt right out of the box.

Package used: expfmt

Sample Input:

# HELP net_conntrack_dialer_conn_attempted_total
# TYPE net_conntrack_dialer_conn_attempted_total untyped
net_conntrack_dialer_conn_attempted_total{dialer_name="federate",instance="localhost:9090",job="prometheus"} 1 1608520832877

Sample Program:

package main

import (
    "flag"
    "fmt"
    "log"
    "os"

    dto "github.com/prometheus/client_model/go"
    "github.com/prometheus/common/expfmt"
)

func fatal(err error) {
    if err != nil {
        log.Fatalln(err)
    }
}

func parseMF(path string) (map[string]*dto.MetricFamily, error) {
    reader, err := os.Open(path)
    if err != nil {
        return nil, err
    }

    var parser expfmt.TextParser
    mf, err := parser.TextToMetricFamilies(reader)
    if err != nil {
        return nil, err
    }
    return mf, nil
}

func main() {
    f := flag.String("f", "", "set filepath")
    flag.Parse()

    mf, err := parseMF(*f)
    fatal(err)

    for k, v := range mf {
        fmt.Println("KEY: ", k)
        fmt.Println("VAL: ", v)
    }
}

Sample Output:

KEY:  net_conntrack_dialer_conn_attempted_total
VAL:  name:"net_conntrack_dialer_conn_attempted_total" type:UNTYPED metric:<label:<name:"dialer_name" value:"federate" > label:<name:"instance" value:"localhost:9090" > label:<name:"job" value:"prometheus" > untyped:<value:1 > timestamp_ms:1608520832877 >

So, expfmt is a good choice for your use-case.

Update: Formatting problem in OP's posted input:

Refer:
https://github.com/prometheus/pushgateway/issues/147#issuecomment-368215305
https://github.com/prometheus/pushgateway#command-line

Note that in the text protocol, each line has to end with a line-feed character (aka 'LF' or '\n'). Ending a line in other ways, e.g. with 'CR' aka '\r', 'CRLF' aka '\r\n', or just the end of the packet, will result in a protocol error.

But from the error message, I could see a \r char is present in the input, which is not acceptable by design. So use \n for line endings.
Prometheus
65,388,098
14
I am using counters to count the number of requests. Is there any way to get the current value of a Prometheus counter? My aim is to reuse the existing counter without allocating another variable. The Golang Prometheus client version is 1.1.0.
It's easy: have a function to fetch the Prometheus counter value.

import (
    "github.com/prometheus/client_golang/prometheus"
    dto "github.com/prometheus/client_model/go"
    "github.com/prometheus/common/log"
)

func GetCounterValue(metric *prometheus.CounterVec) float64 {
    var m = &dto.Metric{}
    if err := metric.WithLabelValues("label1", "label2").Write(m); err != nil {
        log.Error(err)
        return 0
    }
    return m.Counter.GetValue()
}
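A sketch of how that helper might be used (the metric name is a placeholder; this assumes the imports from the answer plus "fmt"):

var requests = prometheus.NewCounterVec(
    prometheus.CounterOpts{Name: "myapp_requests_total", Help: "Total requests."},
    []string{"label1", "label2"},
)

func main() {
    prometheus.MustRegister(requests)
    requests.WithLabelValues("label1", "label2").Inc()
    fmt.Println(GetCounterValue(requests)) // prints 1
}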
Prometheus
57,952,695
14
I have activated the Spring Actuator Prometheus endpoint /actuator/prometheus by adding the dependencies for Micrometer and Actuator and enabling the Prometheus endpoint. How can I get my custom metrics there?
You'll need to register your metrics with the Micrometer registry. The following example creates the counter in the constructor. The Micrometer registry is injected as a constructor parameter:

@Component
public class MyComponent {

    private final Counter myCounter;

    public MyComponent(MeterRegistry registry) {
        myCounter = Counter
            .builder("mycustomcounter")
            .description("this is my custom counter")
            .register(registry);
    }

    public void countedCall() {
        myCounter.increment();
    }

}

Once this is available, you'll have a metric mycustomcounter_total in the registry, available at the /prometheus URL. The suffix "total" is added to comply with the Prometheus naming conventions.
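Once the application is running, hitting /actuator/prometheus should show the counter rendered roughly like this (the value depends on how often countedCall() has run; this is a hand-written sample, not captured output):

# HELP mycustomcounter_total this is my custom counter
# TYPE mycustomcounter_total counter
mycustomcounter_total 3.0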
Prometheus
50,406,296
14
I have below labels in prometheus, how to create wildcard query while templating something like “query”: “label_values(application_*Count_Total,xyx)” . These values are generated from a Eclipse Microprofile REST-API application_getEnvVariablesCount_total application_getFEPmemberCount_total application_getLOBDetailsCount_total application_getPropertiesCount_total { "allValue": null, "current": { "isNone": true, "selected": false, "text": "None", "value": "" }, "datasource": "bcnc-prometheus", "definition": "microprofile1", "hide": 0, "includeAll": false, "label": null, "multi": false, "name": "newtest", "options": [ { "isNone": true, "selected": true, "text": "None", "value": "" } ], "query": "microprofile1", "refresh": 0, "regex": "{__name__=~\"application_.*Count_total\"}", "skipUrlSync": false, "sort": 0, "tagValuesQuery": "", "tags": [], "tagsQuery": "", "type": "query", "useTags": false },
Prometheus treats metric names the same way as label values with a special label - __name__. So the following query should select all the values for label xyx across metrics with names matching application_.*Count_total regexp: label_values({__name__=~"application_.*Count_total"}, xyx)
Prometheus
59,684,225
13
I'm looking at Prometheus metrics in a Grafana dashboard, and I'm confused by a few panels that display metrics based on an ID that is unfamiliar to me. I assume that /kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c points to a single pod, and I assume that /kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c/<another-long-string> resolves to a container in the pod, but how do I resolve this ID to the pod name and a container, i.e. how do I map this ID to the pod name I see when I run kubectl get pods?

I already tried running kubectl describe pods --all-namespaces | grep "99b2fe2a-104d-11e8-baa7-06145aa73a4c" but that didn't turn up anything.

Furthermore, there are several subpaths in /kubepods, such as /kubepods/burstable and /kubepods/besteffort. What do these mean and how does a given pod fall into one or another of these subpaths?

Lastly, where can I learn more about what manages /kubepods?

Prometheus Query:

sum (container_memory_working_set_bytes{id!="/",kubernetes_io_hostname=~"^$Node$"}) by (id) /

Thanks for reading.

Eric
OK, now that I've done some digging around, I'll attempt to answer all 3 of my own questions. I hope this helps someone else. How to do I map this ID to the pod name I see when I run kubectl get pods? Given the following, /kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c, the last bit is the pod UID, and can be resolved to a pod by looking at the metadata.uid property on the pod manifest: kubectl get pod --all-namespaces -o json | jq '.items[] | select(.metadata.uid == "99b2fe2a-104d-11e8-baa7-06145aa73a4c")' Once you've resolved the UID to a pod, we can resolve the second UID (container ID) to a container by matching it with the .status.containerStatuses[].containerID in the pod manifest: ~$ kubectl get pod my-pod-6f47444666-4nmbr -o json | jq '.status.containerStatuses[] | select(.containerID == "docker://5339636e84de619d65e1f1bd278c5007904e4993bc3972df8628668be6a1f2d6")' Furthermore, there are several subpaths in /kubepods, such as /kubepods/burstable and /kubepods/besteffort. What do these mean and how does a given pod fall into one or another of these subpaths? Burstable, BestEffort, and Guaranteed are Quality of Service (QoS) classes that Kubernetes assigns to pods based on the memory and cpu allocations in the pod spec. More information on QoS classes can be found here https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/. To quote: For a Pod to be given a QoS class of Guaranteed: Every Container in the Pod must have a memory limit and a memory request, and they must be the same. Every Container in the Pod must have a cpu limit and a cpu request, and they must be the same. A Pod is given a QoS class of Burstable if: The Pod does not meet the criteria for QoS class Guaranteed. At least one Container in the Pod has a memory or cpu request. For a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not have any memory or cpu limits or requests. Lastly, where can I learn more about what manages /kubepods? /kubepods/burstable, /kubepods/besteffort, and /kubepods/guaranteed are all a part of the cgroups hierarchy, which is located in /sys/fs/cgroup directory. Cgroups is what manages resource usage for container processes such as CPU, memory, disk I/O, and network. Each resource has its own place in the cgroup hierarchy filesystem, and in each resource sub-directory are /kubepods subdirectories. More info on cgroups and Docker containers here: https://docs.docker.com/config/containers/runmetrics/#control-groups
Prometheus
49,035,724
13
I'm transforming a Spring Boot application from Spring Boot 1 (with the Prometheus Simpleclient) to Spring Boot 2 (which uses Micrometer). I'm stumped at transforming the labels we have with Spring Boot 1 and Prometheus to concepts in Micrometer. For example (with Prometheus): private static Counter requestCounter = Counter.build() .name("sent_requests_total") .labelNames("method", "path") .help("Total number of rest requests sent") .register(); ... requestCounter.labels(request.getMethod().name(), path).inc(); The tags of Micrometer seem to be something different than the labels of Prometheus: All values have to be predeclared, not only the keys. Can one use Prometheus' labels with Spring (Boot) and Micrometer?
Further digging showed that only the keys of micrometer tags have to be predeclared - but the constructor really takes pairs of key/values; the values don't matter. And the keys have to be specified when using the metric. This works: private static final String COUNTER_BATCHMANAGER_SENT_REQUESTS = "batchmanager.sent.requests"; private static final String METHOD_TAG = "method"; private static final String PATH_TAG = "path"; private final Counter requestCounter; ... requestCounter = Counter.builder(COUNTER_BATCHMANAGER_SENT_REQUESTS) .description("Total number of rest requests sent") .tags(METHOD_TAG, "", PATH_TAG, "") .register(meterRegistry); ... Metrics.counter(COUNTER_BATCHMANAGER_SENT_REQUESTS, METHOD_TAG, methodName, PATH_TAG, path) .increment();
Prometheus
49,170,093
13
Use helm installed Prometheus and Grafana on minikube at local. $ helm install stable/prometheus $ helm install stable/grafana Prometheus server, alertmanager grafana can run after set port-forward: $ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}") $ kubectl --namespace default port-forward $POD_NAME 9090 $ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}") $ kubectl --namespace default port-forward $POD_NAME 9093 $ export POD_NAME=$(kubectl get pods --namespace default -l "app=excited-crocodile-grafana,component=grafana" -o jsonpath="{.items[0].metadata.name}") $ kubectl --namespace default port-forward $POD_NAME 3000 Add Data Source from grafana, got HTTP Error Bad Gateway error: Import dashboard 315 from: https://grafana.com/dashboards/315 Then check Kubernetes cluster monitoring (via Prometheus), got Templating init failed error: Why?
In the HTTP settings of Grafana you set Access to Proxy, which means that Grafana wants to access Prometheus. Since Kubernetes uses an overlay network, it is a different IP. There are two ways of solving this: Set Access to Direct, so the browser directly connects to Prometheus. Use the Kubernetes-internal IP or domain name. I don't know about the Prometheus Helm-chart, but assuming there is a Service named prometheus, something like http://prometheus:9090 should work.
Prometheus
48,338,122
13
I have a Spring Boot app throwing out open metric stats using Micrometer. For each of my HTTP endpoints, I can see the following metric, which I believe tracks the number of requests for the given endpoint:

http_server_requests_seconds_count

My question is how do I use this in a Grafana query to present the number of requests calling my endpoint, say, every minute? I tried http_client_requests_seconds_count{} and sum(rate(http_client_requests_seconds_count{}[1m])) but neither work. Thanks in advance.
rate(http_client_requests_seconds_count{}[1m]) will give you the number of requests your service received, expressed as a per-second rate. However, by using [1m] it will only look at the last minute to calculate that number, and it requires that you collect samples more often than once a minute. Meaning, you need to have collected 2 scrapes in that timeframe.

increase(http_client_requests_seconds_count{}[1m]) would return how much that count increased in that timeframe, which is probably what you want, though you still need to have 2 data points in that window to get a result.

Other ways you could accomplish your result:

increase(http_client_requests_seconds_count{}[2m]) / 2

By looking over 2 minutes then dividing by 2, you will have more data and it will flatten spikes, so you'll get a smoother chart.

rate(http_client_requests_seconds_count{}[1m]) * 60

By multiplying the rate by 60 you can change the per-second rate to a per-minute value.

Here is a writeup you can dig into to learn more about how they are calculated and why increases might not exactly align with integer values: https://promlabs.com/blog/2021/01/29/how-exactly-does-promql-calculate-rates
Prometheus
66,282,512
13
In the below PromQL query execution, "Resolution" is mentioned as 14 seconds and "total time series" is mentioned as 1.

- What does "resolution" with value 14 seconds mean?
- What is "time series"?
- What does "time series" with value 1 mean?
- What does "load time" indicate?
Resolution is the distance (in time) between points on the graph. 14s was automatically selected based on your query duration (which you've set to 1 hour). This resolution does not necessarily line up with the "real" samples, you can think of it as a sensible interpretation of your data for the given amount of time. For example, I might scrape an application every 60s, and in this case I can still get a point for every 14s via interpolation. The inverse is also true, for queries over a much larger duration (e.g. 1 day), you can have a resolution that is larger than the scrape interval, effectively down-sampling. Time series is a huge topic that you probably just need to google. But to say it briefly, the data you're returning is for only one thing (the "prometheus" job), or in other words, it is a single series of samples across time. If you had 2 jobs, "prometheus" and "foobar", and changed your label matcher to be {job=~".+"}, you would instead return 2 series (one for each label dimension). You would thus have a separate line for "prometheus" and "foobar". Load time is quite simply how long Prometheus took to handle your request.
Prometheus
69,620,345
13
I have an application that will be monitored by Prometheus, but the application needs a custom header key, like:

x-auth-token: <customrandomtoken>

What should I do with prometheus.yml?
Prometheus itself does not have a way to define custom headers in order to reach an exporter. The idea of adding the feature was discussed in this GitHub issue. Tl;dr: if you need a custom header, inject it with a forward proxy (I posted an example in another answer).

The prometheus-blackbox-exporter tag suggests that the question is about the exporter that makes probes, which is a separate thing, and it does have a way to set headers. Only, it does not scrape metrics, it makes them.

The blackbox exporter has its own configuration file, and it consists of modules. A module is a set of parameters defining how to do the probe and what result to expect. Here is an example of a module that looks for a 200-299 response code and uses an X-Auth-Token header:

modules:
  http_2xx_with_header:
    prober: http
    http:
      headers:
        X-Auth-Token: skdjfh98732hjf22exampletoken

More examples can be found here and the list of configuration options - here. Once you have made the blackbox exporter load the new configuration, you need to adjust the Prometheus configuration as well:

scrape_configs:
  - job_name: 'blackbox'
    metrics_path: /probe
    params:
      module: [http_2xx_with_header]  # <- Here goes the name of the new module
    static_configs:
      - targets:
        - http://prometheus.io
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9115
Prometheus
66,032,498
13
This is the official prometheus golang-client example:

package main

import (
    "log"
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var cpuTemp = prometheus.NewGauge(prometheus.GaugeOpts{
    Name: "cpu_temperature_celsius",
    Help: "Current temperature of the CPU.",
})

func init() {
    // Metrics have to be registered to be exposed:
    prometheus.MustRegister(cpuTemp)
}

func main() {
    cpuTemp.Set(65.3)

    // The Handler function provides a default handler to expose metrics
    // via an HTTP server. "/metrics" is the usual endpoint for that.
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":8080", nil))
}

In this code, the http server uses the promhttp library. How do I modify the metrics handler when using the gin framework? I did not find answers in the documentation.
We just utilize the promhttp handler.

package main

import (
    "github.com/gin-gonic/gin"
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var cpuTemp = prometheus.NewGauge(prometheus.GaugeOpts{
    Name: "cpu_temperature_celsius",
    Help: "Current temperature of the CPU.",
})

func init() {
    prometheus.MustRegister(cpuTemp)
}

func prometheusHandler() gin.HandlerFunc {
    h := promhttp.Handler()

    return func(c *gin.Context) {
        h.ServeHTTP(c.Writer, c.Request)
    }
}

func main() {
    cpuTemp.Set(65.3)

    r := gin.New()
    r.GET("/", func(c *gin.Context) {
        c.JSON(200, "Hello world!")
    })
    r.GET("/metrics", prometheusHandler())
    r.Run()
}

Or we can always switch to Prometheus middleware - https://github.com/zsais/go-gin-prometheus
Prometheus
65,608,610
13
We have two different teams working on different applications. I would like to send alert notifications to different Slack channels using the same alert expressions. I found some examples but I don't understand the main reason to use receiver: 'default' when trying to add a new route. What is the role of this, and what is affected if I change it? Meanwhile, I would appreciate help with how to send the notifications to multiple Slack channels. The new one is what I tried.

Current alertmanager.yml

receivers:
- name: 'team-1'
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/1'
    channel: '#hub-alerts'
route:
  group_wait: 10s
  group_interval: 5m
  receiver: 'team-1'
  repeat_interval: 1h
  group_by: [datacenter]

New alertmanager.yml

alertmanager.yml:
  receivers:
  - name: 'team-1'
    slack_configs:
    - api_url: 'https://hooks.slack.com/services/1'
      channel: '#channel-1'
      send_resolved: true
  - name: 'team-2'
    slack_configs:
    - api_url: 'https://hooks.slack.com/services/2'
      channel: '#channel-2'
      send_resolved: true
  route:
    group_wait: 10s
    group_interval: 5m
    repeat_interval: 1h
    group_by: [datacenter]
    receiver: 'default'
    routes:
    - receiver: 'team-1'
    - receiver: 'team-2'
You need to set the continue property on your routes to true. By default it is false.

The default behaviour of AlertManager is to traverse your routes for a match and exit at the first node it finds a match at. What you want to do is fire an alert at the match and continue to search for other matches and fire those too.

Relevant documentation section: https://prometheus.io/docs/alerting/latest/configuration/#route

An example using this: https://awesome-prometheus-alerts.grep.to/alertmanager.html

In-lined the example above in case it ever breaks.

# alertmanager.yml
route:
  # When a new group of alerts is created by an incoming alert, wait at
  # least 'group_wait' to send the initial notification.
  # This ensures that multiple alerts for the same group that start
  # firing shortly after one another are batched together on the first
  # notification.
  group_wait: 10s

  # When the first notification was sent, wait 'group_interval' to send a batch
  # of new alerts that started firing for that group.
  group_interval: 5m

  # If an alert has successfully been sent, wait 'repeat_interval' to
  # resend them.
  repeat_interval: 30m

  # A default receiver
  receiver: "slack"

  # All the above attributes are inherited by all child routes and can be
  # overwritten on each.
  routes:
    - receiver: "slack"
      group_wait: 10s
      match_re:
        severity: critical|warning
      continue: true
    - receiver: "pager"
      group_wait: 10s
      match_re:
        severity: critical
      continue: true

receivers:
  - name: "slack"
    slack_configs:
      - api_url: 'https://hooks.slack.com/services/XXXXXXXXX/XXXXXXXXX/xxxxxxxxxxxxxxxxxxxxxxxxxxx'
        send_resolved: true
        channel: 'monitoring'
        text: "{{ range .Alerts }}<!channel> {{ .Annotations.summary }}\n{{ .Annotations.description }}\n{{ end }}"
  - name: "pager"
    webhook_configs:
      - url: http://a.b.c.d:8080/send/sms
        send_resolved: true
Prometheus
62,672,730
13
What's the difference between probe_success and up? I see various examples where alerting is done based on either of them (eg. site down, instance down). Am I missing something?
up indicates whether Prometheus could talk to and successfully scrape a target, such as the blackbox exporter. probe_success is a metric exposed by the blackbox exporter indicating if a probe succeeded. For alerting you need both, as if the blackbox exporter is down or timing out then that's indicated by up and if the probe itself is failing that'll be indicated by probe_success.
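A sketch of what that pair of alert rules could look like (the names, thresholds, and the job label are illustrative, and these would sit under a rule group):

- alert: BlackboxExporterDown
  expr: up{job="blackbox"} == 0
  for: 5m
- alert: ProbeFailed
  expr: probe_success{job="blackbox"} == 0
  for: 5m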
Prometheus
51,984,837
13
For Prometheus metrics collection, like title, I could not really find a use case which only can be done via the type Summary, seems that they all somehow can be done via the type Histogram also. Lets take the request concurrency metrics as example, no doubt this can be perfectly done via type Summary, but i can also achieve the same effect by using type Histogram, as below: rate(http_request_duration_seconds_sum[1s]) / rate(http_request_duration_seconds_count[1s]) The only difference I can see is: for a summary the percentiles are computed in the client, it is made of a count and sum counters (like in Histogram type) and resulting quantile values. So I am a bit lost on what use cases really make the type Summary necessary/unique, please help to inspire me.
The Summary metric is not unique, many other instrumentation systems offer similar - such as Dropwizard's Histogram type (it's a histogram internally, but exposed as a quantile). This is one reason it exists, so such types from other instrumentation systems can be mapped more cleanly. Another reason it exists is historical. In Prometheus the Summary came before the Histogram, and the general recommendation is to use a Histogram as it's aggregatable where the Summary's quantiles are not. On the other hand histograms require you to pre-select buckets in order to be aggregatable and allow analysis over arbitrary time frames. There is a longer comparison of the two types in the docs.
Prometheus
51,146,578
13
I have a question about calculating response times with Prometheus summary metrics. I created a summary metric that does not only contain the service name but also the complete path and the http-method. Now I try to calculate the average response time for the complete service. I read the article about "rate then sum" and either I do not understand how the calculation is done or the calculation is IMHO not correct. As far as I read this should be the correct way to calculate the response time per second: sum by(service_id) ( rate(request_duration_sum{status_code=~"2.*"}[5m]) / rate(request_duration_count{status_code=~"2.*"}[5m]) ) What I understand here is create the "duration per second" (rate sum / rate count) value for each subset and then creates the sum per service_id. This looks absolutely wrong for me - but I think it does not work in the way I understand it. Another way to get an equal looking result is this: sum without (path,host) ( rate(request_duration_sum{status_code=~"2.*"}[5m]) / rate(request_duration_count{status_code=~"2.*"}[5m]) ) But what is the difference? What is really happening here? And why do I honestly only get measurable values if I use "max" instead of "sum"? If I would ignore everything I read I would try it in the following way: rate(sum by(service_id) request_duration_sum{status_code=~"2.*"}[5m]) / rate(sum by(service_id) request_duration_count{status_code=~"2.*"}[5m]) But this will not work at all... (instant vector vs range vector and so on...).
All of these examples are aggregating incorrectly, as you're averaging an average. You want:

  sum without (path, host) (
      rate(request_duration_sum{status_code=~"2.*"}[5m])
  )
/
  sum without (path, host) (
      rate(request_duration_count{status_code=~"2.*"}[5m])
  )

This will return the average latency per status_code, plus any other remaining labels.
Prometheus
51,064,821
13
What is the meaning of =~ operator in prometheus metrics? Can any help me what is the exact difference between = and =~ operator? for ex . process_cpu_seconds_total{instance="test"} process_cpu_seconds_total{instance=~"test"} The results are different.
"=~: Select labels that regex-match the provided string (or substring). For example, this selects all http_requests_total time series for staging, testing, and development environments and HTTP methods other than GET." http_requests_total{environment=~"staging|testing|development",method!="GET"} Taken from the Prometheus.io docs.
Prometheus
47,473,363
13
Why is Java Vector considered a legacy class, obsolete or deprecated? Isn't its use valid when working with concurrency? And if I don't want to manually synchronize objects and just want to use a thread-safe collection without needing to make fresh copies of the underlying array (as CopyOnWriteArrayList does), then is it fine to use Vector? What about Stack, which is a subclass of Vector, what should I use instead of it?
Vector synchronizes on each individual operation. That's almost never what you want to do. Generally you want to synchronize a whole sequence of operations. Synchronizing individual operations is both less safe (if you iterate over a Vector, for instance, you still need to take out a lock to avoid anyone else changing the collection at the same time, which would cause a ConcurrentModificationException in the iterating thread) but also slower (why take out a lock repeatedly when once will be enough)? Of course, it also has the overhead of locking even when you don't need to. Basically, it's a very flawed approach to synchronization in most situations. As Mr Brian Henk pointed out, you can decorate a collection using the calls such as Collections.synchronizedList - the fact that Vector combines both the "resized array" collection implementation with the "synchronize every operation" bit is another example of poor design; the decoration approach gives cleaner separation of concerns. As for a Stack equivalent - I'd look at Deque/ArrayDeque to start with.
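For example, a sketch of the decoration approach mentioned above, including the manual lock you still need while iterating:

List<String> list = Collections.synchronizedList(new ArrayList<>());
list.add("a");

// Individual operations are synchronized, but iteration is a whole
// sequence of operations, so it must be locked as a unit:
synchronized (list) {
    for (String s : list) {
        System.out.println(s);
    }
}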
Vector
1,386,275
747
I can create an array and initialize it like this: int a[] = {10, 20, 30}; How do I create a std::vector and initialize it similarly elegant? The best way I know is: std::vector<int> ints; ints.push_back(10); ints.push_back(20); ints.push_back(30); Is there a better way?
If your compiler supports C++11, you can simply do: std::vector<int> v = {1, 2, 3, 4}; This is available in GCC as of version 4.4. Unfortunately, VC++ 2010 seems to be lagging behind in this respect. Alternatively, the Boost.Assign library uses non-macro magic to allow the following: #include <boost/assign/list_of.hpp> ... std::vector<int> v = boost::assign::list_of(1)(2)(3)(4); Or: #include <boost/assign/std/vector.hpp> using namespace boost::assign; ... std::vector<int> v; v += 1, 2, 3, 4; But keep in mind that this has some overhead (basically, list_of constructs a std::deque under the hood) so for performance-critical code you'd be better off doing as Yacoby says.
Vector
2,236,197
723
How to check if a vector contains a given value?
Both the match() (returns the first appearance) and %in% (returns a Boolean) functions are designed for this. v <- c('a','b','c','e') 'b' %in% v ## returns TRUE match('b',v) ## returns the first location of 'b', in this case: 2
Vector
1,169,248
638
I have a vector of numbers: numbers <- c(4,23,4,23,5,43,54,56,657,67,67,435, 453,435,324,34,456,56,567,65,34,435) How can I have R count the number of times a value x appears in the vector?
You can just use table(): > a <- table(numbers) > a numbers 4 5 23 34 43 54 56 65 67 324 435 453 456 567 657 2 1 2 2 1 1 2 1 2 1 3 1 1 1 1 Then you can subset it: > a[names(a)==435] 435 3 Or convert it into a data.frame if you're more comfortable working with that: > as.data.frame(table(numbers)) numbers Freq 1 4 2 2 5 1 3 23 2 4 34 2 ...
Vector
1,923,273
490
How do I print out the contents of a std::vector to the screen? A solution that implements the following operator<< would be nice as well: template<container C, class T, String delim = ", ", String open = "[", String close = "]"> std::ostream & operator<<(std::ostream & o, const C<T> & x) { // ... What can I write here? } Here is what I have so far, without a separate function: #include <iostream> #include <fstream> #include <string> #include <cmath> #include <vector> #include <sstream> #include <cstdio> using namespace std; int main() { ifstream file("maze.txt"); if (file) { vector<char> vec(istreambuf_iterator<char>(file), (istreambuf_iterator<char>())); vector<char> path; int x = 17; char entrance = vec.at(16); char firstsquare = vec.at(x); if (entrance == 'S') { path.push_back(entrance); } for (x = 17; isalpha(firstsquare); x++) { path.push_back(firstsquare); } for (int i = 0; i < path.size(); i++) { cout << path[i] << " "; } cout << endl; return 0; } }
If you have a C++11 compiler, I would suggest using a range-based for-loop (see below); or else use an iterator. But you have several options, all of which I will explain in what follows. Range-based for-loop (C++11) In C++11 (and later) you can use the new range-based for-loop, which looks like this: std::vector<char> path; // ... for (char i: path) std::cout << i << ' '; The type char in the for-loop statement should be the type of the elements of the vector path and not an integer indexing type. In other words, since path is of type std::vector<char>, the type that should appear in the range-based for-loop is char. However, you will likely often see the explicit type replaced with the auto placeholder type: for (auto i: path) std::cout << i << ' '; Regardless of whether you use the explicit type or the auto keyword, the object i has a value that is a copy of the actual item in the path object. Thus, all changes to i in the loop are not preserved in path itself: std::vector<char> path{'a', 'b', 'c'}; for (auto i: path) { i = '_'; // 'i' is a copy of the element in 'path', so although // we can change 'i' here perfectly fine, the elements // of 'path' have not changed std::cout << i << ' '; // will print: "_ _ _" } for (auto i: path) { std::cout << i << ' '; // will print: "a b c" } If you would like to proscribe being able to change this copied value of i in the for-loop as well, you can force the type of i to be const char like this: for (const auto i: path) { i = '_'; // this will now produce a compiler error std::cout << i << ' '; } If you would like to modify the items in path so that those changes persist in path outside of the for-loop, then you can use a reference like so: for (auto& i: path) { i = '_'; // changes to 'i' will now also change the // element in 'path' itself to that value std::cout << i << ' '; } and even if you don't want to modify path, if the copying of objects is expensive you should use a const reference instead of copying by value: for (const auto& i: path) std::cout << i << ' '; Iterators Before C++11 the canonical solution would have been to use an iterator, and that is still perfectly acceptable. They are used as follows: std::vector<char> path; // ... for (std::vector<char>::const_iterator i = path.begin(); i != path.end(); ++i) std::cout << *i << ' '; If you want to modify the vector's contents in the for-loop, then use iterator rather than const_iterator. Supplement: typedef / type alias (C++11) / auto (C++11) This is not another solution, but a supplement to the above iterator solution. If you are using the C++11 standard (or later), then you can use the auto keyword to help the readability: for (auto i = path.begin(); i != path.end(); ++i) std::cout << *i << ' '; Here the type of i will be non-const (i.e., the compiler will use std::vector<char>::iterator as the type of i). This is because we called the begin method, so the compiler deduced the type for i from that. If we call the cbegin method instead ("c" for const), then i will be a std::vector<char>::const_iterator: for (auto i = path.cbegin(); i != path.cend(); ++i) { *i = '_'; // will produce a compiler error std::cout << *i << ' '; } If you're not comfortable with the compiler deducing types, then in C++11 you can use a type alias to avoid having to type the vector out all the time (a good habit to get into): using Path = std::vector<char>; // C++11 onwards only Path path; // 'Path' is an alias for std::vector<char> // ... 
for (Path::const_iterator i = path.begin(); i != path.end(); ++i) std::cout << *i << ' '; If you do not have access to a C++11 compiler (or don't like the type alias syntax for whatever reason), then you can use the more traditional typedef: typedef std::vector<char> Path; // 'Path' now a synonym for std::vector<char> Path path; // ... for (Path::const_iterator i = path.begin(); i != path.end(); ++i) std::cout << *i << ' '; Side note: At this point, you may or may not have come across iterators before, and you may or may not have heard that iterators are what you are "supposed" to use, and may be wondering why. The answer is not easy to appreciate, but, in brief, the idea is that iterators are an abstraction that shield you from the details of the operation. It is convenient to have an object (the iterator) that does the operation you want (like sequential access) rather than you writing the details yourself (the "details" being the code that does the actual accessing of the elements of the vector). You should notice that in the for-loop you are only ever asking the iterator to return you a value (*i, where i is the iterator) -- you are never interacting with path directly itself. The logic goes like this: you create an iterator and give it the object you want to loop over (iterator i = path.begin()), and then all you do is ask the iterator to get the next value for you (*i); you never had to worry exactly how the iterator did that -- that's its business, not yours. OK, but what's the point? Well, imagine if getting a value wasn't simple. What if it involves a bit of work? You don't need to worry, because the iterator has handled that for you -- it sorts out the details, all you need to do is ask it for a value. Additionally, what if you change the container from std::vector to something else? In theory, your code doesn't change even if the details of how accessing elements in the new container does: remember, the iterator sorts all the details out for you behind the scenes, so you don't need to change your code at all -- you just ask the iterator for the next value in the container, same as before. So, whilst this may seem like confusing overkill for looping through a vector, there are good reasons behind the concept of iterators and so you might as well get used to using them. Indexing You can also use a integer type to index through the elements of the vector in the for-loop explicitly: for (int i=0; i<path.size(); ++i) std::cout << path[i] << ' '; If you are going to do this, it's better to use the container's member types, if they are available and appropriate. std::vector has a member type called size_type for this job: it is the type returned by the size method. typedef std::vector<char> Path; // 'Path' now a synonym for std::vector<char> for (Path::size_type i=0; i<path.size(); ++i) std::cout << path[i] << ' '; Why not use this in preference to the iterator solution? For simple cases, you can do that, but using an iterator brings several advantages, which I have briefly outlined above. As such, my advice would be to avoid this method unless you have good reasons for it. std::copy (C++11) See Joshua's answer. You can use the STL algorithm std::copy to copy the vector contents onto the output stream. I don't have anything to add, except to say that I don't use this method; but there's no good reason for that besides habit. 
std::ranges::copy (C++20) For completeness, C++20 introduced ranges, which can act on the whole range of a std::vector, so no need for begin and end: #include <iterator> // for std::ostream_iterator #include <algorithm> // for std::ranges::copy depending on lib support std::vector<char> path; // ... std::ranges::copy(path, std::ostream_iterator<char>(std::cout, " ")); Unless you have a recent compiler (on GCC apparently at least version 10.1), likely you will not have ranges support even if you might have some C++20 features available. Overload std::ostream::operator<< See also Chris's answer below. This is more a complement to the other answers since you will still need to implement one of the solutions above in the overloading, but the benefit is much cleaner code. This is how you could use the std::ranges::copy solution above: #include <iostream> #include <vector> #include <iterator> // for std::ostream_iterator #include <algorithm> // for std::ranges::copy depending on lib support using Path = std::vector<char>; // type alias for std::vector<char> std::ostream& operator<< (std::ostream& out, const Path& v) { if ( !v.empty() ) { out << '['; std::ranges::copy(v, std::ostream_iterator<char>(out, ", ")); out << "\b\b]"; // use two ANSI backspace characters '\b' to overwrite final ", " } return out; } int main() { Path path{'/', 'f', 'o', 'o'}; // will output: "path: [/, f, o, o]" std::cout << "path: " << path << std::endl; return 0; } Now you can pass your Path objects to your output stream just like fundamental types. Using any of the other solutions above should also be equally straightforward. Conclusion Any of the solutions presented here will work. It's up to you (and context or your coding standards) on which one is the "best". Anything more detailed than this is probably best left for another question where the pros/cons can be properly evaluated, but as always user preference will always play a part: none of the solutions presented are objectively wrong, but some will look nicer to each coder. Addendum This is an expanded solution of an earlier one I posted. Since that post kept getting attention, I decided to expand on it and refer to the other excellent solutions posted here, at least those that I have personally used in the past at least once. I would, however, encourage the reader to look at the answers below because there are probably good suggestions that I have forgotten, or do not know, about.
Vector
10,750,057
426
What are the differences between the two data structures ArrayList and Vector, and where should you use each of them?
Differences

- Vectors are synchronized; ArrayLists are not.
- Their data growth methods differ (see below).

Use ArrayLists if there is no specific requirement to use Vectors.

Synchronization

If multiple threads access an ArrayList concurrently then we must externally synchronize the block of code which modifies the list either structurally or simply modifies an element. Structural modification means addition or deletion of element(s) from the list. Setting the value of an existing element is not a structural modification.

Collections.synchronizedList is normally used at the time of creation of the list to avoid any accidental unsynchronized access to the list.

Data growth

Internally, both the ArrayList and Vector hold onto their contents using an Array. When an element is inserted into an ArrayList or a Vector, the object will need to expand its internal array if it runs out of room. A Vector defaults to doubling the size of its array, while the ArrayList increases its array size by 50 percent.
Vector
2,986,296
418
Suppose I have a std::vector (let's call it myVec) of size N. What's the simplest way to construct a new vector consisting of a copy of elements X through Y, where 0 <= X <= Y <= N-1? For example, myVec [100000] through myVec [100999] in a vector of size 150000. If this cannot be done efficiently with a vector, is there another STL datatype that I should use instead?
vector<T>::const_iterator first = myVec.begin() + 100000; vector<T>::const_iterator last = myVec.begin() + 101000; vector<T> newVec(first, last); It's an O(N) operation to construct the new vector, but there isn't really a better way.
Vector
421,573
404
Should I use std::sort(numbers.begin(), numbers.end(), std::greater<int>()); or std::sort(numbers.rbegin(), numbers.rend()); // note: reverse iterators to sort a vector in descending order? Are there any benefits or drawbacks with one approach or the other?
With c++14 you can do this: std::sort(numbers.begin(), numbers.end(), std::greater<>());
Vector
9,025,084
404
What are the difference between a std::vector and an std::array in C++? When should one be preferred over another? What are the pros and cons of each? All my textbook does is list how they are the same.
std::vector is a template class that encapsulates a dynamic array¹, stored in the heap, that grows and shrinks automatically if elements are added or removed. It provides all the hooks (begin(), end(), iterators, etc.) that make it work fine with the rest of the STL. It also has several useful methods that let you perform operations that on a normal array would be cumbersome, like e.g. inserting elements in the middle of a vector (it handles all the work of moving the following elements behind the scenes).

Since it stores the elements in memory allocated on the heap, it has some overhead with respect to static arrays.

std::array is a template class that encapsulates a statically-sized array, stored inside the object itself, which means that, if you instantiate the class on the stack, the array itself will be on the stack. Its size has to be known at compile time (it's passed as a template parameter), and it cannot grow or shrink.

It's more limited than std::vector, but it's often more efficient, especially for small sizes, because in practice it's mostly a lightweight wrapper around a C-style array. However, it's more secure, since the implicit conversion to pointer is disabled, and it provides much of the STL-related functionality of std::vector and of the other containers, so you can use it easily with STL algorithms & co. Anyhow, for the very limitation of fixed size it's much less flexible than std::vector.

For an introduction to std::array, have a look at this article; for a quick introduction to std::vector and to the operations that are possible on it, you may want to look at its documentation.

¹ Actually, I think that in the standard they are described in terms of maximum complexity of the different operations (e.g. random access in constant time, iteration over all the elements in linear time, add and removal of elements at the end in constant amortized time, etc.), but AFAIK there's no other method of fulfilling such requirements other than using a dynamic array. As stated by @Lucretiel, the standard actually requires that the elements are stored contiguously, so it is a dynamic array, stored where the associated allocator puts it.
Vector
4,424,579
396
I need to take a C++ vector with potentially a lot of elements, erase duplicates, and sort it. I currently have the below code, but it doesn't work. vec.erase( std::unique(vec.begin(), vec.end()), vec.end()); std::sort(vec.begin(), vec.end()); How can I correctly do this? Additionally, is it faster to erase the duplicates first (similar to coded above) or perform the sort first? If I do perform the sort first, is it guaranteed to remain sorted after std::unique is executed? Or is there another (perhaps more efficient) way to do all this?
I agree with R. Pate and Todd Gardner; a std::set might be a good idea here. Even if you're stuck using vectors, if you have enough duplicates, you might be better off creating a set to do the dirty work. Let's compare three approaches: Just using vector, sort + unique sort( vec.begin(), vec.end() ); vec.erase( unique( vec.begin(), vec.end() ), vec.end() ); Convert to set (manually) set<int> s; unsigned size = vec.size(); for( unsigned i = 0; i < size; ++i ) s.insert( vec[i] ); vec.assign( s.begin(), s.end() ); Convert to set (using a constructor) set<int> s( vec.begin(), vec.end() ); vec.assign( s.begin(), s.end() ); Here's how these perform as the number of duplicates changes: Summary: when the number of duplicates is large enough, it's actually faster to convert to a set and then dump the data back into a vector. And for some reason, doing the set conversion manually seems to be faster than using the set constructor -- at least on the toy random data that I used.
Vector
1,041,620
380
Is there something in <algorithm> which allows you to check if a std:: container contains something? Or, a way to make one, for example: if(a.x == b.x && a.y == b.y) return true; return false; Can this only be done with std::map since it uses keys? Thanks
Checking if v contains the element x: #include <algorithm> if(std::find(v.begin(), v.end(), x) != v.end()) { /* v contains x */ } else { /* v does not contain x */ } Checking if v contains elements (is non-empty): if(!v.empty()){ /* v is non-empty */ } else { /* v is empty */ }
Vector
3,450,860
375
What are the good ways of finding the sum of all the elements in a std::vector? Suppose I have a vector std::vector<int> vector with a few elements in it. Now I want to find the sum of all the elements. What are the different ways for the same?
Actually there are quite a few methods.

int sum_of_elems = 0;

C++03

Classic for loop:

for(std::vector<int>::iterator it = vector.begin(); it != vector.end(); ++it)
    sum_of_elems += *it;

Using a standard algorithm:

#include <numeric>

sum_of_elems = std::accumulate(vector.begin(), vector.end(), 0);

Important Note: The last argument's type is used not just for the initial value, but for the type of the result as well. If you put an int there, it will accumulate ints even if the vector has float. If you are summing floating-point numbers, change 0 to 0.0 or 0.0f (thanks to nneonneo). See also the C++11 solution below.

C++11 and higher

Using std::accumulate while automatically keeping track of the vector type, even in case of future changes:

#include <numeric>

sum_of_elems = std::accumulate(vector.begin(), vector.end(),
                               decltype(vector)::value_type(0));

Using std::for_each:

std::for_each(vector.begin(), vector.end(), [&] (int n) {
    sum_of_elems += n;
});

Using a range-based for loop (thanks to Roger Pate):

for (auto& n : vector)
    sum_of_elems += n;

C++17 and above

Using std::reduce, which also takes care of the result type, e.g. if you have std::vector<int>, you get int as result. If you have std::vector<float>, you get float. Or if you have std::vector<std::string>, you get std::string (all strings concatenated). Interesting, isn't it?

#include <numeric>

auto result = std::reduce(v.begin(), v.end());

There are other overloads of this function which you can even run in parallel, in case you have a large collection and want the result quickly.
Vector
3,221,812
365
vector<int> myVector; and lets say the values in the vector are this (in this order): 5 9 2 8 0 7 If I wanted to erase the element that contains the value of "8", I think I would do this: myVector.erase(myVector.begin()+4); Because that would erase the 4th element. But is there any way to erase an element based off of the value "8"? Like: myVector.eraseElementWhoseValueIs(8); Or do I simply just need to iterate through all the vector elements and test their values?
How about std::remove() instead: #include <algorithm> ... vec.erase(std::remove(vec.begin(), vec.end(), 8), vec.end()); This combination is also known as the erase-remove idiom.
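If you can use C++20, the same erase-remove combination is wrapped in a single free function; this sketch assumes a C++20 standard library:

#include <cstddef>
#include <vector>

std::vector<int> vec{5, 9, 2, 8, 0, 7};
// C++20: std::erase for std::vector is declared in <vector>.
// It removes every element equal to 8 and returns how many were removed.
std::size_t removed = std::erase(vec, 8);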
Vector
3,385,229
356
I noticed in Effective STL that vector is the type of sequence that should be used by default. What does it mean? It seems that, efficiency aside, vector can do anything. Could anybody offer me a scenario where vector is not a feasible option but list must be used?
std::vector:
- Contiguous memory.
- Pre-allocates space for future elements, so extra space required beyond what's necessary for the elements themselves.
- Each element only requires the space for the element type itself (no extra pointers).
- Can re-allocate memory for the entire vector any time that you add an element.
- Insertions at the end are constant, amortized time, but insertions elsewhere are a costly O(n).
- Erasures at the end of the vector are constant time, but for the rest it's O(n).
- You can randomly access its elements.
- Iterators are invalidated if you add or remove elements to or from the vector.
- You can easily get at the underlying array if you need an array of the elements.

std::list:
- Non-contiguous memory.
- No pre-allocated memory. The memory overhead for the list itself is constant.
- Each element requires extra space for the node which holds the element, including pointers to the next and previous elements in the list.
- Never has to re-allocate memory for the whole list just because you add an element.
- Insertions and erasures are cheap no matter where in the list they occur.
- It's cheap to combine lists with splicing.
- You cannot randomly access elements, so getting at a particular element in the list can be expensive.
- Iterators remain valid even when you add or remove elements from the list.
- If you need an array of the elements, you'll have to create a new one and add them all to it, since there is no underlying array.

In general, use vector when you don't care what type of sequential container that you're using, but if you're doing many insertions or erasures to and from anywhere in the container other than the end, you're going to want to use list. Or if you need random access, then you're going to want vector, not list. Other than that, there are naturally instances where you're going to need one or the other based on your application, but in general, those are good guidelines.
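To illustrate the splicing point concretely, a minimal sketch; combining two lists relinks nodes in constant time, something a vector cannot do:

#include <list>

int main() {
    std::list<int> a{1, 2, 3};
    std::list<int> b{4, 5, 6};
    // O(1): the nodes of b are relinked onto the end of a; nothing is
    // copied, and b is left empty afterwards.
    a.splice(a.end(), b);
    return 0;
}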
Vector
2,209,224
314
When I multiply two numpy arrays of sizes (n x n)*(n x 1), I get a matrix of size (n x n). Following normal matrix multiplication rules, an (n x 1) vector is expected, but I simply cannot find any information about how this is done in Python's Numpy module. The thing is that I don't want to implement it manually to preserve the speed of the program. Example code is shown below: a = np.array([[5, 1, 3], [1, 1, 1], [1, 2, 1]]) b = np.array([1, 2, 3]) print a*b >> [[5 2 9] [1 2 3] [1 4 3]] What I want is: print a*b >> [16 6 8]
Simplest solution

Use numpy.dot or a.dot(b). See the documentation here.

>>> a = np.array([[ 5, 1 ,3], [ 1, 1 ,1], [ 1, 2 ,1]])
>>> b = np.array([1, 2, 3])
>>> print a.dot(b)
array([16, 6, 8])

This occurs because numpy arrays are not matrices, and the standard operations *, +, -, / work element-wise on arrays.

Note that while you can use numpy.matrix (as of early 2021), where * will be treated like standard matrix multiplication, numpy.matrix is deprecated and may be removed in future releases. See the note in its documentation (reproduced below):

It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future.

Thanks @HopeKing.

Other Solutions

Also know there are other options:

As noted below, if using python3.5+ and numpy v1.10+, the @ operator works as you'd expect:

>>> print(a @ b)
array([16, 6, 8])

If you want overkill, you can use numpy.einsum. The documentation will give you a flavor for how it works, but honestly, I didn't fully understand how to use it until reading this answer and just playing around with it on my own.

>>> np.einsum('ji,i->j', a, b)
array([16, 6, 8])

As of mid 2016 (numpy 1.10.1), you can try the experimental numpy.matmul, which works like numpy.dot with two major exceptions: no scalar multiplication but it works with stacks of matrices.

>>> np.matmul(a, b)
array([16, 6, 8])

numpy.inner functions the same way as numpy.dot for matrix-vector multiplication but behaves differently for matrix-matrix and tensor multiplication (see Wikipedia regarding the differences between the inner product and dot product in general or see this SO answer regarding numpy's implementations).

>>> np.inner(a, b)
array([16, 6, 8])

# Beware using for matrix-matrix multiplication though!
>>> b = a.T
>>> np.dot(a, b)
array([[35, 9, 10],
       [ 9, 3, 4],
       [10, 4, 6]])
>>> np.inner(a, b)
array([[29, 12, 19],
       [ 7, 4, 5],
       [ 8, 5, 6]])

If you have multiple 2D arrays to dot together, you may consider the np.linalg.multi_dot function, which simplifies the syntax of many nested np.dots. Note that this only works with 2D arrays (i.e. not for matrix-vector multiplication).

>>> np.dot(np.dot(a, a.T), a).dot(a.T)
array([[1406, 382, 446],
       [ 382, 106, 126],
       [ 446, 126, 152]])
>>> np.linalg.multi_dot((a, a.T, a, a.T))
array([[1406, 382, 446],
       [ 382, 106, 126],
       [ 446, 126, 152]])

Rarer options for edge cases

If you have tensors (arrays of dimension greater than or equal to one), you can use numpy.tensordot with the optional argument axes=1:

>>> np.tensordot(a, b, axes=1)
array([16, 6, 8])

Don't use numpy.vdot if you have a matrix of complex numbers, as the matrix will be flattened to a 1D array, then it will try to find the complex conjugate dot product between your flattened matrix and vector (which will fail due to a size mismatch n*m vs n).
Vector
21,562,986
290
In our C++ course they suggest not to use C++ arrays on new projects anymore. As far as I know Stroustrup himself suggests not to use arrays. But are there significant performance differences?
Using C++ arrays with new (that is, using dynamic arrays) should be avoided. There is the problem that you have to keep track of the size, and you need to delete them manually and do all sorts of housekeeping.

Using arrays on the stack is also discouraged because you don't have range checking, and passing the array around will lose any information about its size (array to pointer conversion). You should use std::array in that case, which wraps a C++ array in a small class and provides a size function and iterators to iterate over it.

Now, std::vector vs. native C++ arrays (taken from the internet):

// Comparison of assembly code generated for basic indexing, dereferencing,
// and increment operations on vectors and arrays/pointers.

// Assembly code was generated by gcc 4.1.0 invoked with g++ -O3 -S on a
// x86_64-suse-linux machine.

#include <vector>

struct S
{
  int padding;
  std::vector<int> v;
  int * p;
  std::vector<int>::iterator i;
};

int pointer_index (S & s) { return s.p[3]; }
  // movq    32(%rdi), %rax
  // movl    12(%rax), %eax
  // ret

int vector_index (S & s) { return s.v[3]; }
  // movq    8(%rdi), %rax
  // movl    12(%rax), %eax
  // ret

// Conclusion: Indexing a vector is the same damn thing as indexing a pointer.

int pointer_deref (S & s) { return *s.p; }
  // movq    32(%rdi), %rax
  // movl    (%rax), %eax
  // ret

int iterator_deref (S & s) { return *s.i; }
  // movq    40(%rdi), %rax
  // movl    (%rax), %eax
  // ret

// Conclusion: Dereferencing a vector iterator is the same damn thing
// as dereferencing a pointer.

void pointer_increment (S & s) { ++s.p; }
  // addq    $4, 32(%rdi)
  // ret

void iterator_increment (S & s) { ++s.i; }
  // addq    $4, 40(%rdi)
  // ret

// Conclusion: Incrementing a vector iterator is the same damn thing as
// incrementing a pointer.

Note: If you allocate arrays with new and allocate non-class objects (like plain int) or classes without a user defined constructor and you don't want to have your elements initialized initially, using new-allocated arrays can have performance advantages because std::vector initializes all elements to default values (0 for int, for example) on construction (credits to @bernie for reminding me).
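A minimal sketch of the std::array alternative mentioned above (C++11); unlike a raw stack array, the size travels with the object:

#include <array>
#include <iostream>

int main() {
    std::array<int, 4> a{1, 2, 3, 4};
    // size() is part of the type, and the iterators work with <algorithm>:
    std::cout << a.size() << '\n';
    for (int x : a)
        std::cout << x << ' ';
    return 0;
}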
Vector
381,621
257
I'm using multitreading and want to merge the results. For example: std::vector<int> A; std::vector<int> B; std::vector<int> AB; I want AB to have to contents of A and the contents of B in that order. What's the most efficient way of doing something like this?
AB.reserve( A.size() + B.size() ); // preallocate memory AB.insert( AB.end(), A.begin(), A.end() ); AB.insert( AB.end(), B.begin(), B.end() );
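If A and B are no longer needed afterwards and the element type is expensive to copy, a hedged C++11 variant moves the elements instead of copying them (for int this is identical to the copy; it pays off for strings, nested vectors, and the like):

#include <iterator>

AB.reserve(A.size() + B.size());
AB.insert(AB.end(), std::make_move_iterator(A.begin()), std::make_move_iterator(A.end()));
AB.insert(AB.end(), std::make_move_iterator(B.begin()), std::make_move_iterator(B.end()));
// A and B keep their old sizes, but their elements are left in a moved-from state.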
Vector
3,177,241
255
I've always thought it's the general wisdom that std::vector is "implemented as an array," blah blah blah. Today I went down and tested it, and it seems to be not so: Here's some test results: UseArray completed in 2.619 seconds UseVector completed in 9.284 seconds UseVectorPushBack completed in 14.669 seconds The whole thing completed in 26.591 seconds That's about 3 - 4 times slower! Doesn't really justify for the "vector may be slower for a few nanosecs" comments. And the code I used: #include <cstdlib> #include <vector> #include <iostream> #include <string> #include <boost/date_time/posix_time/ptime.hpp> #include <boost/date_time/microsec_time_clock.hpp> class TestTimer { public: TestTimer(const std::string & name) : name(name), start(boost::date_time::microsec_clock<boost::posix_time::ptime>::local_time()) { } ~TestTimer() { using namespace std; using namespace boost; posix_time::ptime now(date_time::microsec_clock<posix_time::ptime>::local_time()); posix_time::time_duration d = now - start; cout << name << " completed in " << d.total_milliseconds() / 1000.0 << " seconds" << endl; } private: std::string name; boost::posix_time::ptime start; }; struct Pixel { Pixel() { } Pixel(unsigned char r, unsigned char g, unsigned char b) : r(r), g(g), b(b) { } unsigned char r, g, b; }; void UseVector() { TestTimer t("UseVector"); for(int i = 0; i < 1000; ++i) { int dimension = 999; std::vector<Pixel> pixels; pixels.resize(dimension * dimension); for(int i = 0; i < dimension * dimension; ++i) { pixels[i].r = 255; pixels[i].g = 0; pixels[i].b = 0; } } } void UseVectorPushBack() { TestTimer t("UseVectorPushBack"); for(int i = 0; i < 1000; ++i) { int dimension = 999; std::vector<Pixel> pixels; pixels.reserve(dimension * dimension); for(int i = 0; i < dimension * dimension; ++i) pixels.push_back(Pixel(255, 0, 0)); } } void UseArray() { TestTimer t("UseArray"); for(int i = 0; i < 1000; ++i) { int dimension = 999; Pixel * pixels = (Pixel *)malloc(sizeof(Pixel) * dimension * dimension); for(int i = 0 ; i < dimension * dimension; ++i) { pixels[i].r = 255; pixels[i].g = 0; pixels[i].b = 0; } free(pixels); } } int main() { TestTimer t1("The whole thing"); UseArray(); UseVector(); UseVectorPushBack(); return 0; } Am I doing it wrong or something? Or have I just busted this performance myth? I'm using Release mode in Visual Studio 2005. In Visual C++, #define _SECURE_SCL 0 reduces UseVector by half (bringing it down to 4 seconds). This is really huge, IMO.
Using the following:

g++ -O3 Time.cpp -I <MyBoost>
./a.out
UseArray completed in 2.196 seconds
UseVector completed in 4.412 seconds
UseVectorPushBack completed in 8.017 seconds
The whole thing completed in 14.626 seconds

So array is twice as quick as vector.

But after looking at the code in more detail this is expected, as you run across the vector twice and the array only once. Note: when you resize() the vector you are not only allocating the memory but also running through the vector and calling the constructor on each member.

Rearranging the code slightly so that the vector only initializes each object once:

std::vector<Pixel> pixels(dimension * dimension, Pixel(255, 0, 0));

Now doing the same timing again:

g++ -O3 Time.cpp -I <MyBoost>
./a.out
UseVector completed in 2.216 seconds

The vector now performs only slightly worse than the array. IMO this difference is insignificant and could be caused by a whole bunch of things not associated with the test.

I would also take into account that you are not correctly initializing/destroying the Pixel object in the UseArray() method, as neither the constructor nor the destructor is called (this may not be an issue for this simple class, but anything slightly more complex (i.e. with pointers or members with pointers) will cause problems).
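If the vector already exists and you want the same single-pass fill, assign() gives the equivalent; a small sketch using the question's names:

std::vector<Pixel> pixels;
// Allocate and fill in one pass: every element is copy-constructed from
// the given value, instead of default-constructed and then reassigned.
pixels.assign(dimension * dimension, Pixel(255, 0, 0));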
Vector
3,664,272
250
So, I wrote a bunch of code that accesses elements in an stl vector by index[], but now I need to copy just a chunk of the vector. It looks like vector.insert(pos, first, last) is the function I want... except I only have first and last as ints. Is there any nice way I can get an iterator to these values?
Try this: vector<Type>::iterator nth = v.begin() + index;
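Since C++11 the same thing can be spelled with std::next, which reads a little more explicitly and also works generically across iterator categories:

#include <iterator>

auto nth = std::next(v.begin(), index); // O(1) for vector's random-access iterators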
Vector
671,423
249
I want to examine the contents of a std::vector in GDB, how do I do it? Let's say it's a std::vector<int> for the sake of simplicity.
With GCC 4.1.2, to print the whole of a std::vector<int> called myVector, do the following: print *(myVector._M_impl._M_start)@myVector.size() To print only the first N elements, do: print *(myVector._M_impl._M_start)@N Explanation This is probably heavily dependent on your compiler version, but for GCC 4.1.2, the pointer to the internal array is: myVector._M_impl._M_start And the GDB command to print N elements of an array starting at pointer P is: print P@N Or, in a short form (for a standard .gdbinit): p P@N
Vector
253,099
244
Ok, this is really difficult to confess, but I do have a strong temptation at the moment to inherit from std::vector. I need about 10 customized algorithms for vector and I want them to be directly members of the vector. But naturally I want also to have the rest of std::vector's interface. Well, my first idea, as a law-abiding citizen, was to have an std::vector member in MyVector class. But then I would have to manually reprovide all of the std::vector's interface. Too much to type. Next, I thought about private inheritance, so that instead of reproviding methods I would write a bunch of using std::vector::member's in the public section. This is tedious too actually. And here I am, I really do think that I can simply inherit publicly from std::vector, but provide a warning in the documentation that this class should not be used polymorphically. I think most developers are competent enough to understand that this shouldn't be used polymorphically anyway. Is my decision absolutely unjustifiable? If so, why? Can you provide an alternative which would have the additional members actually members but would not involve retyping all of vector's interface? I doubt it, but if you can, I'll just be happy. Also, apart from the fact that some idiot can write something like std::vector<int>* p = new MyVector is there any other realistic peril in using MyVector? By saying realistic I discard things like imagine a function which takes a pointer to vector ... Well, I've stated my case. I have sinned. Now it's up to you to forgive me or not :)
Actually, there is nothing wrong with public inheritance of std::vector. If you need this, just do that.

I would suggest doing that only if it is really necessary. Only if you can't do what you want with free functions (e.g. should keep some state).

The problem is that MyVector is a new entity. It means a new C++ developer should know what the hell it is before using it. What's the difference between std::vector and MyVector? Which one is better to use here and there? What if I need to move std::vector to MyVector? May I just use swap() or not?

Do not produce new entities just to make something look better. These entities (especially such common ones) aren't going to live in a vacuum. They will live in a mixed environment with constantly increasing entropy.
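A minimal sketch of the free-function alternative alluded to above; my_algorithm is a hypothetical name standing in for one of the ten customized algorithms:

#include <vector>

// Hypothetical: one of the custom algorithms, written as a free function.
// No new type is introduced, and callers keep the full std::vector interface.
template <typename T>
void my_algorithm(std::vector<T>& v) {
    // ... operate on v ...
}

// usage: my_algorithm(vec);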
Vector
4,353,203
244
I have found an interesting performance regression in a small C++ snippet, when I enable C++11: #include <vector> struct Item { int a; int b; }; int main() { const std::size_t num_items = 10000000; std::vector<Item> container; container.reserve(num_items); for (std::size_t i = 0; i < num_items; ++i) { container.push_back(Item()); } return 0; } With g++ (GCC) 4.8.2 20131219 (prerelease) and C++03 I get: milian:/tmp$ g++ -O3 main.cpp && perf stat -r 10 ./a.out Performance counter stats for './a.out' (10 runs): 35.206824 task-clock # 0.988 CPUs utilized ( +- 1.23% ) 4 context-switches # 0.116 K/sec ( +- 4.38% ) 0 cpu-migrations # 0.006 K/sec ( +- 66.67% ) 849 page-faults # 0.024 M/sec ( +- 6.02% ) 95,693,808 cycles # 2.718 GHz ( +- 1.14% ) [49.72%] <not supported> stalled-cycles-frontend <not supported> stalled-cycles-backend 95,282,359 instructions # 1.00 insns per cycle ( +- 0.65% ) [75.27%] 30,104,021 branches # 855.062 M/sec ( +- 0.87% ) [77.46%] 6,038 branch-misses # 0.02% of all branches ( +- 25.73% ) [75.53%] 0.035648729 seconds time elapsed ( +- 1.22% ) With C++11 enabled on the other hand, the performance degrades significantly: milian:/tmp$ g++ -std=c++11 -O3 main.cpp && perf stat -r 10 ./a.out Performance counter stats for './a.out' (10 runs): 86.485313 task-clock # 0.994 CPUs utilized ( +- 0.50% ) 9 context-switches # 0.104 K/sec ( +- 1.66% ) 2 cpu-migrations # 0.017 K/sec ( +- 26.76% ) 798 page-faults # 0.009 M/sec ( +- 8.54% ) 237,982,690 cycles # 2.752 GHz ( +- 0.41% ) [51.32%] <not supported> stalled-cycles-frontend <not supported> stalled-cycles-backend 135,730,319 instructions # 0.57 insns per cycle ( +- 0.32% ) [75.77%] 30,880,156 branches # 357.057 M/sec ( +- 0.25% ) [75.76%] 4,188 branch-misses # 0.01% of all branches ( +- 7.59% ) [74.08%] 0.087016724 seconds time elapsed ( +- 0.50% ) Can someone explain this? So far my experience was that the STL gets faster by enabling C++11, esp. thanks to move semantics. EDIT: As suggested, using container.emplace_back(); instead the performance gets on par with the C++03 version. How can the C++03 version achieve the same for push_back? milian:/tmp$ g++ -std=c++11 -O3 main.cpp && perf stat -r 10 ./a.out Performance counter stats for './a.out' (10 runs): 36.229348 task-clock # 0.988 CPUs utilized ( +- 0.81% ) 4 context-switches # 0.116 K/sec ( +- 3.17% ) 1 cpu-migrations # 0.017 K/sec ( +- 36.85% ) 798 page-faults # 0.022 M/sec ( +- 8.54% ) 94,488,818 cycles # 2.608 GHz ( +- 1.11% ) [50.44%] <not supported> stalled-cycles-frontend <not supported> stalled-cycles-backend 94,851,411 instructions # 1.00 insns per cycle ( +- 0.98% ) [75.22%] 30,468,562 branches # 840.991 M/sec ( +- 1.07% ) [76.71%] 2,723 branch-misses # 0.01% of all branches ( +- 9.84% ) [74.81%] 0.036678068 seconds time elapsed ( +- 0.80% )
I can reproduce your results on my machine with those options you write in your post.

However, if I also enable link time optimization (I also pass the -flto flag to gcc 4.7.2), the results are identical:

(I am compiling your original code, with container.push_back(Item());)

$ g++ -std=c++11 -O3 -flto regr.cpp && perf stat -r 10 ./a.out

Performance counter stats for './a.out' (10 runs):

35.426793 task-clock # 0.986 CPUs utilized ( +- 1.75% )
4 context-switches # 0.116 K/sec ( +- 5.69% )
0 CPU-migrations # 0.006 K/sec ( +- 66.67% )
19,801 page-faults # 0.559 M/sec
99,028,466 cycles # 2.795 GHz ( +- 1.89% ) [77.53%]
50,721,061 stalled-cycles-frontend # 51.22% frontend cycles idle ( +- 3.74% ) [79.47%]
25,585,331 stalled-cycles-backend # 25.84% backend cycles idle ( +- 4.90% ) [73.07%]
141,947,224 instructions # 1.43 insns per cycle # 0.36 stalled cycles per insn ( +- 0.52% ) [88.72%]
37,697,368 branches # 1064.092 M/sec ( +- 0.52% ) [88.75%]
26,700 branch-misses # 0.07% of all branches ( +- 3.91% ) [83.64%]

0.035943226 seconds time elapsed ( +- 1.79% )

$ g++ -std=c++98 -O3 -flto regr.cpp && perf stat -r 10 ./a.out

Performance counter stats for './a.out' (10 runs):

35.510495 task-clock # 0.988 CPUs utilized ( +- 2.54% )
4 context-switches # 0.101 K/sec ( +- 7.41% )
0 CPU-migrations # 0.003 K/sec ( +-100.00% )
19,801 page-faults # 0.558 M/sec ( +- 0.00% )
98,463,570 cycles # 2.773 GHz ( +- 1.09% ) [77.71%]
50,079,978 stalled-cycles-frontend # 50.86% frontend cycles idle ( +- 2.20% ) [79.41%]
26,270,699 stalled-cycles-backend # 26.68% backend cycles idle ( +- 8.91% ) [74.43%]
141,427,211 instructions # 1.44 insns per cycle # 0.35 stalled cycles per insn ( +- 0.23% ) [87.66%]
37,366,375 branches # 1052.263 M/sec ( +- 0.48% ) [88.61%]
26,621 branch-misses # 0.07% of all branches ( +- 5.28% ) [83.26%]

0.035953916 seconds time elapsed

As for the reasons, one needs to look at the generated assembly code (g++ -std=c++11 -O3 -S regr.cpp). In C++11 mode the generated code is significantly more cluttered than for C++98 mode and inlining the function

void std::vector<Item,std::allocator<Item>>::_M_emplace_back_aux<Item>(Item&&)

fails in C++11 mode with the default inline-limit.

This failed inline has a domino effect. Not because this function is being called (it is not even called!) but because we have to be prepared: If it is called, the function arguments (Item.a and Item.b) must already be at the right place. This leads to a pretty messy code.

Here is the relevant part of the generated code for the case where inlining succeeds:

.L42:
    testq   %rbx, %rbx  # container$D13376$_M_impl$_M_finish
    je  .L3 #,
    movl    $0, (%rbx)  #, container$D13376$_M_impl$_M_finish_136->a
    movl    $0, 4(%rbx) #, container$D13376$_M_impl$_M_finish_136->b
.L3:
    addq    $8, %rbx    #, container$D13376$_M_impl$_M_finish
    subq    $1, %rbp    #, ivtmp.106
    je  .L41    #,
.L14:
    cmpq    %rbx, %rdx  # container$D13376$_M_impl$_M_finish, container$D13376$_M_impl$_M_end_of_storage
    jne .L42    #,

This is a nice and compact for loop.
Now, let's compare this to that of the failed inline case:

.L49:
    testq   %rax, %rax  # D.15772
    je  .L26    #,
    movq    16(%rsp), %rdx  # D.13379, D.13379
    movq    %rdx, (%rax)    # D.13379, *D.15772_60
.L26:
    addq    $8, %rax    #, tmp75
    subq    $1, %rbx    #, ivtmp.117
    movq    %rax, 40(%rsp)  # tmp75, container.D.13376._M_impl._M_finish
    je  .L48    #,
.L28:
    movq    40(%rsp), %rax  # container.D.13376._M_impl._M_finish, D.15772
    cmpq    48(%rsp), %rax  # container.D.13376._M_impl._M_end_of_storage, D.15772
    movl    $0, 16(%rsp)    #, D.13379.a
    movl    $0, 20(%rsp)    #, D.13379.b
    jne .L49    #,
    leaq    16(%rsp), %rsi  #,
    leaq    32(%rsp), %rdi  #,
    call    _ZNSt6vectorI4ItemSaIS0_EE19_M_emplace_back_auxIIS0_EEEvDpOT_   #

This code is cluttered and there is a lot more going on in the loop than in the previous case. Before the function call (last line shown), the arguments must be placed appropriately:

    leaq    16(%rsp), %rsi  #,
    leaq    32(%rsp), %rdi  #,
    call    _ZNSt6vectorI4ItemSaIS0_EE19_M_emplace_back_auxIIS0_EEEvDpOT_   #

Even though this is never actually executed, the loop arranges things beforehand:

    movl    $0, 16(%rsp)    #, D.13379.a
    movl    $0, 20(%rsp)    #, D.13379.b

This leads to messy code. If there is no function call because inlining succeeds, we have only 2 move instructions in the loop and there is no messing around with the %rsp (stack pointer). However, if the inlining fails, we get 6 moves and we mess a lot with the %rsp.

Just to substantiate my theory (note the -finline-limit), both in C++11 mode:

$ g++ -std=c++11 -O3 -finline-limit=105 regr.cpp && perf stat -r 10 ./a.out

Performance counter stats for './a.out' (10 runs):

84.739057 task-clock # 0.993 CPUs utilized ( +- 1.34% )
8 context-switches # 0.096 K/sec ( +- 2.22% )
1 CPU-migrations # 0.009 K/sec ( +- 64.01% )
19,801 page-faults # 0.234 M/sec
266,809,312 cycles # 3.149 GHz ( +- 0.58% ) [81.20%]
206,804,948 stalled-cycles-frontend # 77.51% frontend cycles idle ( +- 0.91% ) [81.25%]
129,078,683 stalled-cycles-backend # 48.38% backend cycles idle ( +- 1.37% ) [69.49%]
183,130,306 instructions # 0.69 insns per cycle # 1.13 stalled cycles per insn ( +- 0.85% ) [85.35%]
38,759,720 branches # 457.401 M/sec ( +- 0.29% ) [85.43%]
24,527 branch-misses # 0.06% of all branches ( +- 2.66% ) [83.52%]

0.085359326 seconds time elapsed ( +- 1.31% )

$ g++ -std=c++11 -O3 -finline-limit=106 regr.cpp && perf stat -r 10 ./a.out

Performance counter stats for './a.out' (10 runs):

37.790325 task-clock # 0.990 CPUs utilized ( +- 2.06% )
4 context-switches # 0.098 K/sec ( +- 5.77% )
0 CPU-migrations # 0.011 K/sec ( +- 55.28% )
19,801 page-faults # 0.524 M/sec
104,699,973 cycles # 2.771 GHz ( +- 2.04% ) [78.91%]
58,023,151 stalled-cycles-frontend # 55.42% frontend cycles idle ( +- 4.03% ) [78.88%]
30,572,036 stalled-cycles-backend # 29.20% backend cycles idle ( +- 5.31% ) [71.40%]
140,669,773 instructions # 1.34 insns per cycle # 0.41 stalled cycles per insn ( +- 1.40% ) [88.14%]
38,117,067 branches # 1008.646 M/sec ( +- 0.65% ) [89.38%]
27,519 branch-misses # 0.07% of all branches ( +- 4.01% ) [86.16%]

0.038187580 seconds time elapsed ( +- 2.05% )

Indeed, if we ask the compiler to try just a little bit harder to inline that function, the difference in performance goes away.

So what is the takeaway from this story? That failed inlines can cost you a lot and you should make full use of the compiler capabilities: I can only recommend link time optimization. It gave a significant performance boost to my programs (up to 2.5x) and all I needed to do was pass the -flto flag. That's a pretty good deal! ;)

However, I do not recommend trashing your code with the inline keyword; let the compiler decide what to do. (The optimizer is allowed to treat the inline keyword as white space anyway.)

Great question, +1!
Vector
20,977,741
240
What is the cheapest way to initialize a std::vector from a C-style array? Example: In the following class, I have a vector, but due to outside restrictions, the data will be passed in as a C-style array:

class Foo {
    std::vector<double> w_;
public:
    void set_data(double* w, int len){
        // how to cheaply initialize the std::vector?
    }

Obviously, I can call w_.resize() and then loop over the elements, or call std::copy(). Are there any better methods?
Don't forget that you can treat pointers as iterators: w_.assign(w, w + len);
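If the data is already available when the object is built, the iterator-range constructor does the same job in the member initializer list; a hedged sketch (the constructor is hypothetical, added for illustration):

class Foo {
    std::vector<double> w_;
public:
    // Hypothetical constructor for illustration: pointers work as iterators here too.
    Foo(double* w, int len) : w_(w, w + len) {}

    void set_data(double* w, int len) { w_.assign(w, w + len); }
};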
Vector
2,434,196
233
Are all of the following statements true? vector<Type> vect; //allocates vect on stack and each of the Type (using std::allocator) also will be on the stack vector<Type> *vect = new vector<Type>; //allocates vect on heap and each of the Type will be allocated on stack vector<Type*> vect; //vect will be on stack and Type* will be on heap. How is the memory allocated internally for Type in a vector or any other STL container?
vector<Type> vect; will allocate the vector, i.e. the header info, on the stack, but the elements on the free store ("heap").

vector<Type> *vect = new vector<Type>; allocates everything on the free store (except the vect pointer itself, which is on the stack).

vector<Type*> vect; will allocate the vector on the stack and a bunch of pointers on the free store, but where these point is determined by how you use them (you could point element 0 to the free store and element 1 to the stack, say).
Vector
8,036,474
233
Is there a built-in vector function in C++ to reverse a vector in place? Or do you just have to do it manually?
There's a function std::reverse in the algorithm header for this purpose. #include <vector> #include <algorithm> int main() { std::vector<int> a; std::reverse(a.begin(), a.end()); return 0; }
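If you need a reversed copy rather than reversing in place, the container's reverse iterators do it in one line:

std::vector<int> reversed(a.rbegin(), a.rend()); // a itself is left unchanged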
Vector
8,877,448
226
Is it even possible to concatenate vectors in Rust? If so, is there an elegant way to do so? I have something like this: let mut a = vec![1, 2, 3]; let b = vec![4, 5, 6]; for val in &b { a.push(val); } Does anyone know of a better way?
The structure std::vec::Vec has method append(): fn append(&mut self, other: &mut Vec<T>) Moves all the elements of other into Self, leaving other empty. From your example, the following code will concatenate two vectors by mutating a and b: fn main() { let mut a = vec![1, 2, 3]; let mut b = vec![4, 5, 6]; a.append(&mut b); assert_eq!(a, [1, 2, 3, 4, 5, 6]); assert_eq!(b, []); } Alternatively, you can use Extend::extend() to append all elements of something that can be turned into an iterator (like Vec) to a given vector: let mut a = vec![1, 2, 3]; let b = vec![4, 5, 6]; a.extend(b); assert_eq!(a, [1, 2, 3, 4, 5, 6]); // b is moved and can't be used anymore Note that the vector b is moved instead of emptied. If your vectors contain elements that implement Copy, you can pass an immutable reference to one vector to extend() instead in order to avoid the move. In that case the vector b is not changed: let mut a = vec![1, 2, 3]; let b = vec![4, 5, 6]; a.extend(&b); assert_eq!(a, [1, 2, 3, 4, 5, 6]); assert_eq!(b, [4, 5, 6]);
Vector
40,792,801
226
I have a dataframe such as: a1 = c(1, 2, 3, 4, 5) a2 = c(6, 7, 8, 9, 10) a3 = c(11, 12, 13, 14, 15) aframe = data.frame(a1, a2, a3) I tried the following to convert one of the columns to a vector, but it doesn't work: avector <- as.vector(aframe['a2']) class(avector) [1] "data.frame" This is the only solution I could come up with, but I'm assuming there has to be a better way to do this: class(aframe['a2']) [1] "data.frame" avector = c() for(atmp in aframe['a2']) { avector <- atmp } class(avector) [1] "numeric" Note: My vocabulary above may be off, so please correct me if so. I'm still learning the world of R. Additionally, any explanation of what's going on here is appreciated (i.e. relating to Python or some other language would help!)
I'm going to attempt to explain this without making any mistakes, but I'm betting this will attract a clarification or two in the comments. A data frame is a list. When you subset a data frame using the name of a column and [, what you're getting is a sublist (or a sub data frame). If you want the actual atomic column, you could use [[, or somewhat confusingly (to me) you could do aframe[,2] which returns a vector, not a sublist. So try running this sequence and maybe things will be clearer: avector <- as.vector(aframe['a2']) class(avector) avector <- aframe[['a2']] class(avector) avector <- aframe[,2] class(avector)
Vector
7,070,173
223
It seems that Vector was late to the Scala collections party, and all the influential blog posts had already left. In Java ArrayList is the default collection - I might use LinkedList but only when I've thought through an algorithm and care enough to optimise. In Scala should I be using Vector as my default Seq, or trying to work out when List is actually more appropriate?
As a general rule, default to using Vector. It’s faster than List for almost everything and more memory-efficient for larger-than-trivial sized sequences. See this documentation of the relative performance of Vector compared to the other collections. There are some downsides to going with Vector. Specifically: Updates at the head are slower than List (though not by as much as you might think) Another downside before Scala 2.10 was that pattern matching support was better for List, but this was rectified in 2.10 with generalized +: and :+ extractors. There is also a more abstract, algebraic way of approaching this question: what sort of sequence do you conceptually have? Also, what are you conceptually doing with it? If I see a function that returns an Option[A], I know that function has some holes in its domain (and is thus partial). We can apply this same logic to collections. If I have a sequence of type List[A], I am effectively asserting two things. First, my algorithm (and data) is entirely stack-structured. Second, I am asserting that the only things I’m going to do with this collection are full, O(n) traversals. These two really go hand-in-hand. Conversely, if I have something of type Vector[A], the only thing I am asserting is that my data has a well defined order and a finite length. Thus, the assertions are weaker with Vector, and this leads to its greater flexibility.
Vector
6,928,327
218
I want to initialize a vector like we do in case of an array. Example int vv[2] = {12, 43}; But when I do it like this, vector<int> v(2) = {34, 23}; OR vector<int> v(2); v = {0, 9}; it gives an error: expected primary-expression before ‘{’ token AND error: expected ‘,’ or ‘;’ before ‘=’ token respectively.
With the new C++ standard (may need special flags to be enabled on your compiler) you can simply do: std::vector<int> v { 34,23 }; // or // std::vector<int> v = { 34,23 }; Or even: std::vector<int> v(2); v = { 34,23 }; On compilers that don't support this feature (initializer lists) yet you can emulate this with an array: int vv[2] = { 12,43 }; std::vector<int> v(&vv[0], &vv[0]+2); Or, for the case of assignment to an existing vector: int vv[2] = { 12,43 }; v.assign(&vv[0], &vv[0]+2); Like James Kanze suggested, it's more robust to have functions that give you the beginning and end of an array: template <typename T, size_t N> T* begin(T(&arr)[N]) { return &arr[0]; } template <typename T, size_t N> T* end(T(&arr)[N]) { return &arr[0]+N; } And then you can do this without having to repeat the size all over: int vv[] = { 12,43 }; std::vector<int> v(begin(vv), end(vv));
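Note that C++11 already provides those begin/end helpers as std::begin and std::end in <iterator>, so on a C++11 compiler you don't need to write them yourself:

#include <iterator>
#include <vector>

int vv[] = { 12, 43 };
std::vector<int> v(std::begin(vv), std::end(vv));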
Vector
8,906,545
217
Can anyone tell me how to find the common elements from multiple vectors? a <- c(1,3,5,7,9) b <- c(3,6,8,9,10) c <- c(2,3,4,5,7,9) I want to get the common elements from the above vectors (ex: 3 and 9)
There might be a cleverer way to go about this, but intersect(intersect(a,b),c) will do the job. EDIT: More cleverly, and more conveniently if you have a lot of arguments: Reduce(intersect, list(a,b,c))
Vector
3,695,677
216
Suppose I have a line segment going from (x1,y1) to (x2,y2). How do I calculate the normal vector perpendicular to the line? I can find lots of stuff about doing this for planes in 3D, but no 2D stuff. Please go easy on the maths (links to worked examples, diagrams or algorithms are welcome), I'm a programmer more than I'm a mathematician ;)
If we define dx = x2 - x1 and dy = y2 - y1, then the normals are (-dy, dx) and (dy, -dx). Note that no division is required, and so you're not risking dividing by zero.
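A minimal C++ sketch of the computation (the type and function names are illustrative); divide by the length only if you need a unit-length normal:

struct Vec2 { double x, y; };

// One of the two normals of the segment (x1,y1)-(x2,y2); the other is (dy, -dx).
Vec2 segment_normal(double x1, double y1, double x2, double y2) {
    double dx = x2 - x1;
    double dy = y2 - y1;
    return Vec2{-dy, dx}; // no division, so no risk of dividing by zero
}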
Vector
1,243,614
214
The question's pretty self-explanatory really. I know vaguely about vectors in maths, but I don't really see the link to C++ vectors.
It's called a vector because Alex Stepanov, the designer of the Standard Template Library, was looking for a name to distinguish it from built-in arrays. He admits now that he made a mistake, because mathematics already uses the term 'vector' for a fixed-length sequence of numbers. C++11 compounds this mistake by introducing a class 'array' that behaves similarly to a mathematical vector. Alex's lesson: be very careful every time you name something.
Vector
581,426
210
R offers max and min, but I do not see a really fast way to find another value in the order, apart from sorting the whole vector and then picking a value x from this vector. Is there a faster way to get the second highest value, for example?
Use the partial argument of sort(). For the second highest value: n <- length(x) sort(x,partial=n-1)[n-1]
Vector
2,453,326
197
I am running into issues trying to use large objects in R. For example:

> memory.limit(4000)
> a = matrix(NA, 1500000, 60)
> a = matrix(NA, 2500000, 60)
> a = matrix(NA, 3500000, 60)
Error: cannot allocate vector of size 801.1 Mb
> a = matrix(NA, 2500000, 60)
Error: cannot allocate vector of size 572.2 Mb # Can't go smaller anymore
> rm(list=ls(all=TRUE))
> a = matrix(NA, 3500000, 60) # Now it works
> b = matrix(NA, 3500000, 60)
Error: cannot allocate vector of size 801.1 Mb # But that is all there is room for

I understand that this is related to the difficulty of obtaining contiguous blocks of memory (from here):

Error messages beginning cannot allocate vector of size indicate a failure to obtain memory, either because the size exceeded the address-space limit for a process or, more likely, because the system was unable to provide the memory. Note that on a 32-bit build there may well be enough free memory available, but not a large enough contiguous block of address space into which to map it.

How can I get around this? My main difficulty is that I get to a certain point in my script and R can't allocate 200-300 Mb for an object... I can't really pre-allocate the block because I need the memory for other processing. This happens even when I diligently remove unneeded objects.

EDIT: Yes, sorry: Windows XP SP3, 4Gb RAM, R 2.12.0:

> sessionInfo()
R version 2.12.0 (2010-10-15)
Platform: i386-pc-mingw32/i386 (32-bit)

locale:
[1] LC_COLLATE=English_Caribbean.1252  LC_CTYPE=English_Caribbean.1252
[3] LC_MONETARY=English_Caribbean.1252 LC_NUMERIC=C
[5] LC_TIME=English_Caribbean.1252

attached base packages:
[1] stats graphics grDevices utils datasets methods base
Consider whether you really need all this data explicitly, or can the matrix be sparse? There is good support in R (see, e.g., the Matrix package) for sparse matrices.

Keep all other processes and objects in R to a minimum when you need to make objects of this size. Use gc() to clear now unused memory, or, better, only create the object you need in one session.

If the above cannot help, get a 64-bit machine with as much RAM as you can afford, and install 64-bit R.

If you cannot do that, there are many online services for remote computing.

If you cannot do that, the memory-mapping tools like package ff (or bigmemory as Sascha mentions) will help you build a new solution. In my limited experience ff is the more advanced package, but you should read the High Performance Computing topic on CRAN Task Views.
Vector
5,171,593
196
I'm trying to learn R and I can't figure out how to append to a list. If this were Python I would . . . #Python vector = [] values = ['a','b','c','d','e','f','g'] for i in range(0,len(values)): vector.append(values[i]) How do you do this in R? #R Programming > vector = c() > values = c('a','b','c','d','e','f','g') > for (i in 1:length(values)) + #append value[i] to empty vector
Appending to an object in a for loop causes the entire object to be copied on every iteration, which causes a lot of people to say "R is slow", or "R loops should be avoided". As BrodieG mentioned in the comments: it is much better to pre-allocate a vector of the desired length, then set the element values in the loop. Here are several ways to append values to a vector. All of them are discouraged. Appending to a vector in a loop # one way for (i in 1:length(values)) vector[i] <- values[i] # another way for (i in 1:length(values)) vector <- c(vector, values[i]) # yet another way?!? for (v in values) vector <- c(vector, v) # ... more ways help("append") would have answered your question and saved the time it took you to write this question (but would have caused you to develop bad habits). ;-) Note that vector <- c() isn't an empty vector; it's NULL. If you want an empty character vector, use vector <- character(). Pre-allocate the vector before looping If you absolutely must use a for loop, you should pre-allocate the entire vector before the loop. This will be much faster than appending for larger vectors. set.seed(21) values <- sample(letters, 1e4, TRUE) vector <- character(0) # slow system.time( for (i in 1:length(values)) vector[i] <- values[i] ) # user system elapsed # 0.340 0.000 0.343 vector <- character(length(values)) # fast(er) system.time( for (i in 1:length(values)) vector[i] <- values[i] ) # user system elapsed # 0.024 0.000 0.023
Vector
22,235,809
196
So, I have the following: std::vector< std::vector <int> > fog; and I am initializing it very naively like: for(int i=0; i<A_NUMBER; i++) { std::vector <int> fogRow; for(int j=0; j<OTHER_NUMBER; j++) { fogRow.push_back(0); } fog.push_back(fogRow); } And it feels very wrong... Is there another way of initializing a vector like this?
Use the std::vector::vector(count, value) constructor that accepts an initial size and a default value: std::vector<std::vector<int> > fog( ROW_COUNT, std::vector<int>(COLUMN_COUNT)); // Defaults to zero initial value If a value other than zero, say 4 for example, was required to be the default then: std::vector<std::vector<int> > fog( ROW_COUNT, std::vector<int>(COLUMN_COUNT, 4)); I should also mention uniform initialization was introduced in C++11, which permits the initialization of vector, and other containers, using {}: std::vector<std::vector<int> > fog { { 1, 1, 1 }, { 2, 2, 2 } };
Vector
17,663,186
194
I'm trying to use std::vector as a char array. My function takes in a void pointer: void process_data(const void *data); Before I simply just used this code: char something[] = "my data here"; process_data(something); Which worked as expected. But now I need the dynamicity of std::vector, so I tried this code instead: vector<char> something; *cut* process_data(something); The question is, how do I pass the char vector to my function so I can access the vector raw data (no matter which format it is – floats, etc.)? I tried this: process_data(&something); And this: process_data(&something.begin()); But it returned a pointer to gibberish data, and the latter gave warning: warning C4238: nonstandard extension used : class rvalue used as lvalue.
&something gives you the address of the std::vector object, not the address of the data it holds. &something.begin() gives you the address of the iterator returned by begin() (as the compiler warns, this is not technically allowed because something.begin() is an rvalue expression, so its address cannot be taken). Assuming the container has at least one element in it, you need to get the address of the initial element of the container, which you can get via &something[0] or &something.front() (the address of the element at index 0), or &*something.begin() (the address of the element pointed to by the iterator returned by begin()). In C++11, a new member function was added to std::vector: data(). This member function returns the address of the initial element in the container, just like &something.front(). The advantage of this member function is that it is okay to call it even if the container is empty.
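Putting that together for the question's function, a short sketch:

std::vector<char> something;
// ... fill something ...
process_data(something.data());   // C++11: fine even when the vector is empty
process_data(&something[0]);      // pre-C++11: only valid when non-empty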
Vector
6,485,496
186
I am trying to efficiently make a copy of a vector. I see two possible approaches: std::vector<int> copyVecFast1(const std::vector<int>& original) { std::vector<int> newVec; newVec.reserve(original.size()); std::copy(original.begin(), original.end(), std::back_inserter(newVec)); return newVec; } std::vector<int> copyVecFast2(std::vector<int>& original) { std::vector<int> newVec; newVec.swap(original); return newVec; } Which of these is preferred, and why? I am looking for the most efficient solution that will avoid unnecessary copying.
They aren't the same though, are they? One is a copy, the other is a swap. Hence the function names. My favourite is: a = b; Where a and b are vectors.
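To make the distinction concrete, a small sketch; the copy leaves the source intact, while the swap changes it:

std::vector<int> a;
std::vector<int> b{1, 2, 3};

a = b;        // copy: b still holds {1, 2, 3}
a.swap(b);    // constant time, but a and b exchange contents; the "source" is modified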
Vector
644,673
180
I read that Vectors are not seqs, but Lists are. I'm not sure what the rationale is for using one over the other. It seems that vectors are used the most, but is there a reason for that?
Once again, it seems I've answered my own question by getting impatient and asking it in #clojure on Freenode. Good thing answering your own questions is encouraged on Stackoverflow.com :D I had a quick discussion with Rich Hickey, and here is the gist of it. [12:21] <Raynes> Vectors aren't seqs, right? [12:21] <rhickey> Raynes: no, but they are sequential [12:21] <rhickey> ,(sequential? [1 2 3]) [12:21] <clojurebot> true [12:22] <Raynes> When would you want to use a list over a vector? [12:22] <rhickey> when generating code, when generating back-to-front [12:23] <rhickey> not too often in Clojure
Vector
1,147,975
169
Item 18 of Scott Meyers's book Effective STL: 50 Specific Ways to Improve Your Use of the Standard Template Library says to avoid vector <bool> as it's not an STL container and it doesn't really hold bools. The following code: vector <bool> v; bool *pb =&v[0]; will not compile, violating a requirement of STL containers. Error: cannot convert 'std::vector<bool>::reference* {aka std::_Bit_reference*}' to 'bool*' in initialization vector<T>::operator [] return type is supposed to be T&, but why is it a special case for vector<bool>? What does vector<bool> really consist of? The Item further says: deque<bool> v; // is a STL container and it really contains bools Can this be used as an alternative to vector<bool>? Can anyone please explain this?
For space-optimization reasons, the C++ standard (as far back as C++98) explicitly calls out vector<bool> as a special standard container where each bool uses only one bit of space rather than one byte as a normal bool would (implementing a kind of "dynamic bitset"). In exchange for this optimization it doesn't offer all the capabilities and interface of a normal standard container. In this case, since you can't take the address of a bit within a byte, things such as operator[] can't return a bool& but instead return a proxy object that allows to manipulate the particular bit in question. Since this proxy object is not a bool&, you can't assign its address to a bool* like you could with the result of such an operator call on a "normal" container. In turn this means that bool *pb =&v[0]; isn't valid code. On the other hand deque doesn't have any such specialization called out so each bool takes a byte and you can take the address of the value return from operator[]. Finally note that the MS standard library implementation is (arguably) suboptimal in that it uses a small chunk size for deques, which means that using deque as a substitute isn't always the right answer.
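A small sketch of the proxy behaviour described above:

#include <vector>

int main() {
    std::vector<bool> v{true, false};
    // bool* p = &v[0];     // does not compile: v[0] is a proxy, not a bool&
    auto ref = v[0];        // std::vector<bool>::reference, a proxy object
    ref = false;            // writes through to the underlying bit in v
    bool copy = v[1];       // converting the proxy to bool copies the value
    return 0;
}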
Vector
17,794,569
167
Is it possible to iterate a vector from the end to the beginning? for (vector<my_class>::iterator i = my_vector.end(); i != my_vector.begin(); /* ?! */ ) { } Or is that only possible with something like that: for (int i = my_vector.size() - 1; i >= 0; --i) { }
One way is: for (vector<my_class>::reverse_iterator riter = my_vector.rbegin(); riter != my_vector.rend(); ++riter) { // do stuff } rbegin()/rend() were especially designed for that purpose. (And yes, incrementing a reverse_iterator moves it backward.) Now, in theory, your method (using begin()/end() & --i) would work, std::vector's iterator being bidirectional, but remember, end() isn't the last element — it's one beyond the last element, so you'd have to decrement first, and you are done when you reach begin() — but you still have to do your processing. vector<my_class>::iterator iter = my_vector.end(); while (iter != my_vector.begin()) { --iter; /*do stuff */ }
Vector
3,610,933
164