Dataset columns: question (string, lengths 11–28.2k), answer (string, lengths 26–27.7k), tag (string, 130 classes), question_id (int64, 935–78.4M), score (int64, 10–5.49k)
Using logstash 2.3.4-1 on centos 7 with kafka-input plugin I sometimes get {:timestamp=>"2016-09-07T13:41:46.437000+0000", :message=>#0, :events_consumed=>822, :worker_count=>1, :inflight_count=>0, :worker_states=>[{:status=>"dead", :alive=>false, :index=>0, :inflight_count=>0}], :output_info=>[{:type=>"http", :config=>{"http_method"=>"post", "url"=>"${APP_URL}/", "headers"=>["AUTHORIZATION", "Basic ${CREDS}"], "ALLOW_ENV"=>true}, :is_multi_worker=>false, :events_received=>0, :workers=>"", headers=>{..}, codec=>"UTF-8">, workers=>1, request_timeout=>60, socket_timeout=>10, connect_timeout=>10, follow_redirects=>true, pool_max=>50, pool_max_per_route=>25, keepalive=>true, automatic_retries=>1, retry_non_idempotent=>false, validate_after_inactivity=>200, ssl_certificate_validation=>true, keystore_type=>"JKS", truststore_type=>"JKS", cookies=>true, verify_ssl=>true, format=>"json">]>, :busy_workers=>1}, {:type=>"stdout", :config=>{"ALLOW_ENV"=>true}, :is_multi_worker=>false, :events_received=>0, :workers=>"\n">, workers=>1>]>, :busy_workers=>0}], :thread_info=>[], :stalling_threads_info=>[]}>, :level=>:warn} this is the config input { kafka { bootstrap_servers => "${KAFKA_ADDRESS}" topics => ["${LOGSTASH_KAFKA_TOPIC}"] } } filter { ruby { code => "require 'json' require 'base64' def good_event?(event_metadata) event_metadata['key1']['key2'].start_with?('good') rescue true end def has_url?(event_data) event_data['line'] && event_data['line'].any? { |i| i['url'] && !i['url'].blank? } rescue false end event_payload = JSON.parse(event.to_hash['message'])['payload'] event.cancel unless good_event?(event_payload['event_metadata']) event.cancel unless has_url?(event_payload['event_data']) " } } output { http { http_method => 'post' url => '${APP_URL}/' headers => ["AUTHORIZATION", "Basic ${CREDS}"] } stdout { } } Which is odd, since it is written to logstash.log and not logstash.err What does this error mean and how can I avoid it? (only restarting logstash solves it, until the next time it happens)
According to this github issue your ruby code could be causing the issue. Basically any ruby exception will cause the filter worker to die. Without seeing your ruby code, it's impossible to debug further, but you could try wrapping your ruby code in an exception handler and logging the exception somewhere (at least until logstash is updated to log it).
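As a hedged illustration (not the exact fix for this config - the field name below is arbitrary and the original parsing logic is elided), wrapping the ruby code in a begin/rescue so an exception marks the event instead of killing the filter worker could look like this:

filter {
  ruby {
    code => "
      begin
        # ... the existing parsing / event.cancel logic goes here ...
      rescue => e
        # record the failure on the event instead of letting the filter worker die
        event['_ruby_exception'] = e.message
      end
    "
  }
}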
Logstash
39,532,116
12
I am new to the ELK stack and playing around with it in a development environment. That's why I end up deleting an index (DELETE /index_name) and recreating it multiple times. Deleting an index that I created works fine, but I notice that there are a few lingering system indices, like .monitoring-es-2-2017.02.05. What is the purpose of these indices? Is each one of those created for a corresponding index? How do I delete them? NOTE: I have seen the suggestion to use /* to delete everything. But that sounds risky. I don't want to delete index patterns, visualizations or the templates. I only want to delete the data and repopulate with updated data.
These indices are created by the Elastic X-Pack monitoring component. X-Pack components are elasticsearch plugins and thus store their data, like Kibana, in elasticsearch. Unlike the .kibana index, these indices are created daily because they contain timeseries monitoring data about elasticsearch's performance. Deleting them will have no impact on your other indices. As @Val pointed out in the comments, you can use /.monitoring-* to delete only these indices and ensure you do not wipe out any other indices. You may find the data in these indices useful as you evaluate the ELK stack, and leaving them should not negatively impact you except for the disk space and small amount of memory they occupy.
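For example (a minimal sketch, assuming Elasticsearch listens on localhost:9200), the targeted cleanup could be a single request that touches only the monitoring indices:

curl -XDELETE 'http://localhost:9200/.monitoring-*'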
Logstash
42,121,684
12
I am currently using filebeat to forward logs to logstash and then to elasticsearch. Now, I am thinking about forwarding logs to logstash with rsyslog instead. The benefit of this would be that I would not need to install and configure filebeat on every server, and I could also forward logs in JSON format, which is easy to parse and filter. I can use TCP/UDP to forward logs to logstash with rsyslog. I want to know more about the benefits and drawbacks of rsyslog over filebeat, in terms of performance, reliability and ease of use.
When you couple Beats with Logstash you have something called "back pressure management" - Beats will stop flooding the Logstash server with messages in case something goes wrong on the network, for instance. Another advantage of using Beats is that in Logstash you can have persisted queues, which prevent you from losing log messages in case your elasticsearch cluster goes down; Logstash will persist the messages on disk. Be careful, because Logstash can't ensure you won't lose messages if you are using UDP - this link will be helpful.
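If you do try the rsyslog route, the Logstash side can be as small as the sketch below; the port number is arbitrary and it assumes rsyslog is configured to forward JSON-formatted messages to it over TCP:

input {
  tcp {
    port  => 5140      # must match the port rsyslog forwards to (arbitrary choice)
    codec => json      # parse the JSON payload emitted by rsyslog's JSON template
  }
}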
Logstash
44,387,910
12
I am trying to install logstash with docker-compose, but when I try to install a logstash plugin the container exits with code 0 just after "Installation successful". The part of the docker-compose file for logstash is: logstash: image: docker.elastic.co/logstash/logstash-oss:7.0.1 ports: ['9600:9600'] command: bin/logstash-plugin install logstash-filter-metricize volumes: - ./logstash/pipeline/:/usr/share/logstash/pipeline/ And the logs are: logstash_1 |Validating logstash-filter-metricize logstash_1 |Installing logstash-filter-metricize logstash_1 |Installation successful logstash_1 exited with code 0 If I try without installing the plugin, my docker-compose works fine. I don't understand what I'm doing wrong when I install the plugin.
I use a Dockerfile to fix it. My Dockerfile: FROM docker.elastic.co/logstash/logstash-oss:7.0.1 RUN rm -f /usr/share/logstash/pipeline/logstash.conf && \ bin/logstash-plugin install logstash-filter-metricize My part of docker-compose: logstash: build: context: ./logstash ports: ['9600:9600'] volumes: - ./logstash/pipeline/:/usr/share/logstash/pipeline/
Logstash
56,041,596
12
Edit : I changed the title because the issue was not what I initially thought. The fact is that logstash takes more than a minute to starts, which can be misinterpreted as "silence"... I'm trying to make logstash running, so I've followed the instruction on the official site for a standalone installation : http://logstash.net/docs/1.1.4/tutorials/getting-started-simple Basically, I got the logstash-1.1.4-monolithic.jar file, then build a very simple config file : (example.conf) input { stdin { type => "stdin-type" } } output { stdout { debug_format => "json" } } But when I run logstash, nothing is coming out (I'm typing random text in STDIN, but get no response) : # java -jar logstash-1.1.4-monolithic.jar agent -f example.conf Test toto hey ??? Wakeup !!! ^C (For information : Java version is correct) # java -version java version "1.6.0_26" Java(TM) SE Runtime Environment (build 1.6.0_26-b03) Java HotSpot(TM) Client VM (build 20.1-b02, mixed mode, sharing) Please, can someone tell me what I'm missing ?
Ok, I've found it by myself. Everything was working just fine. It's just that logstash takes soooooo long to launch - more than 60 seconds on my (humble) server!! Add to that huge starting time the fact that nothing is printed when it launches...
Logstash
13,270,004
11
I am using logstash-1.4.1, elasticsearch-1.1.1 and kibana-3.1.0 for analyzing my logs. I am able to view and query my logs. Now I need an alert/notification when a particular log/event happens. E.g.: when a "Login failed" log occurs again and again, an alert/notification (popup, via mail, etc.) is required. Presently I can query my logs for things like failed logins, but I would rather have a notification/popup appear whenever such a log shows up than query for it manually. Can this be done using the above three? How can this be achieved?
You can use Watcher for monitoring your Elasticsearch. It alerts you via mail. For further details, refer to this link: https://www.elastic.co/products/watcher You can follow these steps to configure Watcher: Step 1 – Install Plugin for Watcher (for 1.7): bin/plugin --install elasticsearch/watcher/latest bin/plugin --install elasticsearch/license/latest Step 2 – Restart Elasticsearch: ES_HOME/bin/elasticsearch Step 3 – Verify that Watcher is set up: curl -XGET 'http://localhost:9200/_watcher/stats?pretty' Step 4 – Watch the log data for errors: PUT /_watcher/watch/log_error_watch { "trigger": { "schedule": { "interval": "10m" } }, "input": { "search": { "request": { "indices": ["logs"], "body": { "query": { "match": { "message": "error" } } } } } }, "condition": { "compare": { "ctx.payload.hits.total": { "gt": 0 } } }, "actions": { "send_email": { "email": { "to": "<username>@<domainname>", "subject": "Cluster logs", "body": "Cluster Error Logs ", "attach_data": true } } } } Step 5 – Configure email (add the lines below into elasticsearch.yml): watcher.actions.email.service.account: work: profile: gmail email_defaults: from: <email> smtp: auth: true starttls.enable: true host: smtp.gmail.com port: 587 user: <username> password: <password> Step 6 – To delete Watcher: curl -XDELETE'http://localhost:9200/_watcher/watch/log_error_watch'
Logstash
23,948,695
11
I have setup logstash to use an embedded elastisearch. I can log events. My logstash conf looks thus: https://gist.github.com/khebbie/42d72d212cf3727a03a0 Now I would like to add another udp input and have that input be indexed in another index. Is that somehow possible? I would do it to make reporting easier, so I could have system log events in one index, and business log events in another index.
Use an if conditional in your output section, based on e.g. the message type or whatever message field is significant to the choice of index. input { udp { ... type => "foo" } file { ... type => "bar" } } output { if [type] == "foo" { elasticsearch { ... index => "foo-index" } } else { elasticsearch { ... index => "bar-index" } } } Or, if the message type can go straight into the index name you can have a single output declaration: elasticsearch { ... index => "%{type}-index" }
Logstash
27,146,032
11
So I have now written several patterns for logs, and they are working. The thing is that I have these multiple logs, with multiple patterns, in one single file. How does logstash know what kind of pattern it has to use for which line in the log? (I am using grok for my filtering.) And if you guys would be super kind, could you give me the link to the docs, because I wasn't able to find anything regarding this :/
You could use multiple patterns for your grok filter, grok { match => ["fieldname", "pattern1", "pattern2", ..., "patternN"] } and they will be applied in order but a) it's not the best option performance-wise and b) you probably want to treat different types of logs differently anyway, so I suggest you use conditionals based on the type or tags of a message: if [type] == "syslog" { grok { match => ["message", "your syslog pattern"] } } Set the type in the input plugin. The documentation for the currently released version of Logstash is at http://logstash.net/docs/1.4.2/. It probably doesn't address your question specifically but it can be inferred.
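For completeness, here is a minimal sketch of setting that type in the input plugin (the path is just an example):

input {
  file {
    path => "/var/log/syslog"   # example path
    type => "syslog"            # matched by the conditional in the filter above
  }
}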
Logstash
28,450,501
11
I have ELK installed and working in my machine, but now I want to do a more complex filtering and field adding depending on event messages. Specifically, I want to set "id_error" and "descripcio" depending on the message pattern. I have been trying a lot of code combinations in "logstash.conf" file, but I am not able to get the expected behavior. Can someone tell me what I am doing wrong, what I have to do or if this is not possible? Thanks in advance. This is my "logstash.conf" file, with the last test I have made, resulting in no events captured in Kibana: input { file { path => "C:\xxx.log" } } filter { grok { patterns_dir => "C:\elk\patterns" match => [ "message", "%{ERROR2:error2}" ] add_field => [ "id_error", "2" ] add_field => [ "descripcio", "error2!!!" ] } grok { patterns_dir => "C:\elk\patterns" match => [ "message", "%{ERROR1:error1}" ] add_field => [ "id_error", "1" ] add_field => [ "descripcio", "error1!!!" ] } if ("_grokparsefailure" in [tags]) { drop {} } } output { elasticsearch { host => "localhost" protocol => "http" index => "xxx-%{+YYYY.MM.dd}" } } I also have tried the following code, resulting in fields "id_error" and "descripcio" with both vaules "[1,2]" and "[error1!!!,error2!!!]" respectively, in each matched event. As "break_on_match" is set "true" by default, I expect getting only the fields behind the matching clause, but this doesn't occur. input { file { path => "C:\xxx.log" } } filter { grok { patterns_dir => "C:\elk\patterns" match => [ "message", "%{ERROR1:error1}" ] add_field => [ "id_error", "1" ] add_field => [ "descripcio", "error1!!!" ] match => [ "message", "%{ERROR2:error2}" ] add_field => [ "id_error", "2" ] add_field => [ "descripcio", "error2!!!" ] } if ("_grokparsefailure" in [tags]) { drop {} } } output { elasticsearch { host => "localhost" protocol => "http" index => "xxx-%{+YYYY.MM.dd}" } }
I have solved the problem. I get the expected results with the following code in "logstash.conf": input { file { path => "C:\xxx.log" } } filter { grok { patterns_dir => "C:\elk\patterns" match => [ "message", "%{ERROR1:error1}" ] match => [ "message", "%{ERROR2:error2}" ] } if [message] =~ /error1_regex/ { grok { patterns_dir => "C:\elk\patterns" match => [ "message", "%{ERROR1:error1}" ] } mutate { add_field => [ "id_error", "1" ] add_field => [ "descripcio", "Error1!" ] remove_field => [ "message" ] remove_field => [ "error1" ] } } else if [message] =~ /error2_regex/ { grok { patterns_dir => "C:\elk\patterns" match => [ "message", "%{ERROR2:error2}" ] } mutate { add_field => [ "id_error", "2" ] add_field => [ "descripcio", "Error2!" ] remove_field => [ "message" ] remove_field => [ "error2" ] } } if ("_grokparsefailure" in [tags]) { drop {} } } output { elasticsearch { host => "localhost" protocol => "http" index => "xxx-%{+YYYY.MM.dd}" } }
Logstash
29,826,619
11
I downloaded Logstash-1.5.0 on Windows 8.1 and tried to run it in the command prompt. First I checked the java version. Then changed the directory to logstash-1.5.0/bin then entered the command logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } stdout { } }' it gave the following error: Cannot locate java installation, specified by JAVA_HOME The Logstash folder is on C: and the version of Java is 1.7.0_25. I've set the JAVA_HOME environmental variables to the jdk /bin directory, but still it doesn't work. I'm new to Logstash. Can somebody tell me in detail why this happens and help me fix it?
Set the JAVA_HOME and PATH environment variables like this: JAVA_HOME = C:\Program Files\Java\jdk1.7.0_25 PATH = C:\Program Files\Java\jdk1.7.0_25\bin
Logstash
30,427,355
11
I'm running into some issues sending log data to my logstash instance from a simple java application. For my use case, I'm trying to avoid using log4j logback and instead batch json events on separate lines through a raw tcp socket. The reason for that is I'm looking to send data through a aws lambda function to logstash which means storing logs to disk may not work out. My logstash configuration file looks like the following: input { tcp { port => 5400 codec => json } } filter{ json{ source => "message" } } output { elasticsearch { host => "localhost" protocol => "http" index => "someIndex" } } My java code right now is just opening a tcp socket to the logstash server and sending an event directly. Socket socket = new Socket("logstash-hostname", 5400); DataOutputStream os = new DataOutputStream(new BufferedOutputStream(socket.getOutputStream())); os.writeBytes("{\"message\": {\"someField\":\"someValue\"} }"); os.flush(); socket.close(); The application is connecting to the logstash host properly (if logstash is not up an exception is thrown when connecting), but no events are showing up in our ES cluster. Any ideas on how to do this are greatly appreciated! I don't see any relevant logs in logstash.err, logstash.log, or logstash.stdout pointing to what may be going wrong.
The problem is that your data is already deserialized on your input and you are trying to deserialize it again on your filter. Simply remove the json filter. Here is how I recreated your scenario: # the json input root@monitoring:~# cat tmp.json {"message":{"someField":"someValue"}} # the logstash configuration file root@monitoring:~# cat /etc/logstash/conf.d/test.conf input { tcp { port => 5400 codec => json } } filter{ } output { stdout { codec => rubydebug } } # starting the logstash server /opt/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf # sending the json to logstash with netcat nc localhost 5400 < tmp.json # the logstash output via stdout { "message" => { "someField" => "someValue" }, "@version" => "1", "@timestamp" => "2016-02-02T13:31:18.703Z", "host" => "0:0:0:0:0:0:0:1", "port" => 56812 } Hope it helps,
Logstash
35,143,576
11
I'm using only Kibana to search ElasticSearch, and I have several fields that can only take a few values (worst case, servername, 30 different values). I do understand what analysis does to bigger, more complex fields, but for the small and simple ones I fail to understand the advantages/disadvantages of analyzed/not_analyzed fields. So what are the benefits of using analyzed and not_analyzed for a "limited set of values" field (example: servername, server[0-9]*, no special characters to break on)? What kind of search types will I lose in Kibana? Will I gain any search speed or disk space? Testing on one of them I saw that the .raw version of the field is now empty, but Kibana still flags the field as analyzed, so I find my tests inconclusive.
I will try to keep it simple; if you need more clarification just let me know and I'll elaborate a better answer. An "analyzed" field is going to create tokens using the analyzer that you have defined for that specific field in your mapping. If you are using the default analyzer (as you refer to something without special characters, let's say server[1-9]), the default analyzer (alnum-lowercase word-breaker - this is not its name, just what it basically does) is going to tokenize: HelloWorld123 into -> token1:helloworld123, OR Hello World 123 into -> token1:hello && token2:world && token3:123. In this case, if you do a search for HeLlO it will become "hello" and it will match this document because the token "hello" is there. In the case of not_analyzed fields it doesn't apply any tokenizer at all; your token is your keyword. That being said: Hello World 123 becomes -> token1:(Hello World 123). If you search that field for "hello world 123" it is not going to match because it is "case sensitive" (you can still use wildcards though (Hello*), but let's address that another time). In a nutshell: use "analyzed" fields for fields that you are going to search and that you want elasticsearch to score. Example: titles that contain the word "jobs". query: "title:jobs". doc1: title:developer jobs in montreal doc2: title:java coder jobs in vancouver doc3: title:unix designer jobs in toronto doc4: title:database manager vacancies in montreal This is going to retrieve doc1, doc2 and doc3. In those cases "analyzed" fields are what you want. If you know in advance what kind of data will be in that field and you are going to query exactly what you want, then "not_analyzed" is what you want. Example: get all the logs from server123. query: "server:server123". doc1: server:server123,log:randomstring,date:01-jan doc2: server:server986,log:randomstring,date:01-jan doc3: server:server777,log:randomstring,date:01-jan doc4: server:server666,log:randomstring,date:01-jan doc5: server:server123,log:randomstring,date:02-jan Results come only from doc1 and doc5. And well, I hope you get the point. As I said, keep it simple - it's about what you need. analyzed -> more space on disk (a LOT more if the analyzed fields are big). analyzed -> more time for indexing. analyzed -> better for matching documents. not_analyzed -> less space on disk. not_analyzed -> less time for indexing. not_analyzed -> exact match for fields, or using wildcards. Regards, Daniel
Logstash
37,532,648
11
I'm trying to have logstash output to elasticsearch but I'm not sure how to use the mapping I defined in elasticsearch... In Kibana, I did this: Created an index and mapping like this: PUT /kafkajmx2 { "mappings": { "kafka_mbeans": { "properties": { "@timestamp": { "type": "date" }, "@version": { "type": "integer" }, "host": { "type": "keyword" }, "metric_path": { "type": "text" }, "type": { "type": "keyword" }, "path": { "type": "text" }, "metric_value_string": { "type": "keyword" }, "metric_value_number": { "type": "float" } } } } } Can write data to it like this: POST /kafkajmx2/kafka_mbeans { "metric_value_number":159.03478490788203, "path":"/home/usrxxx/logstash-5.2.0/bin/jmxconf", "@timestamp":"2017-02-12T23:08:40.934Z", "@version":"1","host":"localhost", "metric_path":"node1.kafka.server:type=BrokerTopicMetrics,name=TotalFetchRequestsPerSec.FifteenMinuteRate", "type":null } now my logstash output looks like this: input { kafka { kafka details here } } output { elasticsearch { hosts => "http://elasticsearch:9050" index => "kafkajmx2" } } and it just writes it to the kafkajmx2 index but doesn't use the map, when I query it like this in kibana: get /kafkajmx2/kafka_mbeans/_search?q=* { } I get this back: { "_index": "kafkajmx2", "_type": "logs", "_id": "AVo34xF_j-lM6k7wBavd", "_score": 1, "_source": { "@timestamp": "2017-02-13T14:31:53.337Z", "@version": "1", "message": """ {"metric_value_number":0,"path":"/home/usrxxx/logstash-5.2.0/bin/jmxconf","@timestamp":"2017-02-13T14:31:52.654Z","@version":"1","host":"localhost","metric_path":"node1.kafka.server:type=SessionExpireListener,name=ZooKeeperAuthFailuresPerSec.Count","type":null} """ } } how do I tell it to use the map kafka_mbeans in the logstash output? -----EDIT----- I tried my output like this but still get the same results: output { elasticsearch { hosts => "http://10.204.93.209:9050" index => "kafkajmx2" template_name => "kafka_mbeans" codec => plain { format => "%{message}" } } } the data in elastic search should look like this: { "@timestamp": "2017-02-13T14:31:52.654Z", "@version": "1", "host": "localhost", "metric_path": "node1.kafka.server:type=SessionExpireListener,name=ZooKeeperAuthFailuresPerSec.Count", "metric_value_number": 0, "path": "/home/usrxxx/logstash-5.2.0/bin/jmxconf", "type": null } --------EDIT 2-------------- I atleast got the message to parse into json by adding a filter like this: input { kafka { ...kafka details.... } } filter { json { source => "message" remove_field => ["message"] } } output { elasticsearch { hosts => "http://node1:9050" index => "kafkajmx2" template_name => "kafka_mbeans" } } It doesn't use the template still but this atleast parses the json correctly...so now I get this: { "_index": "kafkajmx2", "_type": "logs", "_id": "AVo4a2Hzj-lM6k7wBcMS", "_score": 1, "_source": { "metric_value_number": 0.9967205071482902, "path": "/home/usrxxx/logstash-5.2.0/bin/jmxconf", "@timestamp": "2017-02-13T16:54:16.701Z", "@version": "1", "host": "localhost", "metric_path": "kafka1.kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent.Value", "type": null } }
What you need to change is very simple. First use the json codec in your kafka input. No need for the json filter, you can remove it. kafka { ...kafka details.... codec => "json" } Then in your elasticsearch output you're missing the mapping type (parameter document_type below), which is important otherwise it defaults to logs (as you can see) and that doesn't match your kafka_mbeans mapping type. Moreover, you don't really need to use template since your index already exists. Make the following modification: elasticsearch { hosts => "http://node1:9050" index => "kafkajmx2" document_type => "kafka_mbeans" }
Logstash
42,206,826
11
We currently use an Azure scale set (many VMs in one resource group with a load balancer and one availability set). We used to use NLog to log our web app actions and errors, but now we are asked to use Elastic Search and also to use a centralized log for all Azure VM instances instead of a file per instance. I am new to ES and LogStash concepts. Do I need to replace NLog with something else? And how do I move to ES and unify all logs in one place (I am thinking of making NLog store into an Azure storage table as the unified result, or do I need to use LogStash, or would you prefer something else)? Which logging solution gives the best support for a .NET Core app on multiple Azure VMs as described above? Any help please?
For NLog there is a target "NLog.Targets.ElasticSearch" (nuget) which uses the Elasticsearch.Net package. Usage: <nlog> <extensions> <add assembly="NLog.Targets.ElasticSearch"/> </extensions> <targets> <target name="elastic" xsi:type="BufferingWrapper" flushTimeout="5000"> <target xsi:type="ElasticSearch" requireAuth="true" username="myUserName" password="coolpassword"/> </target> </targets> <rules> <logger name="*" minlevel="Info" writeTo="elastic" /> </rules> </nlog> Docs for the parameters: https://github.com/ReactiveMarkets/NLog.Targets.ElasticSearch/wiki Please note: If you need to use Elasticsearch.Net 6 (equivalent to Elastic Search version 6 or later), you need NLog.Targets.ElasticSearch version 5. For Elasticsearch.Net 5 you need to use NLog.Targets.ElasticSearch 4
Logstash
49,595,422
11
When I try to start the logstash server on my machine I get this error: Sending Logstash's logs to D:/kibana/logstash-6.3.2/logs which is now configured via log4j2.properties ERROR: Pipelines YAML file is empty. Location: D:/kibana/logstash-6.3.2/config/pipelines.yml usage: bin/logstash -f CONFIG_PATH [-t] [-r] [] [-w COUNT] [-l LOG] bin/logstash --modules MODULE_NAME [-M "MODULE_NAME.var.PLUGIN_TYPE.PLUGIN_NAME.VARIABLE_NAME=VALUE"] [-t] [-w COUNT] [-l LOG] bin/logstash -e CONFIG_STR [-t] [--log.level fatal|error|warn|info|debug|trace] [-w COUNT] [-l LOG] bin/logstash -i SHELL [--log.level fatal|error|warn|info|debug|trace] bin/logstash -V [--log.level fatal|error|warn|info|debug|trace] bin/logstash --help [2018-12-14T16:16:10,809][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit I deleted everything and extracted a fresh copy (default configs), but I get this error anyway.
Looks like you're trying to start your Logstash with the multiple-pipelines configuration: When you start Logstash without arguments, it will read the pipelines.yml file and instantiate all pipelines specified in the file. On the other hand, when you use -e or -f, Logstash ignores the pipelines.yml file and logs a warning about it. You can start Logstash specifying the config file location: logstash -f mypipeline.conf or you can just configure your pipelines.yml file. I strongly advise using the pipelines configuration because it will be easier to expand Logstash in the future and you can specify resources for each pipeline. An example configuration of pipelines.yml: - pipeline.id: my_pipeline_name path.config: "/path/to/your/config/file.cfg" queue.type: persisted
Logstash
53,780,598
11
GrayLog2 requires both ElasticSearch and MongoDB, while Logstash uses only ElasticSearch for persisting and searching the logs. What does MongoDB provide in GrayLog2?
Graylog2 uses mongodb for the web interface entities. Streams, alerts, users, settings, cached stream counts etc. Pretty much everything you see and edit in the web interface except for the logs themselves.
Logstash
17,399,567
10
With logstash 1.2.1 one can now use conditionals to do various stuff. Even an earlier version's conf file can get complicated if one is managing many log files and implementing metric extraction. After looking at this comprehensive example, I really wondered: how can I detect any breakages in such a configuration? Any ideas?
For a syntax check, there is --configtest: java -jar logstash.jar agent --configtest --config <yourconfigfile> To test the logic of the configuration you can write rspec tests. This is an example rspec file to test a haproxy log filter: require "test_utils" describe "haproxy logs" do extend LogStash::RSpec config <<-CONFIG filter { grok { type => "haproxy" add_tag => [ "HTTP_REQUEST" ] pattern => "%{HAPROXYHTTP}" } date { type => 'haproxy' match => [ 'accept_date', 'dd/MMM/yyyy:HH:mm:ss.SSS' ] } } CONFIG sample({'@message' => '<150>Oct 8 08:46:47 syslog.host.net haproxy[13262]: 10.0.1.2:44799 [08/Oct/2013:08:46:44.256] frontend-name backend-name/server.host.net 0/0/0/1/2 404 1147 - - ---- 0/0/0/0/0 0/0 {client.host.net||||Apache-HttpClient/4.1.2 (java 1. 5)} {text/html;charset=utf-8|||} "GET /app/status HTTP/1.1"', '@source_host' => '127.0.0.1', '@type' => 'haproxy', '@source' => 'tcp://127.0.0.1:60207/', }) do insist { subject["@fields"]["backend_name"] } == [ "backend-name" ] insist { subject["@fields"]["http_request"] } == [ "/app/status" ] insist { subject["tags"].include?("HTTP_REQUEST") } insist { subject["@timestamp"] } == "2013-10-08T06:46:44.256Z" reject { subject["@timestamp"] } == "2013-10-08T06:46:47Z" end end This will, based on a given filter configuration, run input samples and test if the expected output is produced. To run the test, save the test as haproxy_spec.rb and run `logstash rspec: java -jar logstash.jar rspec haproxy_spec.rb There are lots of spec examples in the Logstash source repository.
Logstash
18,823,917
10
I'm using Grok & Logstash to send access logs from Nginx to Elastic search. I'm giving Logstash all my access logs (with a wildcard, works well) and I would like to get the filename (some part of it, to be exact) and use it as a field. My config is as follows : input { file { path => "/var/log/nginx/*.access.log" type => "nginx_access" } } filter { if [type] == "nginx_access" { grok { match => { "message" => "%{COMBINEDAPACHELOG}" } match => { "path" => "%{GREEDYDATA}/%{GREEDYDATA:app}.access.log" } add_field => { "app" => "%{app}" } } } } output{ # whatever } But it doesn't seem to work : the app field is added, but has a value of %{app} (not replaced). I tried different things but to no avail. I may be missing something ... Any ideas ? Thanks a lot
Ok, found it. grok breaks on match by default. So the first match being good, it skips the second one. I solved it like that : filter { if [type] == "nginx_access" { grok { match => { "message" => "%{COMBINEDAPACHELOG}" } match => { "path" => "%{GREEDYDATA}/%{GREEDYDATA:app}.access.log" } break_on_match => false } } }
Logstash
23,780,000
10
I am using Logstash to parse postfix logs. I am mainly focused on getting bounced email logs from the postfix logs and storing them in a database. In order to get the logs, first I need to find the ID generated by postfix corresponding to my message-id, and using that ID, I need to find the status of the email. With the following configuration, I am able to get the logs. grok { patterns_dir => "patterns" match => [ "message", "%{SYSLOGBASE} %{POSTFIXCLEANUP}", "message", "%{SYSLOGBASE} %{POSTFIXBOUNCE}" ] named_captures_only => true } I am using the following if condition to store logs that match the patterns: if "_grokparsefailure" not in [tags] { #database call } As you can see, I am using two patterns to find two different kinds of logs in one log file. Now, I want to differentiate both patterns based on tags. So I have modified my configuration as follows: grok { patterns_dir => "patterns" match => [ "message", "%{SYSLOGBASE} %{POSTFIXBOUNCE}" ] add_tag => ["BOUNCED"] remove_tag => ["_grokparsefailure"] named_captures_only => true } grok { patterns_dir => "patterns" match => [ "message", "%{SYSLOGBASE} %{POSTFIXCLEANUP}" ] add_tag => ["INTIALIZATION"] remove_tag => ["_grokparsefailure"] named_captures_only => true } Now it only stores %{POSTFIXCLEANUP} pattern logs. If I reverse the order, it only stores the %{POSTFIXBOUNCE} pattern. So, after removing that if condition, I found that messages parsed by the first filter have the "_grokparsefailure" tag as well as the first filter's tag, and because of that it is not storing those records. Can anybody tell me what needs to be done to rectify this? Am I making any mistake?
You need to protect the 2nd grok block -- ie don't execute it if the first one succeeds. if ("BOUNCED" not in [tags]) { grok { patterns_dir => "patterns" match => [ "message", "%{SYSLOGBASE} %{POSTFIXCLEANUP}" ] add_tag => ["INTIALIZATION"] remove_tag => ["_grokparsefailure"] named_captures_only => true } }
Logstash
24,705,450
10
I want to have a centralized logging server. I have created two Ubuntu 12.04 LTS servers based on this iso. After successfully and strictly following this tutorial steps, I have: One Logging Server with Logstash + ElasticSearch + Kibana. And one Application Server with a Logstash-Forwarder, Tomcat 6 and another app, which logs are being registered in the catalina.out file. My question is: What 'for-dummies' steps should I follow now in order to send catalina.out logs from the App Server to the Logging Server in such a way that those logs are properly structured and query-able in ElasticSearch + Kibana? Mind that catalina.out has not just one uniform format, and that multi-line Java exceptions are going to be registered as well. I have read some tutorials regarding this, but I feel they weren't sufficiently well-explained for a dummy like me: Lacking examples of full paths to files, appropriate file names, required ubuntu commands, and so on. Thanks in advance!
Here is what I have, note that the access logs use a custom log format (documented below) and I extract a bit more information out of the Tomcat logs (it is useful to have logLevel as a field, for example): input { file { type => "access-log" path => [ "C:/apache-tomcat-6.0.18/logs/*.txt" ] } file { type => "tomcat" path => [ "C:/apache-tomcat-6.0.18/logs/*.log" ] codec => multiline { negate => true pattern => "(^%{MONTH} %{MONTHDAY}, 20%{YEAR} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) (?:AM|PM))" what => "previous" } } } filter { if [type] == "access-log" { grok { # Access log pattern is %a %{waffle.servlet.NegotiateSecurityFilter.PRINCIPAL}s %t %m %U%q %s %B %T &quot;%{Referer}i&quot; &quot;%{User-Agent}i&quot; match => [ "message" , "%{IPV4:clientIP} %{NOTSPACE:user} \[%{DATA:timestamp}\] %{WORD:method} %{NOTSPACE:request} %{NUMBER:status} %{NUMBER:bytesSent} %{NUMBER:duration} \"%{NOTSPACE:referer}\" \"%{DATA:userAgent}\"" ] remove_field => [ "message" ] } grok{ match => [ "request", "/%{USERNAME:app}/" ] tag_on_failure => [ ] } date { match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ] remove_field => [ "timestamp" ] } geoip { source => ["clientIP"] } dns { reverse => [ "clientIP" ] } mutate { lowercase => [ "user" ] convert => [ "bytesSent", "integer", "duration", "float" ] } if [referer] == "-" { mutate { remove_field => [ "referer" ] } } if [user] == "-" { mutate { remove_field => [ "user" ] } } } if [type] == "tomcat" { if [message] !~ /(.+)/ { drop { } } grok{ patterns_dir => "./patterns" match => [ "message", "%{CATALINA_DATESTAMP:timestamp} %{NOTSPACE:className} %{WORD:methodName}\r\n%{LOGLEVEL: logLevel}: %{GREEDYDATA:message}" ] overwrite => [ "message" ] } grok{ match => [ "path", "/%{USERNAME:app}.20%{NOTSPACE}.log"] tag_on_failure => [ ] } #Aug 25, 2014 11:23:31 AM date{ match => [ "timestamp", "MMM dd, YYYY hh:mm:ss a" ] remove_field => [ "timestamp" ] } } } output { elasticsearch { host => somehost} }
Logstash
25,429,377
10
When I see results in Kibana, I see that there are no fields from the JSON; moreover, the message field contains only "status" : "FAILED". Is it possible to parse fields from the JSON and show them in Kibana? I have the following config: input { file { type => "json" path => "/home/logstash/test.json" codec => json sincedb_path => "/home/logstash/sincedb" } } output { stdout {} elasticsearch { protocol => "http" codec => "json" host => "elasticsearch.dev" port => "9200" } } And the following JSON file: [{"uid":"441d1d1dd296fe60","name":"test_buylinks","title":"Testbuylinks","time":{"start":1419621623182,"stop":1419621640491,"duration":17309},"severity":"NORMAL","status":"FAILED"},{"uid":"a88c89b377aca0c9","name":"test_buylinks","title":"Testbuylinks","time":{"start":1419621623182,"stop":1419621640634,"duration":17452},"severity":"NORMAL","status":"FAILED"},{"uid":"32c3f8b52386c85c","name":"test_buylinks","title":"Testbuylinks","time":{"start":1419621623185,"stop":1419621640826,"duration":17641},"severity":"NORMAL","status":"FAILED"}]
Yes. you need to add a filter to your config, something like this. filter{ json{ source => "message" } } It's described pretty well in the docs here EDIT The json codec doesn't seem to like having an array passed in. A single element works with this config: Input: {"uid":"441d1d1dd296fe60","name":"test_buylinks","title":"Testbuylinks","time":{"start":1419621623182, "stop":1419621640491,"duration":17309 }, "severity":"NORMAL", "status":"FAILED" } Logstash Result: { "message" => "{\"uid\":\"441d1d1dd296fe60\",\"name\":\"test_buylinks\",\"title\":\"Testbuylinks\",\"time\":{\"start\":1419621623182, \"stop\":1419621640491,\"duration\":17309 }, \"severity\":\"NORMAL\", \"status\":\"FAILED\" }", "@version" => "1", "@timestamp" => "2015-02-26T23:25:12.011Z", "host" => "emmet.local", "uid" => "441d1d1dd296fe60", "name" => "test_buylinks", "title" => "Testbuylinks", "time" => { "start" => 1419621623182, "stop" => 1419621640491, "duration" => 17309 }, "severity" => "NORMAL", "status" => "FAILED" } Now with an array: Input [{"uid":"441d1d1dd296fe60","name":"test_buylinks","title":"Testbuylinks","time":{"start":1419621623182, "stop":1419621640491,"duration":17309 }, "severity":"NORMAL", "status":"FAILED" }, {"uid":"441d1d1dd296fe60","name":"test_buylinks","title":"Testbuylinks","time":{"start":1419621623182, "stop":1419621640491,"duration":17309 }, "severity":"NORMAL", "status":"FAILED" }] Result: Trouble parsing json {:source=>"message", :raw=>"[{\"uid\":\"441d1d1dd296fe60\",\"name\":\"test_buylinks\",\"title\":\"Testbuylinks\",\"time\":{\"start\":1419621623182, \"stop\":1419621640491,\"duration\":17309 }, \"severity\":\"NORMAL\", \"status\":\"FAILED\" }, {\"uid\":\"441d1d1dd296fe60\",\"name\":\"test_buylinks\",\"title\":\"Testbuylinks\",\"time\":{\"start\":1419621623182, \"stop\":1419621640491,\"duration\":17309 }, \"severity\":\"NORMAL\", \"status\":\"FAILED\" }]", :exception=>#<TypeError: can't convert Array into Hash>, :level=>:warn} { "message" => "[{\"uid\":\"441d1d1dd296fe60\",\"name\":\"test_buylinks\",\"title\":\"Testbuylinks\",\"time\":{\"start\":1419621623182, \"stop\":1419621640491,\"duration\":17309 }, \"severity\":\"NORMAL\", \"status\":\"FAILED\" }, {\"uid\":\"441d1d1dd296fe60\",\"name\":\"test_buylinks\",\"title\":\"Testbuylinks\",\"time\":{\"start\":1419621623182, \"stop\":1419621640491,\"duration\":17309 }, \"severity\":\"NORMAL\", \"status\":\"FAILED\" }]", "@version" => "1", "@timestamp" => "2015-02-26T23:28:21.195Z", "host" => "emmet.local", "tags" => [ [0] "_jsonparsefailure" ] } This looks like a bug in the codec, can you change your messages to an object rather than an array?
Logstash
28,753,921
10
I'm trying to set up an ELK stack on an EC2 Ubuntu 14.04 instance. Everything installed, and everything is working just fine, except for one thing: Logstash is not creating an index on Elasticsearch. Whenever I try to access Kibana, it wants me to choose an index from Elasticsearch. Logstash is on the ES node, but the index is missing. Here's the message I get: "Unable to fetch mapping. Do you have indices matching the pattern?" Am I missing something? I followed this tutorial: Digital Ocean EDIT: Here's the screenshot of the error I'm facing: Yet another screenshot:
I got identical results on Amazon AMI (Centos/RHEL clone) In fact exactly as per above… Until I injected some data into Elastic - this creates the first day index - then Kibana starts working. My simple .conf is: input { stdin { type => "syslog" } } output { stdout {codec => rubydebug } elasticsearch { host => "localhost" port => 9200 protocol => http } } then cat /var/log/messages | logstash -f your.conf Why stdin you ask? Well it's not super clear anywhere (also a new Logstash user - found this very unclear) that Logstash will never terminate (e.g. when using the file plugin) - it's designed to keep watching. But using stdin - Logstash will run - send data to Elastic (which creates index) then go away. If I did the same thing above with the file input plugin, it would never create the index - I don't know why this is.
Logstash
29,227,392
10
I'm using node-bunyan to manage log information through elasticsearch and logstash, and I'm facing a problem. My log file has some information and fills up fine when I need it to. The problem is that elasticsearch doesn't find anything at http://localhost:9200/logstash-*/ - I get an empty object, and so I can't deliver my logs to kibana. Here's my logstash conf file: input { file { type => "nextgen-app" path => [ "F:\NextGen-dev\RestApi\app\logs\*.log" ] codec => "json" } } output { elasticsearch { host => "localhost" protocol => "http" } } And my js code: log = bunyan.createLogger({ name: 'myapp', streams: [ { level: 'info', path: './app/logs/nextgen-info-log.log' }, { level: 'error', path: './app/logs/nextgen-error-log.log' } ] }) router.all('*', (req, res, next)=> log.info(req.url) log.info(req.method) next() ) NB: the logs are correctly written to the log files. The problem is between logstash and elasticsearch :-/ EDIT: querying http://localhost:9200/logstash-*/ gives me "{}", an empty JSON object. Thanks in advance
Here is how we managed to fix this and other problems with Logstash not processing files correctly on Windows: Install the ruby-filewatch patch as explained here: logstash + elasticsearch : reloads the same data Properly configure the Logstash input plugin: input { file { path => ["C:/Path/To/Logs/Directory/*.log"] codec => json { } sincedb_path => ["C:/Path/To/Config/Dir/sincedb"] start_position => "beginning" } } ... "sincedb" keeps track of your log files length, so it should have one line per log file; if not, then there's something else wrong. Hope this helps.
Logstash
29,701,796
10
Let's imagine that I have a Logstash instance running, but would like to stop it cleanly, to change its configs for example. How can I stop the Logstash instance while ensuring that it finishes sending the in-flight bulks to Elasticsearch? I don't want to lose any logs while stopping logstash.
Logstash 1.5 flushes the pipeline before shutting down in response to a SIGTERM signal, so there you should be able to shut it down with service logstash stop, the init.d script, or whatever it is that you usually use. With Logstash 1.4.x a SIGTERM signal shuts down Logstash abruptly without allowing the pipeline to flush all in-flight messages, but you can send SIGINT to force a flush. However, some plugins (like the redis input plugin) don't handle this gracefully and hang indefinitely.
Logstash
29,742,313
10
You know how there is a Ruby filter for Logstash which enables me to write code in Ruby and it is usually included in the config file as follows filter { ruby { code => "...." } } Now I have two Jar files that I would like to include in my filter so that the input I have can be processed according to the operations I have in these Jar files. However, I cannot (apparently) include the Jar file in the ruby code. I've been looking for a solution.
So to answer this, I found this wonderful tutorial from Elastic.co. It shows the steps to create a new gem and use it as a filter for Logstash later on: https://www.elastic.co/guide/en/logstash/current/_how_to_write_a_logstash_filter_plugin.html
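If you just want a quick alternative to a full plugin: Logstash runs on JRuby, so a ruby filter can in principle load a jar directly. This is only a rough sketch - the jar path, package and class name below are hypothetical placeholders, not a real library:

filter {
  ruby {
    init => "
      require 'java'
      require '/opt/jars/my-library.jar'   # hypothetical jar path
    "
    code => "
      # Java::ComExample maps to the com.example package under JRuby
      event['processed'] = Java::ComExample::MyProcessor.new.process(event['message'])
    "
  }
}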
Logstash
32,370,646
10
I was checking the nginx error logs on our server and found that they start with a date formatted as 2015/08/30 05:55:20, i.e. YYYY/MM/DD HH:mm:ss. I was trying to find an existing grok date pattern which might help me parse this quickly, but sadly could not find any such date format. Eventually, I had to write the pattern as: %{YEAR}/%{MONTHNUM}/%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}? Is there a shorter pattern for the same thing?
No. You can find the included patterns on github. The comment for DATESTAMP seems to fit your YYYY/MM/DD, but DATE_US and DATE_EU are different. I suggest overloading the DATE pattern using the grok option patterns_dir and going with DATESTAMP: DATE_YMD %{YEAR}/%{MONTHNUM}/%{MONTHDAY} DATE %{DATE_US}|%{DATE_EU}|%{DATE_YMD} Or just add your pattern to a patterns file and use grok's patterns_dir option.
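A hedged sketch of wiring that up (the pattern name, file location and field names are arbitrary): add the custom pattern to a file in your patterns directory, reference it from grok, then hand the captured value to the date filter:

# one line added to a file under ./patterns (the file name does not matter)
NGINXERRORTIME %{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{TIME}

filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{NGINXERRORTIME:timestamp} %{GREEDYDATA:errmsg}" }
  }
  date {
    match => [ "timestamp", "yyyy/MM/dd HH:mm:ss" ]
  }
}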
Logstash
32,415,944
10
In my Celery application I am getting 2 types of logs on the console, i.e. celery application logs and task-level logs (inside tasks I am using the logger.INFO(str) syntax for logging). I want to send both of them to a custom handler (in my case the python-logstash handler). For Django logs I was successful by setting the handler and logger in settings.py, but I am helpless with Celery.
def initialize_logstash(logger=None,loglevel=logging.DEBUG, **kwargs): # logger = logging.getLogger('celery') handler = logstash.TCPLogstashHandler('localhost', 5959,tags=['worker']) handler.setLevel(loglevel) logger.addHandler(handler) # logger.setLevel(logging.DEBUG) return logger from celery.signals import after_setup_task_logger after_setup_task_logger.connect(initialize_logstash) from celery.signals import after_setup_logger after_setup_logger.connect(initialize_logstash) using both after_setup_task_logger and after_setup_logger signals solved the problem
Logstash
39,265,344
10
Is it possible to launch a Ruby debugger from within the Logstash Ruby filter plugin? It would be very handy for debugging.
The good Logstash folks have thought of this already as they included pry into Logstash core. So all you have to do is to require pry in your ruby filter code as shown in the sample config below: input { file { path => "/tmp/myfile.csv" sincedb_path => "/dev/null" start_position => "beginning" } } filter { ruby { code => "require 'pry' ... your code ... # start a REPL session binding.pry ... your code ... " } } When Logstash runs, you'll get a REPL session in your terminal which looks like this and you'll be able to do whatever pry allows you to do. [1] pry(#<LogStash::Filters::Ruby>)>
Logstash
40,226,282
10
I have an application where some critical issues are reported with console.error but are not thrown, so the application might continue to run - possibly in a crippled state. It's necessary to report the console.error issues as well, but the Sentry (Raven) library only sends thrown exceptions to the server. Does someone know how to solve this nicely? (Ideally without needing to rewrite all console.error calls, because some vendor libraries might still write output just to the console.)
As user @kumar303 mentioned in his comment to the question ... you can use the JS console integration Sentry.Integrations.CaptureConsole. See the documentation. In the end your JS code to set up Sentry looks as follows: import * as Sentry from '@sentry/browser'; import { CaptureConsole } from '@sentry/integrations'; Sentry.init({ dsn: 'https://your-sentry-server-dsn', integrations: [ new CaptureConsole({ levels: ['error'] }) ], release: '1.0.0', environment: 'prod', maxBreadcrumbs: 50 }) If someone then calls console.error, a new event will be sent to Sentry.
Sentry
50,633,580
64
I just installed Sentry for a client-side JavaScript app using the standard code snippet they provided. How do I test that it's working properly? I've tried manually throwing an error from my browser console and it didn't appear in Sentry. Is there any documentation on the right way to do this?
The browser console can not be used as it is sandboxed. A simple trick is to attach the code to an HTML element like this: <h1 onClick="throw new Error('Test')"> My Website </h1> And click on the heading afterwards. This can be done in the browser inspector and so your source code doesn't have to be modified.
Sentry
47,333,846
46
Since the recently introduced new structure of the Program.cs startup code, the documentation confuses me a bit. In the officially provided Serilog.AspNetCore example and in the Serilog.Sentry example, they use .UseSerilog() on the WebHostBuilder. I cannot find this method. This is what I have tried: using Serilog; var builder = WebApplication.CreateBuilder(args); // adding services... builder.Logging.AddSerilog(); // <- is this even necessary? var app = builder.Build(); app.UseSerilogRequestLogging(); // configure request pipeline app.Run(); But how / where can I configure the sinks, e.g. Debug, Console, Sentry, ...? I have the feeling that docs are a bit outdated or I am just a bit blind.
You'll need to make sure you have the following packages installed: Serilog Serilog.Extensions.Hosting (this provides the .UseSerilog extension method. If you have the Serilog.AspNetCore package, you do not need to explicitly include this) Then you'll need a using: using Serilog; Which should allow you to access .UseSerilog via builder.Host: using Serilog; var builder = WebApplication.CreateBuilder(args); builder.Host.UseSerilog(); var app = builder.Build(); app.MapGet("/", () => "Hello World!"); app.Run(); You can use a different overload to get the hosting context, services, and configuration. From there you can configure sinks, etc.: builder.Host.UseSerilog((hostContext, services, configuration) => { configuration.WriteTo.Console(); });
Sentry
71,599,246
33
Sometimes I get a ReferenceError in my Sentry for this: instantSearchSDKJSBridgeClearHighlight. Google says nothing. All I found is https://github.com/algolia/instantsearch-android and https://github.com/algolia/instantsearch-ios, which may be related to my issue. I got 53 issues from 5 different users, and all of them are Edge Mobile on iPhone. Maybe someone knows what this thing is (or knows a way to find out)? Edit: I also found this issue using GitHub search. Same issue as mine, and created by a bot.
This is a bug in the Bing Instant Search feature in Edge on iOS; the feature tries to call a function that no longer exists. Thanks for the bug; I've passed it along to the feature owners. The basic idea is that for Edge on iOS the actual web engine is not our normal one (Blink); it is instead Safari's WkWebView. In order to implement features like Bing's instant search, we have to inject JavaScript into the pages we load. Then our outer browser calls those JavaScript functions we injected. Here, someone goofed and got rid of (or renamed) the injected JavaScript function, but failed to remove/update the browser code that tries to call that injected JavaScript. So users who are watching the browser's error logs see an error message saying "Hey, there's no such function." This is normally harmless, but if you have "Sentry" code that watches for error messages and complains about them to the website developers, it starts complaining about this error message we're causing.
Sentry
69,261,499
30
I am using Sentry (in a django project), and I'd like to know how I can get the errors to aggregate properly. I am logging certain user actions as errors, so there is no underlying system exception, and am using the culprit attribute to set a friendly error name. The message is templated, and contains a common message ("User 'x' was unable to perform action because 'y'"), but is never exactly the same (different users, different conditions). Sentry clearly uses some set of attributes under the hood to determine whether to aggregate errors as the same exception, but despite having looked through the code, I can't work out how. Can anyone short-cut my having to dig further into the code and tell me what properties I need to set in order to manage aggregation as I would like? [UPDATE 1: event grouping] This line appears in sentry.models.Group: class Group(MessageBase): """ Aggregated message which summarizes a set of Events. """ ... class Meta: unique_together = (('project', 'logger', 'culprit', 'checksum'),) ... Which makes sense - project, logger and culprit I am setting at the moment - the problem is checksum. I will investigate further, however 'checksum' suggests that binary equivalence, which is never going to work - it must be possible to group instances of the same exception, with differenct attributes? [UPDATE 2: event checksums] The event checksum comes from the sentry.manager.get_checksum_from_event method: def get_checksum_from_event(event): for interface in event.interfaces.itervalues(): result = interface.get_hash() if result: hash = hashlib.md5() for r in result: hash.update(to_string(r)) return hash.hexdigest() return hashlib.md5(to_string(event.message)).hexdigest() Next stop - where do the event interfaces come from? [UPDATE 3: event interfaces] I have worked out that interfaces refer to the standard mechanism for describing data passed into sentry events, and that I am using the standard sentry.interfaces.Message and sentry.interfaces.User interfaces. Both of these will contain different data depending on the exception instance - and so a checksum will never match. Is there any way that I can exclude these from the checksum calculation? (Or at least the User interface value, as that has to be different - the Message interface value I could standardise.) [UPDATE 4: solution] Here are the two get_hash functions for the Message and User interfaces respectively: # sentry.interfaces.Message def get_hash(self): return [self.message] # sentry.interfaces.User def get_hash(self): return [] Looking at these two, only the Message.get_hash interface will return a value that is picked up by the get_checksum_for_event method, and so this is the one that will be returned (hashed etc.) The net effect of this is that the the checksum is evaluated on the message alone - which in theory means that I can standardise the message and keep the user definition unique. I've answered my own question here, but hopefully my investigation is of use to others having the same problem. (As an aside, I've also submitted a pull request against the Sentry documentation as part of this ;-)) (Note to anyone using / extending Sentry with custom interfaces - if you want to avoid your interface being use to group exceptions, return an empty list.)
See my final update in the question itself. Events are aggregated on a combination of 'project', 'logger', 'culprit' and 'checksum' properties. The first three of these are relatively easy to control - the fourth, 'checksum' is a function of the type of data sent as part of the event. Sentry uses the concept of 'interfaces' to control the structure of data passed in, and each interface comes with an implementation of get_hash, which is used to return a hash value for the data passed in. Sentry comes with a number of standard interfaces ('Message', 'User', 'HTTP', 'Stacktrace', 'Query', 'Exception'), and these each have their own implemenation of get_hash. The default (inherited from the Interface base class) is a empty list, which would not affect the checksum. In the absence of any valid interfaces, the event message itself is hashed and returned as the checksum, meaning that the message would need to be unique for the event to be grouped.
Sentry
13,331,973
26
We are running a Django server and using Sentry to capture exceptions. When we configure Sentry we add RAVEN_CONFIG our different settings.py files: INSTALLED_APPS = ( 'raven.contrib.django.raven_compat' ) RAVEN_CONFIG = { 'dsn': 'https://*****@app.getsentry.com/PORT_NUMBER', } We read here that we can just use an empty string DSN property. Though when we run python manage.py raven test as depicted here we get: raven.exceptions.InvalidDsn: Unsupported Sentry DSN scheme: () The best solution would be that we could always use a Raven client and the settings file would define whether exceptions are sent or not. Another requirement is that we would like to use the Client module and capture exceptions. For this we have to set some DSN value: from raven import Client client = Client('https://<key>:<secret>@app.getsentry.com/<project>') So not setting a DSN value isn't possible
We read here that we can just use an empty string DSN property. You should not be setting DSN to an empty string, but instead in your development settings configuration don't specify the DSN setting in the first place: RAVEN_CONFIG = {}
Sentry
35,888,806
26
I'm using sentry-python SDK for capture exceptions from my django server. I don't want to capture django.security.DisallowedHost like above. How to remove sentry handling for that logger? I attached my server configuration below. settings.py LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'handlers': { 'null': { 'level': 'DEBUG', 'class': 'logging.NullHandler', }, }, 'loggers': { # Silence SuspiciousOperation.DisallowedHost exception ('Invalid # HTTP_HOST' header messages). Set the handler to 'null' so we don't # get those annoying emails. 'django.security.DisallowedHost': { 'handlers': ['null'], 'propagate': False, }, } } sentry_sdk.init( dsn=os.environ['SENTRY_DSN'], integrations=[DjangoIntegration()], send_default_pii=True, release=f"{os.environ['STAGE']}@{os.environ['VERSION']}", )
Quick answer See LoggingIntegration, eg: from sentry_sdk.integrations.logging import ignore_logger ignore_logger("a.spammy.logger") logger = logging.getLogger("a.spammy.logger") logger.error("hi") # no error sent to sentry A more elaborate but generic way to ignore events by certain characteristics See before_breadcrumb and before_send, eg: import sentry_sdk def before_breadcrumb(crumb, hint): if crumb.get('category', None) == 'a.spammy.Logger': return None return crumb def before_send(event, hint): if event.get('logger', None) == 'a.spammy.Logger': return None return event sentry_sdk.init(before_breadcrumb=before_breadcrumb, before_send=before_send)
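Applied to the logger from the question, the quick answer would look something like this (a sketch; only the logger name changes from the snippet above, and your existing LOGGING config can stay as it is):

import os

import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.logging import ignore_logger

# Silence this logger for Sentry only; it is still handled by Django's LOGGING config.
ignore_logger("django.security.DisallowedHost")

sentry_sdk.init(
    dsn=os.environ['SENTRY_DSN'],
    integrations=[DjangoIntegration()],
    send_default_pii=True,
)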
Sentry
52,927,353
26
There are two ways to set up sourcemaps: having them hosted on the site and referenced in the bundled files or uploading them directly to a service like sentry. I'm trying to accomplish the latter. The problem is that there seems to be no way to generate sourcemaps using angular cli without having the filepath written to the bundled files. My first thought was to have two builds - one with sourcemaps generate and one without. I would then just deploy the build without sourcemaps and upload the build with them to sentry. That doesn't work because the bundle filenames are different (angular cli uses the file hash as the filename for cache busting and when you generate sourcemaps it adds the path to the .map file as a comment at the end causing a change in hash and filename). My other option would be to build with sourcemaps, upload them to sentry, and then delete the map files before deploying the site. The problem there though is that the bundle files still contain a reference to a now non-existent map file. That shouldn't be an issue in and of itself but it might raise an issue with extensions or browsers down the line and just seems like a hackish solution. How would you implementing something like this into the build process?
As mentioned in the comments, you can enable sourceMaps in the angular.json file like this: "configurations": { "production": { "sourceMap": { "scripts": true, "styles": true, "hidden": true }, Also, I recommend you remove the .map files after uploading to Sentry and before deploying. So in your CI add this line: rm -rf dist/YOUR_PROJECT/*.map
Sentry
52,489,770
24
How do I configure SMTP settings in Sentry? I set my SMTP mail-server configuration in onpremise/config.yml, then I did as follows: sudo docker-compose run --rm web upgrade sudo docker-compose up -d (before that, I removed the previously created containers) But my SMTP settings did not appear in the Sentry mail settings panel. NOTE: I'm using the onpremise Sentry docker package. What should I do? Any help with this would be greatly appreciated.
Problem solved: I updated my Sentry version from 8.22.0 to 9.0.0 with the Dockerfile and configured the config.yml file as follows. A snippet of config.yml in the onpremise package: ############### # Mail Server # ############### mail.backend: 'smtp' # Use dummy if you want to disable email entirely mail.host: 'smtp.gmail.com' mail.port: 587 mail.username: 'account@gmail.com' mail.password: '********' mail.use-tls: true # The email address to send on behalf of mail.from: 'account@gmail.com' Dockerfile: FROM sentry:9.0-onbuild Or you can do $ git pull in the onpremise path (to get the latest changes). Finally: docker-compose build docker-compose run --rm web upgrade docker-compose up -d
Sentry
50,344,403
21
Framework / SDK versions: Flutter: 3.10.4 Dart: 3.0.3 Here goes my main() code: Future<void> main() async { //debugPaintSizeEnabled = true; //BindingBase.debugZoneErrorsAreFatal = true; WidgetsFlutterBinding.ensureInitialized(); EasyLocalization.ensureInitialized() .then((value) => Fimber.plantTree(DebugTree())) .then((value) => SentryFlutter.init( (options) { options.dsn = '***'; // Set tracesSampleRate to 1.0 to capture 100% of transactions for performance monitoring. // We recommend adjusting this value in production. options.tracesSampleRate = 1.0; //options.attachScreenshot = true; }, appRunner: () => runApp( EasyLocalization( supportedLocales: const [Locale('en', 'US'), Locale('de', 'DE')], path: '../assets/translations/', fallbackLocale: const Locale('en', 'US'), assetLoader: const CodegenLoader(), child: MyApp(), ), ), )); } And I am getting the following error, that I can't locate: Exception caught by Flutter framework ===================================================== The following assertion was thrown during runApp: Zone mismatch. The Flutter bindings were initialized in a different zone than is now being used. This will likely cause confusion and bugs as any zone-specific configuration will inconsistently use the configuration of the original binding initialization zone or this zone based on hard-to-predict factors such as which zone was active when a particular callback was set. It is important to use the same zone when calling `ensureInitialized` on the binding as when calling `runApp` later. To make this warning fatal, set BindingBase.debugZoneErrorsAreFatal to true before the bindings are initialized (i.e. as the first statement in `void main() { }`). When the exception was thrown, this was the stack: dart-sdk/lib/_internal/js_dev_runtime/patch/core_patch.dart 942:28 get current packages/flutter/src/foundation/binding.dart 497:29 <fn> packages/flutter/src/foundation/binding.dart 501:14 debugCheckZone packages/flutter/src/widgets/binding.dart 1080:17 runApp packages/ens_price_calculator/main.dart 52:30 <fn> packages/sentry/src/sentry.dart 136:26 <fn> dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 45:50 <fn> dart-sdk/lib/async/zone.dart 1407:47 _rootRunUnary dart-sdk/lib/async/zone.dart 1308:19 runUnary dart-sdk/lib/async/future_impl.dart 147:18 handleValue dart-sdk/lib/async/future_impl.dart 784:44 handleValueCallback dart-sdk/lib/async/future_impl.dart 813:13 _propagateToListeners dart-sdk/lib/async/future_impl.dart 584:5 [_completeWithValue] dart-sdk/lib/async/future_impl.dart 657:7 <fn> dart-sdk/lib/async/zone.dart 1399:13 _rootRun dart-sdk/lib/async/zone.dart 1301:19 run dart-sdk/lib/async/zone.dart 1209:7 runGuarded dart-sdk/lib/async/zone.dart 1249:23 callback dart-sdk/lib/async/schedule_microtask.dart 40:11 _microtaskLoop dart-sdk/lib/async/schedule_microtask.dart 49:5 _startMicrotaskLoop dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 177:15 <fn> ================================================================================================= Has anyone been able to get rid of this? Any suggestions appreciated.
You can find the solution at https://github.com/getsentry/sentry-dart/tree/main/flutter#usage. ensureInitialized has to be called within the runZonedGuarded import 'dart:async'; import 'package:flutter/widgets.dart'; import 'package:sentry_flutter/sentry_flutter.dart'; Future<void> main() async { // creates a zone await runZonedGuarded(() async { WidgetsFlutterBinding.ensureInitialized(); // Initialize other stuff here... await SentryFlutter.init( (options) { options.dsn = 'https://example@sentry.io/add-your-dsn-here'; }, ); // or here runApp(MyApp()); }, (exception, stackTrace) async { await Sentry.captureException(exception, stackTrace: stackTrace); }); }
Sentry
76,472,459
18
While my DSN is in a .env file and hidden from the repo browsers, I find it disturbing that my auth token is in the sentry.properties file for all to see. I'm having trouble understanding what this means and how much of a security risk it is to let people outside my organization read this file. (I have outsourced developers doing odd jobs for me on the repo.)
We recommend treating a sentry.properties file like an .env file. It is basically the same, so you should add it to, e.g., your .gitignore. The reason why it's called sentry.properties is because of the Android Gradle plugin: we needed it to be readable natively.
Sentry
48,368,670
17
Sentry can detect additional data associated with an exception and display it in the 'Additional Data' section of an event. How do you raise such an exception from Python (it's a Django app) with your own additional data fields?
I log exceptions using the logging library, so after debugging the code a bit, I noticed the extra parameter: import logging logger = logging.getLogger('my_app_name') def do_something(): try: ...  # do some stuff here that might break except Exception as e: logger.error(e, exc_info=1, extra={'extra-data': 'blah', }) Passing exc_info=1 is the same as calling logger.exception. However, exception() does not accept kwargs, which are required for using the extra parameter. These values will show up in the 'Additional Data' section of the Sentry Error dashboard.
Sentry
15,951,136
16
I have a Spring Boot application that uses Sentry for exception tracking and I'm getting some errors that look like this: ClientAbortExceptionorg.apache.catalina.connector.OutputBuffer in realWriteBytes errorjava.io.IOException: Broken pipe My understanding is that it's just a networking error and thus I should generally ignore them. What I want to do is report all other IOExceptions and log broken pipes to Librato so I can keep an eye on how many I'm getting (a spike might mean there's an issue with the client, which is also developed by me in Java): I came up with this: @ControllerAdvice @Priority(1) @Order(1) public class RestExceptionHandler { @ExceptionHandler(ClientAbortException.class) @ResponseStatus(HttpStatus.SERVICE_UNAVAILABLE) public ResponseEntity<?> handleClientAbortException(ClientAbortException ex, HttpServletRequest request) { Throwable rootCause = ex; while (ex.getCause() != null) { rootCause = ex.getCause(); } if (rootCause.getMessage().contains("Broken pipe")) { logger.info("count#broken_pipe=1"); } else { Sentry.getStoredClient().sendException(ex); } return null; } } Is that an acceptable way to deal with this problem? I have Sentry configured following the documentation this way: @Configuration public class FactoryBeanAppConfig { @Bean public HandlerExceptionResolver sentryExceptionResolver() { return new SentryExceptionResolver(); } @Bean public ServletContextInitializer sentryServletContextInitializer() { return new SentryServletContextInitializer(); } }
If you look at the class SentryExceptionResolver public class SentryExceptionResolver implements HandlerExceptionResolver, Ordered { @Override public ModelAndView resolveException(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) { Sentry.capture(ex); // null = run other HandlerExceptionResolvers to actually handle the exception return null; } @Override public int getOrder() { // ensure this resolver runs first so that all exceptions are reported return Integer.MIN_VALUE; } } By returning Integer.MIN_VALUE in getOrder it makes sure that it gets called first. Even though you have set the Priority to 1, it won't work. So you want to change your @Configuration public class FactoryBeanAppConfig { @Bean public HandlerExceptionResolver sentryExceptionResolver() { return new SentryExceptionResolver(); } @Bean public ServletContextInitializer sentryServletContextInitializer() { return new SentryServletContextInitializer(); } } to @Configuration public class FactoryBeanAppConfig { @Bean public HandlerExceptionResolver sentryExceptionResolver() { return new SentryExceptionResolver() { @Override public int getOrder() { // ensure we can get some resolver earlier than this return 10; } }; } @Bean public ServletContextInitializer sentryServletContextInitializer() { return new SentryServletContextInitializer(); } } This will ensure your handler can run earlier. In your code the loop to get rootCause is incorrect while (ex.getCause() != null) { rootCause = ex.getCause(); } This is an infinite loop, as you have used ex instead of rootCause. Even if you correct it, it can still become an infinite loop: when the exception cause returns itself, it will be stuck. I have not thoroughly tested it, but I believe it should be like below while (rootCause.getCause() != null && rootCause.getCause() != rootCause) { rootCause = rootCause.getCause(); } This is one way of solving your problem, but you need to send the exception yourself to Sentry. So there is another way to handle your requirement. Way 2: In this case you can do the whole logic in your Configuration and change it to below @Configuration public class FactoryBeanAppConfig { @Bean public HandlerExceptionResolver sentryExceptionResolver() { return new SentryExceptionResolver() { @Override public ModelAndView resolveException(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) { Throwable rootCause = ex; while (rootCause.getCause() != null && rootCause.getCause() != rootCause) { rootCause = rootCause.getCause(); } if (!rootCause.getMessage().contains("Broken pipe")) { super.resolveException(request, response, handler, ex); } return null; } }; } @Bean public ServletContextInitializer sentryServletContextInitializer() { return new SentryServletContextInitializer(); } }
Sentry
48,914,391
15
I want to setup a Sentry logger for a Django project. I will define a sentry handler and will put that handler in the root logger with error level. According to the documentation of logging module, there a special root key: root - this will be the configuration for the root logger. Processing of the configuration will be as for any logger, except that the propagate setting will not be applicable. At the same time in other places a logger with name '' is used to contain configuration for the root logger. Does this have the same effect? What is preferable? >>> import logging >>> logging.getLogger('') is logging.root True >>>
Either way will work, because the logger named '' is the root logger. Specifying the top-level key root makes it clearer what you're doing if you're configuring a lot of loggers - the '' logger configuration could be lost inside a group of others, whereas the root key is adjacent to the loggers key and so (in theory) should stand out more. To reiterate, the key named root is a top-level key; it's not under loggers.
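A small sketch (plain standard-library dictConfig, nothing Sentry-specific) showing that the two spellings address the same logger object, with the top-level root key being the more explicit option:

import logging
import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'level': 'ERROR'},
    },
    # Equivalent to configuring a logger named '' under 'loggers',
    # but it stands out as the root logger configuration.
    'root': {
        'handlers': ['console'],
        'level': 'ERROR',
    },
})

# All three names refer to the same root logger object.
assert logging.getLogger('') is logging.getLogger() is logging.root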
Sentry
20,258,986
13
We are using Sentry for our React app and faced this issue. We don't know where exactly this issue is coming from. The variable '_avast_submit' (or a similarly named variable) is not used at all in either the frontend or the backend. In the screenshot it's attributed to anonymous. This issue occurred for users of our React app on Mac and Windows, specifically using the Chrome browser.
This is due to an Avast browser extension. Unfortunately, when tracking (all) JS errors, errors originating from browser extensions can also be reported to Sentry. This is mentioned in the GitHub comments: https://github.com/getsentry/sentry/issues/9331
Sentry
51,720,715
13
Introduction Hi, I'm trying to get Sentry to recognise our sourcemaps in a react-native project, but I can't get it working. The artifacts are uploading - I can see them in the WebUI, but the events lack context/mapping: Question Can anyone see any problems in my setup? Thanks!  Background Assumptions uploading release artifacts, then deleting artifacts from web ui, then re-uploading new artifacts is valid "abs_path": "app:///index.bundle", requires the bundled js needs to be renamed to index.bundle That fact that all events have a processing error: Discarded invalid parameter 'dist' should not effect sourcemaps Once everything lines up, all my historical events for the release will benefit from the uploaded files/sourcemaps Xcode build phase During the XCode build phase we already bundle the DSym. In this script, I'm trying to pull out the bundled js and sourcemap, and uploading it. Script #!/bin/bash # WARNING: Run directly from Xcode # For testing of Xcode bundling/sentry locally, set to "true" DEBUG_FORCE_BUNDLING="true" printf "Xcode: Bundle react-native and upload to Sentry" source ../scripts/xcode/utils/node_activate.sh # Create bundle and sourcemap export NODE_BINARY=node export SENTRY_PROPERTIES=sentry.properties DEST=$CONFIGURATION_BUILD_DIR/$UNLOCALIZED_RESOURCES_FOLDER_PATH export EXTRA_PACKAGER_ARGS="--sourcemap-output $DEST/main.bundle.map.js"; if [ "${CONFIGURATION}" = "Release" ]; then FORCE_BUNDLING="$DEBUG_FORCE_BUNDLING" \ ../node_modules/@sentry/cli/sentry-cli react-native xcode \ ../node_modules/react-native/scripts/react-native-xcode.sh else FORCE_BUNDLING="$DEBUG_FORCE_BUNDLING" \ ../node_modules/@sentry/cli/sentry-cli react-native xcode \ ../node_modules/react-native/scripts/react-native-xcode.sh fi # Copy bundle & sourcemap mkdir -p ../.xcodebuild cp $DEST/main.jsbundle ../.xcodebuild/index.bundle # rename? cp $DEST/main.bundle.map.js ../.xcodebuild echo "Size of file $(wc -c ../.xcodebuild/index.bundle)" # RENAME!? echo "Size of sourcemap $(wc -c ../.xcodebuild/main.bundle.map.js)" # Upload sentry release # https://docs.sentry.io/cli/releases/#creating-releases APP_IDENTIFIER="com.mycompany.app" VERSION="1.4.21" RELEASE_NAME="$APP_IDENTIFIER-$VERSION" DISTRIBUTION_NAME="2400" function sentry_release { npx sentry-cli releases \ files $RELEASE_NAME \ $1 $2 $3 --dist $DISTRIBUTION_NAME \ --strip-prefix ".build" \ --ignore node_modules \ --rewrite "$(pwd)" } sentry_release upload ../.xcodebuild/index.bundle '~/index.bundle' echo "sentry_release upload" sentry_release upload-sourcemaps ../.xcodebuild/main.bundle.map.js echo "sentry_release upload-sourcemaps" echo `date` echo "DONE" Note: The important bit of node_modules/react-native/scripts/react-native-xcode.sh is: BUNDLE_FILE="$DEST/main.jsbundle" echo "BUNDLE_FILE: $BUNDLE_FILE" > ~/bh/react-native-native/bundle.log "$NODE_BINARY" $NODE_ARGS "$CLI_PATH" $BUNDLE_COMMAND \ $CONFIG_ARG \ --entry-file "$ENTRY_FILE" \ --platform ios \ --dev $DEV \ --reset-cache \ --bundle-output "$BUNDLE_FILE" \ --assets-dest "$DEST" \ $EXTRA_PACKAGER_ARGS Script output Xcode: Upload Debug Symbols to SentryNow using node v11.11.0 (npm v6.7.0) FORCE_BUNDLING enabled; continuing to bundle. warning: the transform cache was reset. Loading dependency graph, done. 
info Writing bundle output to:, /Users/me/Library/Developer/Xcode/DerivedData/TheApp-cvfhlrosjrphnjdcngyqxnlmjjbb/Build/Products/Debug-iphonesimulator/TheApp.app/main.jsbundle info Writing sourcemap output to:, /Users/me/Library/Developer/Xcode/DerivedData/TheApp-cvfhlrosjrphnjdcngyqxnlmjjbb/Build/Products/Debug-iphonesimulator/TheApp.app/main.bundle.map.js info Done writing bundle output info Done writing sourcemap output info Copying 109 asset files info Done copying assets Size of file 8477623 ../.xcodebuild/index.bundle Size of sourcemap 15378754 ../.xcodebuild/main.bundle.map.js A 560eaee15f0c1ccb5a57b68b5dc1b4944cff84d2 (8477623 bytes) sentry_release upload > Analyzing 1 sources > Adding source map references > Uploading source maps for release com.mycompany.app-1.4.21 Source Map Upload Report Source Maps ~/main.bundle.map.js sentry_release upload-sourcemaps Fri May 3 15:50:26 BST 2019 DONE Sentry event JSON Trimmed some breadcrumbs/callstack: // 20190503154011 // https://sentry.mycompany.com/mycompany/react-native-app/issues/4205/events/396945/json/ { "id": "1c754ed7d651445eb48ed79c995073e2", "project": 11, "release": "com.mycompany.app-1.4.21", "platform": "cocoa", "culprit": "crash(app:///index.bundle)", "message": "Error Sentry: TEST crash crash(app:///index.bundle)", "datetime": "2019-05-03T14:32:25.000000Z", "time_spent": null, "tags": [ [ "logger", "javascript" ], [ "sentry:user", "id:b5f212b4-9112-4253-86cc-11583ac1945a" ], [ "sentry:release", "com.mycompany.app-1.4.21" ], [ "level", "fatal" ], [ "device", "iPhone9,1" ], [ "device.family", "iOS" ], [ "os", "iOS 12.1" ], [ "os.name", "iOS" ], [ "os.rooted", "no" ] ], "contexts": { "device": { "model_id": "simulator", "family": "iOS", "simulator": true, "type": "device", "storage_size": 499963170816, "free_memory": 274915328, "memory_size": 17179869184, "boot_time": "2019-04-29T07:53:06Z", "timezone": "GMT+1", "model": "iPhone9,1", "usable_memory": 16463810560, "arch": "x86" }, "app": { "app_version": "1.4.21", "app_name": "MyApp", "device_app_hash": "<device_app_hash>", "app_id": "<app_id>", "app_build": "2400", "app_start_time": "2019-05-03T14:31:33Z", "app_identifier": "com.mycompany.app", "type": "default", "build_type": "simulator" }, "os": { "rooted": false, "kernel_version": "Darwin Kernel Version 17.7.0: Thu Jun 21 22:53:14 PDT 2018; root:xnu-4570.71.2~1/RELEASE_X86_64", "version": "12.1", "build": "17G65", "type": "os", "name": "iOS" } }, "errors": [ { "type": "invalid_attribute", "name": "dist" } ], "extra": { "session:duration": 52129 }, "fingerprint": [ "{{ default }}" ], "metadata": { "type": "Error", "value": "Sentry: TEST crash" }, "received": 1556893946.0, "sdk": { "client_ip": "109.69.86.251", "version": "0.42.0", "name": "sentry.javascript.react-native" }, "sentry.interfaces.Breadcrumbs": { "values": [ { "category": "console", "timestamp": 1556893700.0, "message": "%c prev state color: #9E9E9E; font-weight: bold [object Object]", "type": "default" }, { "category": "console", "timestamp": 1556893941.0, "message": "%c prev state color: #9E9E9E; font-weight: bold [object Object]", "type": "default" }, { "category": "console", "timestamp": 1556893941.0, "message": "%c next state color: #4CAF50; font-weight: bold [object Object]", "type": "default" }, { "category": "sentry", "timestamp": 1556893945.0, "message": "Error: Sentry: TEST crash", "type": "default", "level": "fatal" } ] }, "sentry.interfaces.Exception": { "exc_omitted": null, "values": [ { "stacktrace": { "frames": [ { "function": 
"callFunctionReturnFlushedQueue", "platform": "javascript", "abs_path": "app:///[native code]", "in_app": false, "filename": "app:///[native code]" }, { "function": "touchableHandlePress", "abs_path": "app:///index.bundle", "in_app": false, "platform": "javascript", "lineno": 64988, "colno": 47, "filename": "app:///index.bundle" }, { "function": "crash", "abs_path": "app:///index.bundle", "in_app": false, "platform": "javascript", "lineno": 93710, "colno": 22, "filename": "app:///index.bundle" } ], "has_system_frames": false, "frames_omitted": null }, "mechanism": null, "raw_stacktrace": null, "value": "Sentry: TEST crash", "thread_id": 99, "module": null, "type": "Error" } ] }, "sentry.interfaces.User": { "id": "b5f212b4-9112-4253-86cc-11583ac1945a" }, "type": "error", "version": "7" } Artifacts Web UI Cut and pasted from Release artifacts page: Release com.mycompany.app-1.4.21 Artifacts NAME SIZE ~/index.bundle 8.1 MB ~/main.bundle.map.js 14.7 MB sourceMappingURL $ tail -c 50 .xcodebuild/main.jsbundle //# sourceMappingURL=main.bundle.map.js
After MONTHS, we realised we had to write client code to knit in the Distribution and Release.... const configureSentry = () => { Sentry.config(config.sentry.dsn).install(); Sentry.setDist(DeviceInfo.getBuildNumber()); Sentry.setRelease(DeviceInfo.getBundleId() + '-' + DeviceInfo.getVersion()); };
Sentry
56,020,745
13
I'm currently working with Celery tasks in a Django based project. We have raven configured to send all uncaught exceptions and log messages to Sentry, as described in the documentation. Everything works pretty good, except for uncaught exceptions inside celery tasks. For example, if I run this task: @app.task def test_logging(): log.error('Testing logging inside a task') raise IndexError('Testing exception inside a task') I only see in Sentry the log.error(...) but not the IndexError uncaught exception. I've tried using a try-except block around the exception with a log.exception(...) inside and it did work, but I think it's not scalable to catch all exceptions like this. So, the problem are only uncaught exceptions that somehow are not handled properly. These are my current package versions: celery (3.1.17) raven (5.1.1) Django (1.7.1) Would you help me to move in some direction? Thanks for your time!
As described by DRC in the comment up there, we finally got to the solution using this approach: https://docs.getsentry.com/hosted/clients/python/integrations/celery/ Basically doing this: import celery class Celery(celery.Celery): def on_configure(self): if hasattr(settings, 'RAVEN_CONFIG') and settings.RAVEN_CONFIG['dsn']: import raven from raven.contrib.celery import (register_signal, register_logger_signal) client = raven.Client(settings.RAVEN_CONFIG['dsn']) register_logger_signal(client) register_signal(client) app = Celery('myapp') app.config_from_object('django.conf:settings') app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) Thanks for your time.
Sentry
27,550,916
12
How do I add custom tags so that Raven sets them and sends them to Sentry? When I used Raven in Django there were several tags like OS, Browser, etc. But I want to add such tags myself using Raven, without Django. Thanks.
If I'm correctly understanding the question, you can pass whatever you want to Sentry in the extra dictionary, see the raven docs. You can also construct messages via the capture* methods (and pass extra too): capture captureException captureMessage captureQuery By the way, Sentry gets the OS, browser, etc. parameters from the passed request object.
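A small sketch of what that can look like with the standalone raven Python client (the DSN and values are placeholders; tags as a keyword argument is supported by recent raven versions - if yours is older, passing only extra still works):

from raven import Client

client = Client('https://<key>:<secret>@app.getsentry.com/<project>')

try:
    1 / 0
except ZeroDivisionError:
    client.captureException(
        tags={'os': 'linux', 'browser': 'none'},   # custom tags shown in the Sentry UI
        extra={'job_id': 42},                      # free-form additional data
    )

client.captureMessage('Background job finished late',
                      tags={'component': 'scheduler'})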
Sentry
15,326,658
11
I may be a bit late on the train, but I wanted to use Sentry and Raven for logging in Django. I set up Sentry and Raven to the point where I ran the test for Raven and it works. So now I want to send my debug messages over to Sentry, but how would I do this? settings.py RAVEN_CONFIG = { 'dsn': 'http://code4@mydomain:9000/2', # If you are using git, you can also automatically configure the # release based on the git info. 'release': raven.fetch_git_sha(BASE_DIR), } LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'verbose': { 'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s' }, }, 'handlers': { 'sentry': { 'level': 'WARNING', 'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler', }, 'console': { 'level': 'WARNING', 'class': 'logging.StreamHandler', 'formatter': 'verbose' } }, 'loggers': { 'django': { 'handlers': ['sentry'], 'level': 'DEBUG', 'propagate': True, }, 'raven': { 'level': 'DEBUG', 'handlers': ['sentry'], 'propagate': False, }, 'sentry.errors': { 'level': 'DEBUG', 'handlers': ['sentry'], 'propagate': False, }, } } view.py import logging from raven.contrib.django.models import get_client client = get_client() client.captureException() logger = logging.getLogger(__name__) def my_view(request): [...] logger.error('There was some crazy error', exc_info=True, extra={ # Optionally pass a request and we'll grab any information we can 'request': request, }) [...] At this point it just logs errors and exceptions but won't send me this error message... What is the right way to use Raven and Sentry? The docs are totally not helping and my google-fu has failed me as well. Any tips or helpful tutorials?
You have 3 loggers defined: django, raven and sentry.errors. When you call logging.getLogger(__name__) you actually create a "throw-away" one, because your __name__ doesn't match any of the above. You should either use the raven logger... logger = logging.getLogger('raven') logger.debug('Hola!') ...or set up your own: LOGGING = { # ... 'loggers': { # ... 'yourapp': { 'level': 'DEBUG', 'handlers': ['sentry'], 'propagate': False, }, } } and then in yourapp/views.py: logger = logging.getLogger(__name__) logger.debug('Hola!')
Sentry
34,729,025
11
The documentation was not very helpful for me. Locations I've tried: the root folder (where gradle.properties and the project's build.gradle file reside), the /app folder (where the app's build.gradle file is located), and /app/src/main/kotlin. I initialize Sentry on start of my app in a class that extends android.app.Application like so: class MyApp : Application() { override fun onCreate() { super.onCreate() Sentry.init(AndroidSentryClientFactory(applicationContext)) } } And then do a test capture in one of my methods: Sentry.capture("This is a test capture using Sentry") I also tried to specify the path to the sentry.properties file explicitly in gradle.properties, with no luck. When I use the Sentry.init() method that accepts a DSN, it works, but this is not desired, because I do not want to check the DSN into VCS. I am aware of other methods of configuration, I just want to use sentry.properties though.
There are in fact two different sentry.properties files. The sentry.properties that is used by the app at runtime to configure the DSN should be placed at /app/src/main/resources (documentation). The sentry.properties that is used at build time by Gradle to generate and upload ProGuard mappings to Sentry. This should be placed in the project root. (documentation). This is optional and only relevant for apps that use ProGuard.
Sentry
49,485,428
11
The error in the title is caught by Sentry (an error tracking tool). Below is a screenshot from Sentry - showing the stack trace. Note: the script /en_US/iab.autofill.payment.js where handleMessage is located is loaded from Facebook (link here), and I couldn't find this script in the javascript bundle, nor anything related to it. I assume it's loaded by a 3rd party script - I'm using Google Tag Manager (which is also loading Facebook Pixel), Segment (loading Hotjar and Mixpanel), and Snapchat. The error started to appear without any changes in these scripts or the services that they're sending data to. Note 2: It seems that the error is triggered quite often, about 10-15% of the time. I tried to reproduce it but given that it's a handled error, it doesn't show in the dev console. Any direction on where to look would be much appreciated.
I'm seeing this a lot, and it seems to be coming 100% from users using the Facebook browser on iOS (I guess this is the browser you see when you're using the Facebook app). I tried to debug this with a snippet: <script> window.addEventListener('message', function (e) { console.log(e); JSON.parse(e.data); console.log('foo'); }, false); </script> This is from the library you linked. Assuming that e.data is a JSON string (and not, e.g., an object), parsing it without any safeguard seems to be breaking things. The second console.log doesn't fire, so I think this is causing some unexpected behaviours in my case (buttons not reacting to clicks with JS listeners, etc.). I don't know if there is a workaround or a way to protect from this in the Facebook embedded browser (I guess it's loaded there). Looking forward to hearing more info.
Sentry
64,042,411
11
I have a simple setup for a project that imitates the Next JS sentry (simple) example The problem is without sentry Enable JavaScript source fetching feature on, I cannot get the source maps to report correctly to sentry example: with the Enable JavaScript source fetching it shows correctly example (of the same error): Here is the configuration files used: // next.config.js const { parsed: localEnv } = require("dotenv").config(); const webpack = require("webpack"); const TsconfigPathsPlugin = require("tsconfig-paths-webpack-plugin"); // Package.json => "@zeit/next-source-maps": "^0.0.4-canary.1", const withSourceMaps = require("@zeit/next-source-maps")({ devtool: "source-map" }); module.exports = withSourceMaps({ target: "serverless", env: { // Will be available on both server and client // Sentry DNS configurations SENTRY_DNS: process.env.SENTRY_DNS, }, poweredByHeader: false, webpack(config, options) { config.plugins.push(new webpack.EnvironmentPlugin(localEnv)); config.resolve.plugins.push(new TsconfigPathsPlugin()); config.node = { // Fixes node packages that depend on `fs` module fs: "empty", }; if (!options.isServer) { config.resolve.alias["@sentry/node"] = "@sentry/browser"; } return config; }, }); The src/pages/_app.tsx and src/pages/_error.tsx follow the example mentioned in the repo. // tsconfig.json { "compilerOptions": { "allowJs": true, "allowSyntheticDefaultImports": true, "baseUrl": ".", "declaration": false, "emitDecoratorMetadata": true, "esModuleInterop": true, "experimentalDecorators": true, "forceConsistentCasingInFileNames": true, "isolatedModules": true, "lib": [ "dom", "dom.iterable", "esnext" ], "module": "esnext", "moduleResolution": "node", "noEmit": true, "noImplicitAny": true, "noImplicitReturns": true, "paths": { "@src/*": ["./src/*"], "@components/*": ["./src/components/*"], "@services/*": ["./src/services/*"], "@utils/*": ["./src/utils/*"] }, "removeComments": false, "resolveJsonModule": true, "skipLibCheck": true, "sourceMap": true, "sourceRoot": "/", "strict": true, "target": "es6", "jsx": "preserve" }, "exclude": [ "node_modules", "cypress", "test", "public", "out" ], "include": [ "next-env.d.ts", "**/*.ts", "**/*.tsx" ], "compileOnSave": false } The source maps are uploaded to sentry during CI build process Using this script (after next build and next export) configure-sentry-release.sh #!/bin/bash set -eo pipefail # Install Sentry-CLI curl -sL https://sentry.io/get-cli/ | bash export SENTRY_ENVIRONMENT="production" export SENTRY_RELEASE=$(sentry-cli releases propose-version) # Configure the release and upload source maps echo "=> Configure Release: $SENTRY_RELEASE :: $SENTRY_ENVIRONMENT" sentry-cli releases new $SENTRY_RELEASE --project $SENTRY_PROJECT sentry-cli releases set-commits --auto $SENTRY_RELEASE sentry-cli releases files $SENTRY_RELEASE upload-sourcemaps ".next" --url-prefix "~/_next" sentry-cli releases deploys $SENTRY_RELEASE new -e $SENTRY_ENVIRONMENT sentry-cli releases finalize $SENTRY_RELEASE Is there anyway to get the source maps to work with sentry (without the Enable JavaScript source fetching and without leaving the source map publicly available on the server)?
This can be solved by abandoning the configure-sentry-release.sh script (which uploads the source maps manually) and instead using the Sentry webpack plugin: yarn add @sentry/webpack-plugin and using the plugin in next.config.js (webpack) to upload the source maps during the build step // next.config.js ... webpack(config, options) { ... // Sentry Webpack configurations for when all the env variables are configured // Can be used to build source maps on CI services if (SENTRY_DNS && SENTRY_ORG && SENTRY_PROJECT) { config.plugins.push( new SentryWebpackPlugin({ include: ".next", ignore: ["node_modules", "cypress", "test"], urlPrefix: "~/_next", }), ); } ... More on the issue can be found here: https://github.com/zeit/next.js/issues/11642 https://github.com/zeit/next.js/issues/8873
Sentry
61,011,281
10
Using Axios interceptors to handle the 400's and 500's in a generic manner by showing an Error Popup. Usually, Sentry calls are triggered when the custom _error.js page is rendered due to a JS error. How do I log the API call errors in sentry?
You can either use an axios interceptor or write it in the catch() of your axios call itself. Interceptor axios.interceptors.response.use( (response: AxiosResponse) => response, (error: AxiosError) => { Sentry.captureException(error); return Promise.reject(error); }, ); Axios Call axios({ url, method, ... }) .then(async (response: AxiosResponse) => { resolve(response.data); }) .catch(async (error: AxiosError) => { Sentry.captureException(error); reject(error); });
Sentry
65,043,634
10
I have an installation of Symfony 4.3 and I upgraded it to 4.4.19. On my old installation Sentry was working well with excluded_exceptions. I use it like this in sentry.yaml: sentry: dsn: "https://key@sentry.io/id" options: excluded_exceptions: - App\Exception\BadArgumentException - App\Exception\BadFilterException - App\Exception\BadRequestException But when I upgraded to 4.4.19, the Symfony logs tell me that excluded_exceptions does not exist, and Sentry now receives every exception from my project. It worked well before, so I don't understand why it doesn't recognize this option. (I've seen that it was added to Sentry in v2.1.) I've tried to do a composer update sentry/sentry-symfony but nothing changes. In my composer.json I have this in the require section: "sentry/sentry": "^3.1", "sentry/sentry-symfony": "^4.0", So I don't know what to do now to fix this problem. I must have forgotten something, maybe.
Please check upgrade file for Sentry Symfony 4.0. According to this file sentry.options.excluded_exceptions configuration option was removed. To exclude exceptions you must use IgnoreErrorsIntegration service: sentry: options: integrations: - 'Sentry\Integration\IgnoreErrorsIntegration' services: Sentry\Integration\IgnoreErrorsIntegration: arguments: $options: ignore_exceptions: - App\Exception\BadArgumentException - App\Exception\BadFilterException - App\Exception\BadRequestException
Sentry
66,154,926
10
As a beginner to Sentry and web dev and debugging issues, some of the errors Sentry is picking up are completely baffling to me, including this one. Our web app seems just fine at the URL that Sentry is saying there is an error at. I'm not familiar with our app using anything related to webkit-masked-url. Is it safe to ignore this type of error?
This particular set of mysterious errors has been asked about on Sentry's GitHub, and they reference a WebKit issue. According to the comments there, they are caused by an error coming from a Safari browser extension, and can safely be ignored, or filtered.
Sentry
74,197,049
10
Sleuth is not sending the trace information to Zipkin, even though Zipkin is running fine. I am using Spring 1.5.8.RELEASE, Spring Cloud Dalston.SR4, and I have added the below dependencies in my microservices: <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-sleuth</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-zipkin</artifactId> </dependency> My logs always come out with false: [FOOMS,2e740f33c26e286d,2e740f33c26e286d,false] My Zipkin dependencies are: <dependency> <groupId>io.zipkin.java</groupId> <artifactId>zipkin-server</artifactId> </dependency> <dependency> <groupId>io.zipkin.java</groupId> <artifactId>zipkin-autoconfigure-ui</artifactId> <scope>runtime</scope> </dependency> Why am I getting false instead of true in my Sleuth log statements? The traceId and spanId are properly generated for all the calls though. My Zipkin is running on port 9411
I found that I needed to add a sampler percentage. By default zero percent of the samples are sent, and that is why Sleuth was not sending anything to Zipkin. When I added spring.sleuth.sampler.percentage=1.0 to the properties files, it started working.
Zipkin
47,670,883
10
What are the differences between Prometheus and Zabbix?
Both Zabbix and Prometheus may be used in various monitoring scenarios, and there isn't any particular specialization in either of these. Zabbix is older than Prometheus and probably more stable, with more ready-to-use solutions. Zabbix has a core written in C and a web UI based on PHP. Also it uses "agents" (client-side programs) written in C. Prometheus is written in the Go language. Zabbix stores data in an RDBMS (MySQL, PostgreSQL, Oracle, or SQLite) of the user's choice. Prometheus uses its own database embedded into the backend process (it is a non-relational database specially designed for storing monitoring data in a similar fashion to OpenTSDB's data model). Zabbix by default uses a "pull" model in which the server connects to agents on each monitored machine, and the agents periodically gather the information and send it to the server. The alternative is "active checks" mode, in which agents establish a connection with the server and send data to it when they need to. Prometheus uses a "pull" model in which the server gathers information from client machines, but the Prometheus Push Gateway may be used in cases where a "push" model is needed. Prometheus requires an application to be instrumented with the Prometheus client library (available in different programming languages) for preparing metrics. But for monitoring a system or software that can't be instrumented, there is an official "blackbox exporter" that allows probing endpoints over a range of protocols; additionally, a wide range of third-party "exporters" and tools are available to help expose metrics for Prometheus (similar to "agents" for Zabbix). One such tool is Telegraf. Zabbix uses its own TCP-based communication protocol between agents and the server. Prometheus uses HTTP with Protocol Buffers (+ a text format for ease of use with curl). Zabbix offers its own web UI for visualization. Prometheus offers a basic tool for exploring gathered data and visualizing it in simple graphs on its native server and also offers a minimal dashboard builder. But Prometheus is, and is designed to be, supported by modern visualization tools like Grafana. Zabbix has support for alerting in its core. Prometheus offers a solution for alerting that is separated from its core into the Alertmanager application.
Zabbix
35,305,170
35
I am exploring Grafana for my log management and system monitoring. I found that Kibana is also used for the same purpose. I just don't know when to use Kibana, when to use Grafana, and when to use Zabbix.
Zabbix - a comprehensive monitoring solution including data gathering, data archiving (trends, compaction, ...), a visualizer with dashboards, alerting and some management support for alert escalations. (Have a look at collectd, Prometheus, and Cacti - they are all able to gather data.) Grafana - a visualizer of data. It can read data at least from Prometheus, Graphite and Elasticsearch. Its primary goal is to visualize things in user-defined dashboards and correlate things from possibly various sources. You can, for example, see CPU load (float time-series data from Prometheus, for example) with nice annotations referring to some special event in a log file (loaded from Elasticsearch, of course). Kibana - visualization + analytics on data logged into Elasticsearch. Have a quick look at Kibana Discover to get the idea. It is a "must have" tool when you need to search your logs (various services, various servers) in one place.
Zabbix
40,882,040
16
Here is my goal: I would like to be able to report various metrics to Zabbix so that we can display the graphs on a web page. These metrics include: latency per SOAP service submission, and various query results from one or more databases. What things do I need to write and/or expose? Or is the Zabbix server going to go and get it from an exposed service somewhere? I've been advised that a script that returns a single value will work, but I'm wondering if that's the right way.
I can offer 2 suggestions to get the metrics into Zabbix: Use the zabbix_sender binary to feed the data from your script directly to the Zabbix server. This allows your script to run on its own interval and set all the parameters needed. You really only need to know the location of the zabbix_sender binary. Inside the Zabbix server interface, you would create items with the type Zabbix trapper. This is the item type which receives values sent from zabbix_sender. You make up the key name and it has to match. The second way you could do this is to specify a key name and script/binary inside the zabbix_agentd.conf file. Every time the Zabbix server requests this item the script would be called and the data from the script recorded. This allows you to set the intervals in the Zabbix item configuration rather than forcing you to run your script on its own intervals. However, you would need to add this extra bit of information to your zabbix_agentd.conf file for every host. There may be other ways to do this directly from Python (zabbix_sender bindings for Python maybe?). But these are the 2 ways I have used before which work well. This isn't really Python specific, but you should be able to use zabbix_sender in your Python scripting. Hope this information helps! Update: I also remembered that Zabbix was working on/has an API (JSON-RPC style). But the documentation site is down at the moment and I am not sure if the API is for submitting item data or not. Here is the Wiki on the API: http://www.zabbix.com/wiki/doc/api And a project for a Python API: https://github.com/gescheit/scripts/tree/master/zabbix/ There seems to be little documentation on the API as it is new as of Zabbix version 1.8
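As a rough illustration of the first suggestion, here is a small Python sketch that shells out to zabbix_sender (the server address, hostname and item key are placeholders; the item must already exist on the Zabbix server as a "Zabbix trapper" item with that key):

import subprocess

def send_to_zabbix(server, host, key, value):
    # -z: Zabbix server, -s: host name as configured in Zabbix,
    # -k: trapper item key, -o: the value to store.
    subprocess.check_call([
        "zabbix_sender",
        "-z", server,
        "-s", host,
        "-k", key,
        "-o", str(value),
    ])

# e.g. report a SOAP latency measurement gathered by your script
send_to_zabbix("zabbix.example.com", "app-server-01", "soap.latency", 0.42)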
Zabbix
6,348,053
14
My goal: I would like to extract the graphs associated with hosts in .png format. My Google research says there is no Zabbix API designed to do this task, so a few blogs advised using chart2.php & cURL. Can someone explain to me how to go about it (detailed steps)? Note: Sorry, I have never worked with PHP or cURL. When I tried curl https://example.com/zabbix/chart2.php?graphid=1552&width=300&height=300 I got this, but the link doesn't work: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>302 Found</title> </head><body> <h1>Found</h1> <p>The document has moved <a href="/zabbix/openid?graphid=1552&amp;modauthopenid.referrer=https%3A%2F%2Fexample.com%2Fzabbix%2Fchart2.php%3Fgraphid%3D1552">here</a>.</p> <hr> <address>Apache/2.2.3 (Red Hat) Server at example.com Port 80</address> </body></html> Also, how can I incorporate this with my Zabbix API (Java) call?
This works with the normal password authentication; you will need to adapt it to OpenID, which I don't use, and you will most certainly have to change options for this to work with curl. 1. wget --save-cookies=z.coo -4 --keep-session-cookies -O - -S --post-data='name=(a zabbix username)&password=(password)&enter=Enter' 'http://example.com/zabbix/index.php?login=1' 2. wget -4 --load-cookies=z.coo -O result.png 'http://example.com/zabbix/chart2.php?graphid=410&width=1778&period=102105&stime=20121129005934' The first one posts the authentication and saves the cookie; the second loads the same cookie file and retrieves the PNG. You will most certainly want to implement it without using the shell, in a language of your preference and with Zabbix's JSON-RPC API, of which there are plenty of client libraries already. Though AFAIK you will still have to log in like this to get the graph's image, at least for the time being. EDIT: https://support.zabbix.com/browse/ZBXNEXT-562 is the one to vote for (or start working on).
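For reference, here is a small Python sketch of the same two-step flow using requests instead of wget (the URL, credentials and graph parameters are placeholders, and as noted above the login step differs if your Zabbix frontend uses OpenID):

import requests

ZABBIX_URL = "http://example.com/zabbix"

session = requests.Session()

# Step 1: authenticate against the web frontend; the session keeps the cookie.
session.post(
    f"{ZABBIX_URL}/index.php?login=1",
    data={"name": "a zabbix username", "password": "password", "enter": "Enter"},
)

# Step 2: fetch the rendered graph as a PNG using the same session cookie.
resp = session.get(
    f"{ZABBIX_URL}/chart2.php",
    params={"graphid": 410, "width": 1778, "period": 102105, "stime": "20121129005934"},
)

with open("result.png", "wb") as fh:
    fh.write(resp.content)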
Zabbix
13,653,853
12
I have a .NET app that must send data to a Zabbix server. How to do that?
This is a sample .NET library for connecting to the Zabbix API: https://github.com/p1nger/ODZL
Zabbix
2,373,705
10
I have read the Prometheus documentation carefully, but it's still a bit unclear to me, so I am here to get confirmation about my understanding. (Please note that for the sake of the simplest examples possible I have used one second for the scraping interval and time range - even if that's not possible in practice.) Suppose we scrape a counter every second and the counter's value is 30 right now. We have the following time series for that:

second  counter_value  increase calculated by hand (call it ICH from now on)
1       1              1
2       3              2
3       6              3
4       7              1
5       10             3
6       14             4
7       17             3
8       21             4
9       25             4
10      30             5

We want to run some queries on this dataset. 1. rate() The official documentation states: "rate(v range-vector): calculates the per-second average rate of increase of the time series in the range vector." In layman's terms this means that we will get the increase for every second, and the value for the given second will be the average increment in the given range? Here is what I mean: rate(counter[1s]): will match ICH because the average will be calculated from one value only. rate(counter[2s]): will get the average from the increment over 2 seconds and distribute it among the seconds. So in the first 2 seconds we got a total increment of 3, which means the average is 1.5/sec. Final result:

second  result
1       1.5
2       1.5
3       2
4       2
5       3.5
6       3.5
7       3.5
8       3.5
9       4.5
10      4.5

rate(counter[5s]): will get the average from the increment over 5 seconds and distribute it among the seconds. The same as for [2s], but we calculate the average from the total increment over 5 seconds. Final result:

second  result
1       2
2       2
3       2
4       2
5       2
6       4
7       4
8       4
9       4
10      4

So the higher the time range, the smoother the result we will get, and the sum of these increases will match the actual counter. 2. increase() The official documentation states: "increase(v range-vector): calculates the increase in the time series in the range vector." For me this means it won't distribute the average among the seconds, but instead will show the single increment for the given range (with extrapolation). increase(counter[1s]): In my terms this will match the ICH and the rate for 1s, just because the total range and rate's base granularity match. increase(counter[2s]): The first 2 seconds gave us an increment of 3 in total, so the 2nd second will get the value of 3, and so on...

second  result
1       3*
2       3
3       4*
4       4
5       7*
6       7
7       7*
8       7
9       9*
10      9

*In my terms these values are the extrapolated values to cover every second. Do I understand it well or am I far from it?
In an ideal world (where your samples' timestamps are exactly on the second and your rule evaluation happens exactly on the second) rate(counter[1s]) would return exactly your ICH value and rate(counter[5s]) would return the average of that ICH and the previous 4. Except the ICH at second 1 is 0, not 1, because no one knows when your counter was zero: maybe it incremented right there, maybe it got incremented yesterday, and stayed at 1 since then. (This is the reason why you won't see an increase the first time a counter appears with a value of 1 -- because your code just created and incremented it.) increase(counter[5s]) is exactly rate(counter[5s]) * 5 (and increase(counter[2s]) is exactly rate(counter[2s]) * 2). Now what happens in the real world is that your samples are not collected exactly every second on the second and rule evaluation doesn't happen exactly on the second either. So if you have a bunch of samples that are (more or less) 1 second apart and you use Prometheus' rate(counter[1s]), you'll get no output. That's because what Prometheus does is it takes all the samples in the 1 second range [now() - 1s, now()] (which would be a single sample in the vast majority of cases), tries to compute a rate and fails. If you query rate(counter[5s]) OTOH, Prometheus will pick all the samples in the range [now() - 5s, now] (5 samples, covering approximately 4 seconds on average, say [t1, v1], [t2, v2], [t3, v3], [t4, v4], [t5, v5]) and (assuming your counter doesn't reset within the interval) will return (v5 - v1) / (t5 - t1). I.e. it actually computes the rate of increase over ~4s rather than 5s. increase(counter[5s]) will return (v5 - v1) / (t5 - t1) * 5, so the rate of increase over ~4 seconds, extrapolated to 5 seconds. Due to the samples not being exactly spaced, both rate and increase will often return floating point values for integer counters (which makes obvious sense for rate, but not so much for increase).
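A tiny numeric sketch (plain Python, using sample values from the question's table) of the simplified formulas described above - note that real Prometheus additionally extrapolates towards the window boundaries, so actual results can differ slightly:

# Samples inside a 5s window ending at t=10: (timestamp, counter_value)
samples = [(6, 14), (7, 17), (8, 21), (9, 25), (10, 30)]

(t_first, v_first), (t_last, v_last) = samples[0], samples[-1]

# rate(counter[5s]) ~ increase between the first and last sample in the window,
# divided by the time actually covered by those samples (~4s here, not 5s).
rate_5s = (v_last - v_first) / (t_last - t_first)   # (30 - 14) / (10 - 6) = 4.0

# increase(counter[5s]) ~ that per-second rate scaled back up to the full 5s window.
increase_5s = rate_5s * 5                            # 20.0

print(rate_5s, increase_5s)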
Prometheus
54,494,394
212
I need to show, in Grafana, a panel with the number of requests in the period of time selected in the upper right corner. For this I need to solve 2 issues; I will ask the Prometheus question here and the Grafana question in another post. If I have a Counter http_requests_total, how can I build a query to get an integer with the total number of requests during a period of time (for example: 24h)?
What you need is the increase() function, that will calculate the difference between the counter values at the start and at the end of the specified time interval. It also correctly handles counter resets during that time period (if any). increase(http_requests_total[24h]) If you have multiple counters http_requests_total (e.g. from multiple instances) and you need to get the cumulative count of requests, use the sum() operator: sum(increase(http_requests_total[24h])) See also my answer to that part of the question about using Grafana's time range selection in queries.
Prometheus
47,138,461
196
I'm developing something that needs Prometheus to persist its data between restarts. Having followed the instructions $ docker volume create a-new-volume $ docker run \ --publish 9090:9090 \ --volume a-new-volume:/prometheus-data \ --volume "$(pwd)"/prometheus.yml:/etc/prometheus/prometheus.yml \ prom/prometheus I have a valid prometheus.yml in the right directory on the host machine and it's being read by Prometheus from within the container. I'm just scraping a couple of HTTP endpoints for testing purposes at the moment. But when I restart the container it's empty, no data from the previous run. What am I missing from my docker run ... command to persist the data into the a-new-volume volume?
Use the default data dir, which is /prometheus. To do that, use this line instead of what you have in your command: ... --volume a-new-volume:/prometheus \ ... Found here: https://github.com/prometheus/prometheus/blob/master/Dockerfile Surprisingly, it is not mentioned in the image docs.
Prometheus
50,009,065
79
Following the Prometheus webpage, one main difference between Prometheus and InfluxDB is the use case: while Prometheus stores time series only, InfluxDB is better geared towards storing individual events. Since there was some major work done on the storage engine of InfluxDB, I wonder if this is still true. I want to set up a time series database and, apart from the push/pull model (and probably a difference in performance), I can see no big thing which separates the two projects. Can someone explain the difference in use cases?
InfluxDB CEO and developer here. The next version of InfluxDB (0.9.5) will have our new storage engine. With that engine we'll be able to efficiently store either single event data or regularly sampled series. i.e. Irregular and regular time series. InfluxDB supports int64, float64, bool, and string data types using different compression schemes for each one. Prometheus only supports float64. For compression, the 0.9.5 version will have compression competitive with Prometheus. For some cases we'll see better results since we vary the compression on timestamps based on what we see. Best case scenario is a regular series sampled at exact intervals. In those by default we can compress 1k points timestamps as an 8 byte starting time, a delta (zig-zag encoded) and a count (also zig-zag encoded). Depending on the shape of the data we've seen < 2.5 bytes per point on average after compactions. YMMV based on your timestamps, the data type, and the shape of the data. Random floats with nanosecond scale timestamps with large variable deltas would be the worst, for instance. The variable precision in timestamps is another feature that InfluxDB has. It can represent second, millisecond, microsecond, or nanosecond scale times. Prometheus is fixed at milliseconds. Another difference is that writes to InfluxDB are durable after a success response is sent to the client. Prometheus buffers writes in memory and by default flushes them every 5 minutes, which opens a window of potential data loss. Our hope is that once 0.9.5 of InfluxDB is released, it will be a good choice for Prometheus users to use as long term metrics storage (in conjunction with Prometheus). I'm pretty sure that support is already in Prometheus, but until the 0.9.5 release drops it might be a bit rocky. Obviously we'll have to work together and do a bunch of testing, but that's what I'm hoping for. For single server metrics ingest, I would expect Prometheus to have better performance (although we've done no testing here and have no numbers) because of their more constrained data model and because they don't append writes to disk before writing out the index. The query language between the two are very different. I'm not sure what they support that we don't yet or visa versa so you'd need to dig into the docs on both to see if there's something one can do that you need. Longer term our goal is to have InfluxDB's query functionality be a superset of Graphite, RRD, Prometheus and other time series solutions. I say superset because we want to cover those in addition to more analytic functions later on. It'll obviously take us time to get there. Finally, a longer term goal for InfluxDB is to support high availability and horizontal scalability through clustering. The current clustering implementation isn't feature complete yet and is only in alpha. However, we're working on it and it's a core design goal for the project. Our clustering design is that data is eventually consistent. To my knowledge, Prometheus' approach is to use double writes for HA (so there's no eventual consistency guarantee) and to use federation for horizontal scalability. I'm not sure how querying across federated servers would work. Within an InfluxDB cluster, you can query across the server boundaries without copying all the data over the network. That's because each query is decomposed into a sort of MapReduce job that gets run on the fly. There's probably more, but that's what I can think of at the moment.
Prometheus
33,350,314
77
I'm making a Grafana dashboard and want a panel that reports the latest version of our app. The version is reported as a label in the app_version_updated (say) metric like so: app_version_updated{instance="eu99",version="1.5.0-abcdefg"} I've tried a number of Prometheus queries to extract the version label as a string from the latest member of this time series, to no effect. For example, the query count(app_version_updated) by (version) returns a {version="1.5.0-abcdefg"} element with a value of 1. When put in a Grafana dashboard in a single value panel, this doesn't display the version string but instead the count value (1). How can I construct a Prometheus query that returns the version string?
My answer tries to elaborate on Carl's answer. I assume that the GUI layout may have changed a little since 2016, so it took me a while to find the "name" option. Assuming you have a metric as follows: # HELP db2_prometheus_adapter_info Information on the state of the DB2-Prometheus-Adapter # TYPE db2_prometheus_adapter_info gauge db2_prometheus_adapter_info{app_state="UP"} 1.0 and you would like to show the value of the label app_state. Follow these steps: Create a "SingleStat" visualization. Go to the "Queries" tab: Enter the name (here db2_prometheus_adapter_info) of the metric. Enter the label name as the legend using the {{[LABEL]}} notation (here {{app_state}}). Activate the "instant" option. Go to the "Visualization" tab: Choose the value "Name" under "Value - Stat". Note on the "Instant" setting: This setting switches from a range query to a simplified query only returning the most recent value of the metric (also see What does the "instant" checkbox in Grafana graphs based on prometheus do?). If not activated, the panel will show an error as soon as there is more than one distinct value for the label in the history of the metric. For a "normal" metric you would remedy this by choosing "current" in the "Value - Stat" option. But doing so here prevents your label value from being shown.
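For context, label-carrying metrics like the one above are commonly produced as "info"-style metrics; a minimal sketch using the official Python client (illustrative only — the metric and label names here are made up, this is not the asker's exporter):

# Sketch: exposing an "info"-style metric whose interesting value lives in a label.
# Assumes the prometheus_client package; names are illustrative.
from prometheus_client import Info, start_http_server
import time

# Produces: my_app_info{app_state="UP", version="1.5.0"} 1.0
app_info = Info("my_app", "Information on the state of the application")
app_info.info({"app_state": "UP", "version": "1.5.0"})

if __name__ == "__main__":
    start_http_server(8000)   # metrics served on :8000/metrics
    while True:
        time.sleep(60)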
Prometheus
38,525,891
65
I have job definition as follows: - job_name: 'test-name' static_configs: - targets: [ '192.168.1.1:9100', '192.168.1.1:9101', '192.168.1.1:9102' ] labels: group: 'development' Is there any way to annotate targets with labels? For instance, I would like to add 'service-1' label to '192.168.1.1:9100', 'service-2' to '192.168.1.1:9101' etc.
I had the same question before. Here is my solution: use job_name as the group label, and add more target entries to separate the instances and attach labels. For you the code may look like this: - job_name: 'development' static_configs: - targets: [ '192.168.1.1:9100' ] labels: service: '1' - targets: [ '192.168.1.1:9101' ] labels: service: '2'
Prometheus
49,829,423
65
I want to count number of unique label values. Kind of like select count (distinct a) from hello_info For example if my metric 'hello_info' has labels a and b. I want to count number of unique a's. Here the count would be 3 for a = "1", "2", "3". hello_info(a="1", b="ddd") hello_info(a="2", b="eee") hello_info(a="1", b="fff") hello_info(a="3", b="ggg")
count(count by (a) (hello_info)) First you want an aggregator with a result per value of a, and then you can count them.
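To make the nesting concrete, here is a rough plain-Python analogue of what count(count by (a) (hello_info)) computes (purely illustrative; this is not how PromQL evaluates internally):

# Rough analogue of count(count by (a) (hello_info)).
# The inner "count by (a)" groups series by label a; the outer "count"
# counts how many groups there are, i.e. the number of distinct a values.
series = [
    {"a": "1", "b": "ddd"},
    {"a": "2", "b": "eee"},
    {"a": "1", "b": "fff"},
    {"a": "3", "b": "ggg"},
]

inner = {}                      # count by (a)
for labels in series:
    inner[labels["a"]] = inner.get(labels["a"], 0) + 1

outer = len(inner)              # count(...) over the grouped result
print(inner)                    # {'1': 2, '2': 1, '3': 1}
print(outer)                    # 3 distinct values of a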
Prometheus
51,882,134
63
I want to calculate the cpu usage of all pods in a kubernetes cluster. I found two metrics in prometheus may be useful: container_cpu_usage_seconds_total: Cumulative cpu time consumed per cpu in seconds. process_cpu_seconds_total: Total user and system CPU time spent in seconds. Cpu Usage of all pods = increment per second of sum(container_cpu_usage_seconds_total{id="/"})/increment per second of sum(process_cpu_seconds_total) However, I found every second's increment of container_cpu_usage{id="/"} larger than the increment of sum(process_cpu_seconds_total). So the usage may be larger than 1...
This is what I'm using to get CPU usage at the cluster level: sum (rate (container_cpu_usage_seconds_total{id="/"}[1m])) / sum (machine_cpu_cores) * 100 I also track the CPU usage for each pod: sum (rate (container_cpu_usage_seconds_total{image!=""}[1m])) by (pod_name) I have a complete kubernetes-prometheus solution on GitHub which may help you with more metrics: https://github.com/camilb/prometheus-kubernetes
Prometheus
40,327,062
61
If I have a metric with the following labels: my_metric{group="group a"} 100 my_metric{group="group b"} 100 my_metric{group="group c"} 100 my_metric{group="misc group a"} 1 my_metric{group="misc group b"} 2 my_metric{group="misc group c"} 1 my_metric{group="misc group d"} 1 Is it possible to do a query or even a label_replace that combines the 'misc' groups together? (I realize that the metric cardinality needs to be improved, and I've updated the app to fix it. However it left me with this question for if I wanted to fix the metrics via a query later)
It's even easier sum by (group) (my_metric)
Prometheus
45,154,993
58
I have Prometheus server installed on my AWS instance, but the data is being removed automatically after 15 days. I need to have data for a year or months. Is there anything I need to change in my prometheus configuration? Or do I need any extensions like Thanos? I am new to Prometheus so please be easy on the answers.
Edit the prometheus.service file with vi /etc/systemd/system/prometheus.service and add "--storage.tsdb.retention.time=1y" below the "ExecStart=/usr/local/bin/prometheus \" line. So the config will look like below for 1 year of data retention. [Unit] Description=Prometheus Wants=network-online.target After=network-online.target [Service] User=prometheus Group=prometheus Type=simple ExecStart=/usr/local/bin/prometheus \ --config.file /etc/prometheus/prometheus.yml \ --storage.tsdb.path /var/lib/prometheus/ \ --web.console.templates=/etc/prometheus/consoles \ --web.console.libraries=/etc/prometheus/console_libraries \ --web.external-url=http://XX.XX.XX.XX:9090 \ --storage.tsdb.retention.time=1y [Install] WantedBy=multi-user.target After editing the unit file, run systemctl daemon-reload and restart the Prometheus service so the new retention flag takes effect.
Prometheus
59,298,811
53
I'm monitoring docker containers via Prometheus.io. My problem is that I'm just getting cpu_user_seconds_total or cpu_system_seconds_total. How to convert this ever-increasing value to a CPU percentage? Currently I'm querying: rate(container_cpu_user_seconds_total[30s]) But I don't think that it is quite correct (comparing to top). How to convert cpu_user_seconds_total to CPU percentage? (Like in top)
Rate returns a per second value, so multiplying by 100 will give a percentage: rate(container_cpu_user_seconds_total[30s]) * 100
Prometheus
34,923,788
51
I have Prometheus configuration with many jobs where I am scraping metrics over HTTP. But I have one job where I need to scrape the metrics over HTTPS. When I access: https://ip-address:port/metrics I can see the metrics. The job that I have added in the prometheus.yml configuration is: - job_name: 'test-jvm-metrics' scheme: https static_configs: - targets: ['ip:port'] When I restart the Prometheus I can see an error on my target that says: context deadline exceeded I have read that maybe the scrape_timeout is the problem, but I have set it to 50 sec and still the same problem. What can cause this problem and how to fix it? Thank you!
Probably the default scrape_timeout value is too short for you [ scrape_timeout: <duration> | default = 10s ] Set a bigger value for scrape_timeout. scrape_configs: - job_name: 'prometheus' scrape_interval: 5m scrape_timeout: 1m Take a look here https://github.com/prometheus/prometheus/issues/1438
Prometheus
49,817,558
49
I need to write a query that use any of the different jobs I define. {job="traefik" OR job="cadvisor" OR job="prometheus"} Is it possible to write logical binary operators?
Prometheus has an or logical binary operator, but what you're asking about here is vector selectors. You can use a regex for this: {job=~"traefik|cadvisor|prometheus"}. However, the fact that you want to do this at all is usually a design smell.
Prometheus
43,134,060
48
I have no clue what the option "instant" means in Grafana when creating graph with Prometheus. Any ideas?
It uses the query API endpoint rather than the query_range API endpoint on Prometheus, which is more efficient if you only care about the end of your time range and don't want to pull in data that Grafana is going to throw away again.
Prometheus
51,728,031
47
I'm trying to configure Prometheus and Grafana with my Hyperledger fabric v1.4 network to analyze the peer and chaincode mertics. I've mapped peer container's port 9443 to my host machine's port 9443 after following this documentation. I've also changed the provider entry to prometheus under metrics section in core.yml of peer. I've configured prometheus and grafana in docker-compose.yml in the following way. prometheus: image: prom/prometheus:v2.6.1 container_name: prometheus volumes: - ./prometheus/:/etc/prometheus/ - prometheus_data:/prometheus command: - '--config.file=/etc/prometheus/prometheus.yml' - '--storage.tsdb.path=/prometheus' - '--web.console.libraries=/etc/prometheus/console_libraries' - '--web.console.templates=/etc/prometheus/consoles' - '--storage.tsdb.retention=200h' - '--web.enable-lifecycle' restart: unless-stopped ports: - 9090:9090 networks: - basic labels: org.label-schema.group: "monitoring" grafana: image: grafana/grafana:5.4.3 container_name: grafana volumes: - grafana_data:/var/lib/grafana - ./grafana/datasources:/etc/grafana/datasources - ./grafana/dashboards:/etc/grafana/dashboards - ./grafana/setup.sh:/setup.sh entrypoint: /setup.sh environment: - GF_SECURITY_ADMIN_USER={ADMIN_USER} - GF_SECURITY_ADMIN_PASSWORD={ADMIN_PASS} - GF_USERS_ALLOW_SIGN_UP=false restart: unless-stopped ports: - 3000:3000 networks: - basic labels: org.label-schema.group: "monitoring" When I curl 0.0.0.0:9443/metrics on my remote centos machine, I get all the list of metrics. However, when I run Prometheus with the above configuration, it throws the error Get http://localhost:9443/metrics: dial tcp 127.0.0.1:9443: connect: connection refused. This is what my prometheus.yml looks like. global: scrape_interval: 15s evaluation_interval: 15s scrape_configs: - job_name: 'prometheus' scrape_interval: 10s static_configs: - targets: ['localhost:9090'] - job_name: 'peer_metrics' scrape_interval: 10s static_configs: - targets: ['localhost:9443'] Even, when I go to endpoint http://localhost:9443/metrics in my browser, I get all the metrics. What am I doing wrong here. How come Prometheus metrics are being shown on its interface and not peer's?
Since the targets are not running inside the Prometheus container, they cannot be accessed through localhost. You need to access them through the host's private IP or by replacing localhost with docker.for.mac.localhost or host.docker.internal. On Windows: host.docker.internal (tested on Windows 10 and 11). On Mac: docker.for.mac.localhost
Prometheus
54,397,463
42
I was curious concerning the workings of Prometheus. Using the Prometheus interface I am able to see a drop-down list which I assume contains all available metrics. However, I am not able to access the metrics endpoint which lists all of the scraped metrics. The http://targethost:9090/metrics endpoint only displays the metrics concerning the Prometheus server itself. Is it possible to access a similar endpoint which lists all available metrics. I could perform a query based on {__name__=~".+"} but I would prefer to avoid this option.
The endpoint for that is http://localhost:9090/api/v1/label/__name__/values API Reference
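If you want the same list programmatically, here is a small sketch against that HTTP API endpoint (assuming a Prometheus server reachable at localhost:9090 and the requests library installed):

# Sketch: fetch all metric names via the Prometheus HTTP API.
# Assumes a Prometheus server at localhost:9090 and `pip install requests`.
import requests

resp = requests.get("http://localhost:9090/api/v1/label/__name__/values")
resp.raise_for_status()
metric_names = resp.json()["data"]   # the API wraps results as {"status": ..., "data": [...]}
for name in sorted(metric_names):
    print(name)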
Prometheus
58,319,911
41
Every instance of my application has a different URL. How can I configure prometheus.yml so that it takes path of a target along with the host name? scrape_configs: - job_name: 'example-random' # Override the global default and scrape targets from this job every 5 seconds. scrape_interval: 5s static_configs: - targets: ['localhost:8090','localhost:8080'] labels: group: 'dummy'
You currently can't configure the metrics_path per target within a job but you can create separate jobs for each of your targets so you can define metrics_path per target. Your config file would look something like this: scrape_configs: - job_name: 'example-target-1' scrape_interval: 5s metrics_path: /target-1-path-to-metrics static_configs: - targets: ['localhost:8090'] labels: group: 'dummy' - job_name: 'example-target-2' scrape_interval: 5s metrics_path: /totally-different-path-for-target-2 static_configs: - targets: ['localhost:8080'] labels: group: 'dummy-2'
Prometheus
44,927,130
40
I am using Prometheus to monitor my Kubernetes cluster. I have set up Prometheus in a separate namespace. I have multiple namespaces and multiple pods are running. Each pod container exposes a custom metrics at this end point, :80/data/metrics . I am getting the Pods CPU, memory metrics etc, but how to configure Prometheus to pull data from :80/data/metrics in each available pod ? I have used this tutorial to set up Prometheus, Link
You have to add these three annotations to your pods: prometheus.io/scrape: 'true' prometheus.io/path: '/data/metrics' prometheus.io/port: '80' How will it work? Look at the kubernetes-pods job of config-map.yaml you are using to configure prometheus, - job_name: 'kubernetes-pods' kubernetes_sd_configs: - role: pod relabel_configs: - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] action: keep regex: true - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] action: replace regex: ([^:]+)(?::\d+)?;(\d+) replacement: $1:$2 target_label: __address__ - action: labelmap regex: __meta_kubernetes_pod_label_(.+) - source_labels: [__meta_kubernetes_namespace] action: replace target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_pod_name] action: replace target_label: kubernetes_pod_name Check these three relabel configurations - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] action: keep regex: true - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] action: replace regex: ([^:]+)(?::\d+)?;(\d+) replacement: $1:$2 target_label: __address__ Here, __metrics_path__ and port and whether to scrape metrics from this pod are being read from pod annotations. For more details on how to configure Prometheus see here.
Prometheus
53,365,191
39
I have Prometheus scraping metrics from node exporters on several machines with a config like this: scrape_configs: - job_name: node_exporter static_configs: - targets: - 1.2.3.4:9100 - 2.3.4.5:9100 - 3.4.5.6:9100 When viewed in Grafana, these instances are assigned rather meaningless IP addresses; instead, I would prefer to see their hostnames. I think you should be able to relabel the instance label to match the hostname of a node, so I tried using relabelling rules like this, to no effect whatsoever: relabel_configs: - source_labels: ['nodename'] target_label: 'instance' I can manually relabel every target, but that requires hardcoding every hostname into Prometheus, which is not really nice. I see that the node exporter provides the metric node_uname_info that contains the hostname, but how do I extract it from there? node_uname_info{domainname="(none)",machine="x86_64",nodename="myhostname",release="4.13.0-32-generic",sysname="Linux",version="..."} 1
I just came across this problem, and the solution is to use group_left. You can't relabel with a nonexistent value in the request; you are limited to the parameters that you gave to Prometheus or those that exist in the module used for the request (gcp, aws, ...). So the solution I used is to combine an existing value containing what we want (the hostname) with a metric from the node exporter. The answer exists inside the node_uname_info metric, which contains the nodename value. I used the answer to this post as a model for my request: https://stackoverflow.com/a/50357418. The solution is this one: node_memory_Active_bytes * on(instance) group_left(nodename) node_uname_info With this, the node_memory_Active_bytes metric, which contains only instance and job labels by default, gets an additional nodename label that you can use in the description field of Grafana. Hope that this will help others.
Prometheus
49,896,956
39
I'm using Grafana with Prometheus and I'd like to build a query that depends on the selected period of time selected in the upper right corner of the screen. Is there any variable (or something like that) to use in the query field? In other words, If I select 24hs I'd like to use that data in the query.
There are two ways that I know: You can use the $__interval variable like this: increase(http_requests_total[$__interval]) There is a drawback that the $__interval variable's value is adjusted by resolution of the graph, but this may also be helpful in some situations. This approach should fit your case better: Go to Dashboard's Templating settings, create new variable with the type of Interval. Enable "Auto Option", adjust "Step count" to be equal 1. Then ensure that the "auto" is selected in corresponding drop-down list at the top of the dashboard. Let's assume you name it timeRange, then the query will look like this: increase(http_requests_total[$timeRange]) This variable will not be adjusted by graph resolution and if you select "Last 10 hours" its value will be 10h.
Prometheus
47,141,967
38
I need to monitor very different log files for errors, success status etc. And I need to grab corresponding metrics using Prometheus and show in Grafana + set some alerting on it. Prometheus + Grafana are OK I already use them a lot with different exporters like node_exporter or mysql_exporter etc. Also alerting in new Grafana 4.x works very well. But I have quite a problem to find suitable exporter/ program which could analyze log files "on fly" and extract metrics from them. So far I tried: mtail (https://github.com/google/mtail) - works but existing version cannot easily monitor more files - in general it cannot bind specific mtail program (receipt for analysis) to some specific log file + I cannot easily add log file name into tag grok_exporter (https://github.com/fstab/grok_exporter) - works but I can extract only limited information + one instance can monitor only one log file which mean I would have to start more instances exporting on more ports and configure all off them in prometheus - which makes too many new points of failure fluentd prometheus exporter (https://github.com/kazegusuri/fluent-plugin-prometheus) - works but looks like I can extract only very simple metrics and I cannot make any advanced regexp analysis of a line(s) from log file Does any one here has a really running solution for monitoring advanced metrics from log files using "some exporter" + Prometheus + Grafana? Or instead of exporter some program from which I could grab results using Prometheus push gateway. Thanks.
Take a look at Telegraf. It supports tailing logs using the logparser and tail input plugins. To expose metrics as a Prometheus endpoint, use the prometheus_client output plugin. You may also apply on-the-fly aggregations. I've found it simpler to configure for multiple log files than grok_exporter or mtail.
Prometheus
41,160,883
38
When deciding between Counter and Gauge, Prometheus documentation states that To pick between counter and gauge, there is a simple rule of thumb: if the value can go down, it is a gauge. Counters can only go up (and reset, such as when a process restarts). They seem to cover overlapping use cases: you could use a Gauge that only ever increases. So why even create the Counter metric type in the first place? Why don't you simply use Gauges for both?
From a conceptual point of view, gauge and counter have different purposes: a gauge typically represents a state, usually with the purpose of detecting saturation; the absolute value of a counter is not really meaningful, the real purpose is rather to compute an evolution (usually a utilization) with functions like irate/rate(), increase(), etc. Those evolution operations require a reliable computation of the increase, which you could not achieve with a gauge because you need to detect resets of the value. Technically, a counter has two important properties: it always starts at 0, and it always increases (i.e. it is only incremented in the code). If the application restarts between two Prometheus scrapes, the value of the second scrape is likely to be less than the previous scrape and the increase can still be recovered (somewhat, because you'll always lose the increase between the last scrape and the reset). A simple algorithm to compute the increase of a counter between scrapes from t1 to t2 is: if counter(t2) >= counter(t1) then increase=counter(t2)-counter(t1); if counter(t2) < counter(t1) then increase=counter(t2). As a conclusion, from a technical point of view, you can use a gauge instead of a counter provided you reset it to 0 at startup and only increment it, but any violation of that contract will lead to wrong values. As a side note, I would also expect a counter implementation to use an unsigned integer representation while a gauge will rather use a floating-point representation. This has some minor impacts on the code, such as the ability to overflow to 0 automatically and better support for atomic operations on current CPUs.
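To make the contract difference concrete, here is a short sketch with the official Python client (illustrative only; the metric names are made up):

# Sketch: the differing contracts of Counter and Gauge in the Python client.
from prometheus_client import Counter, Gauge

requests_total = Counter("myapp_requests_total", "Total requests handled")
in_flight = Gauge("myapp_in_flight_requests", "Requests currently in flight")

def handle_request():
    requests_total.inc()      # counters only go up (a process restart resets them to 0)
    in_flight.inc()           # gauges represent a current state...
    try:
        pass                  # ...do the actual work here
    finally:
        in_flight.dec()       # ...and may go back down

# Note: Counter has no dec() method at all - the client library itself
# enforces the "only goes up" contract described above.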
Prometheus
58,674,087
37
I want to add my HTTPS target URL to Prometheus, an error like this appears: "https://myDomain.dev" is not a valid hostname" my domain can access and run using proxy pass Nginx with port 9100(basically I made a domain for node-exporter) my configuration prometheus.yml scrape_configs: - job_name: 'prometheus' static_configs: - targets: ['localhost:9090'] - job_name: 'domain-job' static_configs: - targets: ['https://myDomain.dev'] is there any more configuration to add?
Use the following configuration: - job_name: 'domain-job' scheme: https static_configs: - targets: ['myDomain.dev']
Prometheus
68,435,536
35
Prometheus running inside a docker container (version 18.09.2, build 6247962, docker-compose.xml below) and the scrape target is on localhost:8000 which is created by a Python 3 script. Error obtained for the failed scrape target (localhost:9090/targets) is Get http://127.0.0.1:8000/metrics: dial tcp 127.0.0.1:8000: getsockopt: connection refused Question: Why is Prometheus in the docker container unable to scrape the target which is running on the host computer (Mac OS X)? How can we get Prometheus running in docker container able to scrape the target running on the host? Failed attempt: Tried replacing in docker-compose.yml networks: - back-tier - front-tier with network_mode: "host" but then we are unable to access the Prometheus admin page at localhost:9090. Unable to find solution from similar questions Getting error "Get http://localhost:9443/metrics: dial tcp 127.0.0.1:9443: connect: connection refused" docker-compose.yml version: '3.3' networks: front-tier: back-tier: services: prometheus: image: prom/prometheus:v2.1.0 volumes: - ./prometheus/prometheus:/etc/prometheus/ - ./prometheus/prometheus_data:/prometheus command: - '--config.file=/etc/prometheus/prometheus.yml' - '--storage.tsdb.path=/prometheus' - '--web.console.libraries=/usr/share/prometheus/console_libraries' - '--web.console.templates=/usr/share/prometheus/consoles' ports: - 9090:9090 networks: - back-tier restart: always grafana: image: grafana/grafana user: "104" depends_on: - prometheus ports: - 3000:3000 volumes: - ./grafana/grafana_data:/var/lib/grafana - ./grafana/provisioning/:/etc/grafana/provisioning/ env_file: - ./grafana/config.monitoring networks: - back-tier - front-tier restart: always prometheus.yml global: scrape_interval: 15s evaluation_interval: 15s external_labels: monitor: 'my-project' - job_name: 'prometheus' scrape_interval: 5s static_configs: - targets: ['localhost:9090'] - job_name: 'rigs-portal' scrape_interval: 5s static_configs: - targets: ['127.0.0.1:8000'] Output at http://localhost:8000/metrics # HELP python_gc_objects_collected_total Objects collected during gc # TYPE python_gc_objects_collected_total counter python_gc_objects_collected_total{generation="0"} 65.0 python_gc_objects_collected_total{generation="1"} 281.0 python_gc_objects_collected_total{generation="2"} 0.0 # HELP python_gc_objects_uncollectable_total Uncollectable object found during GC # TYPE python_gc_objects_uncollectable_total counter python_gc_objects_uncollectable_total{generation="0"} 0.0 python_gc_objects_uncollectable_total{generation="1"} 0.0 python_gc_objects_uncollectable_total{generation="2"} 0.0 # HELP python_gc_collections_total Number of times this generation was collected # TYPE python_gc_collections_total counter python_gc_collections_total{generation="0"} 37.0 python_gc_collections_total{generation="1"} 3.0 python_gc_collections_total{generation="2"} 0.0 # HELP python_info Python platform information # TYPE python_info gauge python_info{implementation="CPython",major="3",minor="7",patchlevel="3",version="3.7.3"} 1.0 # HELP request_processing_seconds Time spend processing request # TYPE request_processing_seconds summary request_processing_seconds_count 2545.0 request_processing_seconds_sum 1290.4869346540017 # TYPE request_processing_seconds_created gauge request_processing_seconds_created 1.562364777766845e+09 # HELP my_inprorgress_requests CPU Load # TYPE my_inprorgress_requests gauge my_inprorgress_requests 65.0 Python3 script from prometheus_client import start_http_server, Summary, Gauge import 
random import time # Create a metric to track time spent and requests made REQUEST_TIME = Summary("request_processing_seconds", 'Time spend processing request') @REQUEST_TIME.time() def process_request(t): time.sleep(t) if __name__ == "__main__": start_http_server(8000) g = Gauge('my_inprorgress_requests', 'CPU Load') g.set(65) while True: process_request(random.random())
While not a very common use case, you can indeed connect from your container to your host. From https://docs.docker.com/docker-for-mac/networking/: "I want to connect from a container to a service on the host. The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac."
Prometheus
56,909,896
35
I want to select all metrics that don't have label "container". Is there any possibility to do that with prometheus query?
Try this: {__name__=~".+",container=""} There needs to be at least one non-empty matcher (hence the + in the __name__ regular expression, * wouldn't cut it). And the way you query for a missing label is by checking for equality with the empty string.
Prometheus
51,293,895
35
Is there a way to group all metrics of an app by metric names? A portion from a query listing all metrics for an app (i.e. {app="bar"}) : ch_qos_logback_core_Appender_all_total{affiliation="foo",app="bar", instance="baz-3-dasp",job="kubernetes-service-endpoints",kubernetes_name="bar",kubernetes_namespace="foobarz",kubernetes_node="mypaas-dev-node3.fud.com",updatedBy="janedoe"} 44 ch_qos_logback_core_Appender_debug_total{affiliation="foo",app="bar", instance="baz-3-dasp",job="kubernetes-service-endpoints",kubernetes_name="bar",kubernetes_namespace="foobarz",kubernetes_node="mypaas-dev-node23.fud.com",updatedBy="deppba"} 32 I have also tried to use wildcard in the metric name, prometheus is complaining about that. Looking at the metrics, I can see that some of them have dynamic names, most probably delivered by dropwizard metrics. What I ultimately want is a list of all available metrics.
The following query lists all available metrics: sum by(__name__)({app="bar"}) Where bar is the application name, as you can see in the log entries posted in the question.
Prometheus
49,135,746
35
I've found that for some graphs I get doubles values from Prometheus where should be just ones: Query I use: increase(signups_count[4m]) Scrape interval is set to the recommended maximum of 2 minutes. If I query the actual data stored: curl -gs 'localhost:9090/api/v1/query?query=(signups_count[1h])' "values":[ [1515721365.194, "579"], [1515721485.194, "579"], [1515721605.194, "580"], [1515721725.194, "580"], [1515721845.194, "580"], [1515721965.194, "580"], [1515722085.194, "580"], [1515722205.194, "581"], [1515722325.194, "581"], [1515722445.194, "581"], [1515722565.194, "581"] ], I see that there were just two increases. And indeed if I query for these times I see an expected result: curl -gs 'localhost:9090/api/v1/query_range?step=4m&query=increase(signups_count[4m])&start=1515721965.194&end=1515722565.194' "values": [ [1515721965.194, "0"], [1515722205.194, "1"], [1515722445.194, "0"] ], But Grafana (and Prometheus in the GUI) tends to set a different step in queries, with which I get a very unexpected result for a person unfamiliar with internal workings of Prometheus. curl -gs 'localhost:9090/api/v1/query_range?step=15&query=increase(signups_count[4m])&start=1515721965.194&end=1515722565.194' ... skip ... [1515722190.194, "0"], [1515722205.194, "1"], [1515722220.194, "2"], [1515722235.194, "2"], ... skip ... Knowing that increase() is just a syntactic sugar for a specific use-case of the rate() function, I guess this is how it is supposed to work given the circumstances. How to avoid such situations? How do I make Prometheus/Grafana show me ones for ones, and twos for twos, most of the time? Other than by increasing the scrape interval (this will be my last resort). I understand that Prometheus isn't an exact sort of tool, so it is fine with me if I would have a good number not at all times, but most of the time. What else am I missing here?
This is known as aliasing and is a fundamental problem in signal processing. You can improve this a bit by increasing your sample rate; a 4m range is a bit short with a 2m scrape interval. Try a 10m range. Here, for example, the query executed at 1515722220 only sees the 580@1515722085.194 and 581@1515722205.194 samples. That's an increase of 1 over 2 minutes, which extrapolated over 4 minutes is an increase of 2 - which is as expected. Any metrics-based monitoring system will have similar artifacts; if you want 100% accuracy you need logs.
Prometheus
48,218,950
35
We graph a timeseries with sum(increase(foo_requests_total[1m])) to show the number of foo requests per minute. Requests come in quite sporadically - just a couple of requests per day. The value that is shown in the graph is always 1.3333. Why is the value not 1? There was one request during this minute.
The challenge with calculating this number is that we only have a few data points inside a time range, and they tend not to be at the exact start and end of that time range (1 minute here). What do we do about the time between the start of the time range and the first data point, and similarly between the last data point and the end of the range? We do a small bit of extrapolation to smooth this out and produce the correct result in aggregate. For very slow-moving counters like this it can cause artifacts.
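A rough back-of-the-envelope sketch of that extrapolation, showing where a value like 1.3333 can come from with a 15s scrape interval (simplified — the real implementation also clamps how far it extrapolates near the window boundaries):

# Simplified sketch of why increase() can return 1.333 for a single increment.
# Assumes a 15s scrape interval and a 1m range; illustrative arithmetic only.
window = 60.0                                        # increase(foo[1m])
samples = [(0, 10), (15, 10), (30, 11), (45, 11)]    # (seconds, counter value)

(first_t, first_v), (last_t, last_v) = samples[0], samples[-1]
raw_increase = last_v - first_v                      # 1
covered = last_t - first_t                           # samples only cover 45s of the 60s window
extrapolated = raw_increase * window / covered       # 1 * 60 / 45
print(round(extrapolated, 4))                        # 1.3333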
Prometheus
38,665,904
35
What are the differences between Prometheus and Zabbix?
Both Zabbix and Prometheus may be used in various monitoring scenarios, and there isn't any particular specialization in either of these. Zabbix is older than Prometheus and probably more stable, with more ready-to-use solutions. Zabbix has a core written in C and a web UI based on PHP. Also it uses "agents" (client-side programs) written in C. Prometheus is written in the Go language. Zabbix stores data in an RDBMS (MySQL, PostgreSQL, Oracle, or SQLite) of the user's choice. Prometheus uses its own database embedded into the backend process (it is a non-relational database specially designed for storing monitoring data in a similar fashion to OpenTSDB's data model). Zabbix by default uses a "pull" model when a server connects to agents on each monitoring machine, and agents periodically gather the information and send it to a server. The alternative is "active checks" mode when agents establish a connection with a server and send data to it when it need. Prometheus uses a "pull" model when a server gathers information from client machines. But Prometheus Push Gateway may be used in cases when a "push" model is needed. Prometheus requires an application to be instrumented with the Prometheus client library (available in different programming languages) for preparing metrics. But for monitoring a system or software that can't be instrumented, there is an official "blackbox exporter" that allows probing endpoints over a range of protocols; additionally, a wide spread of third-party "exporters" and tools are available to help expose metrics for Prometheus (similar to "agents" for Zabbix). One such tool is Telegraf. Zabbix uses its own TCP-based communication protocol between agents and a server. Prometheus uses HTTP with Protocol Buffers (+ text format for ease of use with curl). Zabbix offers its own web UI for visualization. Prometheus offers a basic tool for exploring gathered data and visualizing it in simple graphs on its native server and also offers a minimal dashboard builder. But Prometheus is and is designed to be supported by modern visualizing tools like Grafana. Zabbix has support for alerting in its core. Prometheus offers a solution for alerting that is separated from its core into the Alertmanager application.
Prometheus
35,305,170
35
I'm attracted to prometheus by the histogram (and summaries) time-series, but I've been unsuccessful to display a histogram in either promdash or grafana. What I expect is to be able to show: a histogram at a point in time, e.g. the buckets on the X axis and the count for the bucket on the Y axis and a column for each bucket a stacked graph of the buckets such that each bucket is shaded and the total of the stack equals the inf bucket A sample metric would be the response time of an HTTP server.
Grafana v5+ provides direct support for representing Prometheus histograms as heatmaps. http://docs.grafana.org/features/panels/heatmap/#histograms-and-buckets Heatmaps are preferred over histograms because a histogram does not show you how the trend changes over time. So if you have a time-series histogram, use the heatmap panel to picture it. To get you started, here is an example (for Prometheus data): Suppose you have a histogram as follows, http_request_duration_seconds_bucket(le=0.2) 1, http_request_duration_seconds_bucket(le=0.5) 2, http_request_duration_seconds_bucket(le=1.0) 2, http_request_duration_seconds_bucket(le=+inf) 5 http_request_duration_seconds_count 5 http_request_duration_seconds_sum 3.07 You can picture this histogram data as a heatmap by using the query: sum(increase(http_request_duration_seconds_bucket[10m])) by (le), making sure to set the format as "heatmap," the legend format as {{ le }}, and setting the visualization in the panel settings to "heatmap."
Prometheus
39,135,026
34
Prometheus is built around returning a time series representation of metrics. In many cases, however, I only care about what the state of a metric is right now, and I'm having a hard time figuring out a reliable way to get the "most recent" value of a metric. Since right now it's getting metrics every 30 seconds, I tried something like this: my_metric[30s] But this feels fragile. If metrics are dated any more or less than 30 seconds between data points, then I either get back more than one or zero results. How can I get the most recent value of a metric?
All you need is my_metric, which will by default return the most recent value no more than 5 minutes old.
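Programmatically, the same "latest value" behaviour is what an instant query against the HTTP API gives you (a sketch, assuming a Prometheus server at localhost:9090 and the requests library; the metric name is just an example):

# Sketch: fetch the most recent value of a metric with an instant query.
# Assumes a Prometheus server at localhost:9090; "my_metric" is a placeholder.
import requests

resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "my_metric"},
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    timestamp, value = series["value"]   # the single most recent sample per series
    print(series["metric"], value, "at", timestamp)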
Prometheus
40,729,406
34
I try to get Total and Free disk space on my Kubernetes VM so I can display % of taken space on it. I tried various metrics that included "filesystem" in name but none of these displayed correct total disk size. Which one should be used to do so? Here is a list of metrics I tried node_filesystem_size_bytes node_filesystem_avail_bytes node:node_filesystem_usage: node:node_filesystem_avail: node_filesystem_files node_filesystem_files_free node_filesystem_free_bytes node_filesystem_readonly
According to my Grafana dashboard, the following query works nicely for alerting on available space: 100 - ((node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"} * 100) / node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"}) The formula gives the percentage of used space on the selected disk (i.e. 100 minus the available percentage), which is what you typically alert on. Make sure you include the mountpoint and fstype within the metrics.
Prometheus
57,357,532
33
ElasticSearch is a document store and more of a search engine; I think ElasticSearch is not a good choice for monitoring high-dimensional data, as it consumes a lot of resources. On the other hand, Prometheus is a TSDB which is designed for capturing high-dimensional data. Anyone experienced in this, please let me know what's the best tool to go with for container and server monitoring.
ELK is a general-purpose NoSQL stack that can be used for monitoring. We've successfully deployed one in production and used it for some aspects of our monitoring system. You can ship metrics into it (if you wish) and use it to monitor them, but it's not specifically designed to do that. Nor does the non-commercial version (version 7.9) come with an alerting system - you'll need to set up another component for that (like Sensu) or pay for an ES commercial license. Prometheus, on the other hand, is designed to be used for monitoring. Along with its metric-gathering clients (or other 3rd-party clients like Telegraf), its service discovery options (like Consul), and its Alertmanager, it is just the right tool for this job. Ultimately, both solutions can work, but in my opinion Elasticsearch will require more work and more upkeep (we found that ES clusters are a pain to maintain - but that depends on the amount of data you'll have).
Prometheus
40,793,901
33
I'm trying to write a prometheus query in grafana that will select visits_total{route!~"/api/docs/*"} What I'm trying to say is that it should select all the instances where the route doesn't match /api/docs/* (regex) but this isn't working. It's actually just selecting all the instances. I tried to force it to select others by doing this: visits_total{route=~"/api/order/*"} but it doesn't return anything. I found these operators in the querying basics page of prometheus. What am I doing wrong here?
Maybe it's because you have / in the regex. Try something like visits_total{route=~".*order.*"} and see if a result is generated or not. Also try this: visits_total{route!~"\/api\/docs\/\*"} If you want to exclude everything that has the word docs, you can use the following: visits_total{route!~".*docs.*"}
Prometheus
54,813,545
31
I'm displaying Prometheus query on a Grafana table. That's the query (Counter metric): sum(increase(check_fail{app="monitor"}[20m])) by (reason) The result is a table of failure reason and its count. The problem is that the table is also showing reasons that happened 0 times in the time frame and I don't want to display them. AFAIK it's not possible to hide them through Grafana. I know prometheus has comparison operators but I wasn't able to apply them.
I don't know how you tried to apply the comparison operators, but if I use this very similar query: sum(increase(up[1d])) by (job) I get a result of zero for all jobs that have not restarted over the past day and a non-zero result for jobs that have had instances restart. If I now tack on a != 0 to the end of it, all zero values are filtered out: sum(increase(up[1d])) by (job) != 0
Prometheus
54,762,265
31
I have a query: node_systemd_unit_state{instance="server-01",job="node-exporters",name="kubelet.service",state="active"} 1 I want the label name being renamed (or replaced) to unit_name ONLY within the node_systemd_unit_state metric. So, desired result is: node_systemd_unit_state{instance="server-01",job="node-exporters",unit_name="kubelet.service",state="active"} 1 There are many other metrics with a label name name in the node-exporters job. That's why I can't use relabel config across the job.
You can use the label_replace function in PromQL, but note that it adds the new label rather than replacing the existing one: label_replace( <vector_expr>, "<desired_label>", "$1", "<existing_label>", "(.+)" ) label_replace( node_systemd_unit_state{instance="server-01",job="node-exporters",name="kubelet.service",state="active"}, "unit_name","$1","name", "(.+)" ) So, to avoid the repetition you can aggregate away the old label: sum(label_replace( node_systemd_unit_state{instance="server-01",job="node-exporters",name="kubelet.service",state="active"}, "unit_name","$1","name", "(.+)" ) )by(unit_name)
Prometheus
54,235,797
31
I have a metric varnish_main_client_req of type counter and I want to set up an alert that triggers if the rate of requests drops/raises by a certain amount in a given time (e.g. "Amount of requests deviated in the last 2 min!"). Using the deriv() function should work much better than comparing relative values, but it can only be used with gauges. Is it possible to convert an ever increasing metric aka. counter to a rated metric aka. gauge? Query: deriv(rate(varnish_main_client_req[2m])[5m]) Expectation: Prometheus calculates the rate of client requests over the last 2 mins and returns a derivative of the resulting values over the last 5 mins. Actual result: "error": "parse error at char 48: range specification must be preceded by a metric selector, but follows a *promql.Call instead" Recording rules might be an option but it feels like a cheap workaround for something that should work with queries: my_gauge_metric = rate(some_counter_metric[2m])
Solution: It's possible with the subquery syntax (introduced in Prometheus version 2.7): deriv(rate(varnish_main_client_req[2m])[5m:10s]) Warning: These subqueries are expensive, i.e. they create very high load on Prometheus. Use recording rules when you use these queries regularly (in alerts, etc.). Subquery syntax <instant_query>[<range>:<resolution>] instant_query: a PromQL function which returns an instant vector range: the offset (back in time) at which to start the first subquery resolution: the evaluation step, i.e. the size of each of the subqueries. It returns a range vector. In the example above, Prometheus runs rate() (= instant_query) 30 times (the first from 5 minutes ago to -4:50, ..., the last -0:10 to now). The resulting range vector is input to the deriv() function. Another example (mostly available on all Prometheus instances): deriv(rate(prometheus_http_request_duration_seconds_sum{job="prometheus"}[1m])[5m:10s]) Without the subquery range ([5m:10s]), you'll get this error message: parse error at char 80: expected type range vector in call to function "deriv", got instant vector
Prometheus
40,717,605
30
I currently have the following Promql query which allow me to query the memory used by each of my K8S pods: sum(container_memory_working_set_bytes{image!="",name=~"^k8s_.*"}) by (pod_name) The pod's name is followed by a hash defined by K8S: weave-net-kxpxc weave-net-jjkki weave-net-asdkk Which all belongs to the same app: weave-net What I would like is to aggregate the memory of all pods which belongs to the same app. So, the query would sum the memory of all weave-net pods and place the result in an app called weave. Such as the result would be: {pod_name="weave-net"} 10 instead of {pod_name="weave-net-kxpxc"} 5 {pod_name="weave-net-jjkki"} 3 {pod_name="weave-net-asdkk"} 2 Is it even possible to do so, and if yes, how ?
You can use label_replace sum(label_replace(container_memory_working_set_bytes{image!="",name=~"^k8s_.*"}, "pod_set", "$1", "pod_name", "(.*)-.{5}")) by (pod_set) You will be including a new label (pod_set) that matches the first group ($1) from matching the regex over the pod_name label. Then you sum over the new label pod_set.
Prometheus
51,614,030
29
There are times when you need to divide one metric by another metric. For example, I'd like to calculate a mean latency like that: rate({__name__="hystrix_command_latency_total_seconds_sum"}[60s]) / rate({__name__="hystrix_command_latency_total_seconds_count"}[60s]) If there is no activity during the specified time period, the rate() in the divider becomes 0 and the result of division becomes NaN. If I do some aggregation over the results (avg() or sum() or whatever), the whole aggregation result becomes NaN. So I add a check for zero in divider: rate({__name__="hystrix_command_latency_total_seconds_sum"}[60s]) / (rate({__name__="hystrix_command_latency_total_seconds_count"}[60s]) > 0) This removes NaNs from the result vector. And also tears the line on the graph to shreds. Let's mark periods of inactivity with 0 value to make the graph continuous again: rate({__name__="hystrix_command_latency_total_seconds_sum"}[60s]) / (rate({__name__="hystrix_command_latency_total_seconds_count"}[60s]) > 0) or rate({__name__="hystrix_command_latency_total_seconds_count"}[60s]) > bool 0 This effectively replaces NaNs with 0, graph is continuous, aggregations work OK. But resulting query is slightly cumbersome, especially when you need to do more label filtering and do some aggregations over results. Something like that: avg( 1000 * increase({__name__=~".*_hystrix_command_latency_total_seconds_sum", command_group=~"$commandGroup", command_name=~"$commandName", job=~"$service", instance=~"$instance"}[60s]) / (increase({__name__=~".*_hystrix_command_latency_total_seconds_count", command_group=~"$commandGroup", command_name=~"$commandName", job=~"$service", instance=~"$instance"}[60s]) > 0) or increase({__name__=~".*_hystrix_command_latency_total_seconds_count", command_group=~"$commandGroup", command_name=~"$commandName", job=~"$service", instance=~"$instance"}[60s]) > bool 0 ) by (command_group, command_name) Long story short: Are there any simpler ways to deal with zeros in divider? Or any common practices?
If there is no activity during the specified time period, the rate() in the divider becomes 0 and the result of division becomes NaN. This is the correct behaviour, NaN is what you want the result to be. aggregations work OK. You can't aggregate ratios. You need to aggregate the numerator and denominator separately and then divide. So: sum by (command_group, command_name)(rate(hystrix_command_latency_total_seconds_sum[5m])) / sum by (command_group, command_name)(rate(hystrix_command_latency_total_seconds_count[5m]))
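A tiny numeric illustration (plain Python, made-up numbers) of why averaging per-series ratios gives the wrong answer while dividing aggregated sums does not:

# Why you aggregate numerator and denominator separately, then divide:
# averaging per-instance latencies ignores how much traffic each instance served.
# Numbers below are made up for illustration.
instances = [
    {"latency_sum": 10.0, "request_count": 1000},   # fast, busy instance
    {"latency_sum": 10.0, "request_count": 10},     # slow, mostly idle instance
]

avg_of_ratios = sum(i["latency_sum"] / i["request_count"] for i in instances) / len(instances)
ratio_of_sums = sum(i["latency_sum"] for i in instances) / sum(i["request_count"] for i in instances)

print(avg_of_ratios)   # 0.505   -- misleading "mean latency"
print(ratio_of_sums)   # ~0.0198 -- the true overall mean latency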
Prometheus
47,056,557
29
We have a hierachical prometheus setup with some server scraping others. We'd like to have some servers scrape all metrics from others. Currently we try to use match[]="{__name__=~".*"}" as a metric selector, but this gives the error parse error at char 16: vector selector must contain at least one non-empty matcher. Is there a way to scrape all metrics from a remote prometheus without listing each (prefix) as a match selector?
Yes, you can do: match[]="{__name__=~".+"}" (note the + instead of * to not match the empty string). Prometheus requires at least one matcher in a label matcher set that doesn't match everything.
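For completeness, a sketch of hitting the /federate endpoint with that matcher from code (assuming a Prometheus server at localhost:9090; requests takes care of URL-encoding the match[] parameter):

# Sketch: pull every series from a Prometheus server via /federate.
# Assumes the server is reachable at localhost:9090 and `pip install requests`.
import requests

resp = requests.get(
    "http://localhost:9090/federate",
    params={"match[]": '{__name__=~".+"}'},
)
resp.raise_for_status()
print(resp.text[:500])   # exposition-format samples, ready to be scraped/re-ingested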
Prometheus
39,249,048
29
I'm running prometheus inside kubernetes cluster. I need to send queries to Prometheus every minute, to gather information of many metrics from many containers. There are too match queries, so I must combine them. I know how I can ask Prometheus for one metric information on multiple containers: my_metric{container_name=~"frontend|backend|db"} , but I haven't found a way to ask Prometheus for multiple metric information in one query. I'm looking for the equivalent to 'union' in sql queries.
I found this solution here: {__name__=~"metricA|metricB|metricC",container_name=~"frontend|backend|db"}.
Prometheus
47,406,624
28