Elasticsearch check fails when used with Opensearch
**Note:** If you have a feature request, you should [contact support](https://docs.datadoghq.com/help/) so the request can be properly tracked.
**Output of the [info page](https://docs.datadoghq.com/agent/guide/agent-commands/#agent-status-and-information)**
```text
elastic (1.24.0)
----------------
Instance ID: elastic:e6126c6cf9b8eebe [ERROR]
Configuration Source: file:/etc/datadog-agent/conf.d/elastic.yaml
Total Runs: 121
Metric Samples: Last Run: 0, Total: 0
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 1, Total: 121
Average Execution Time : 13ms
Last Execution Date : 2021-09-10 06:00:16 UTC (1631253616000)
Last Successful Execution Date : Never
metadata:
version.major: 1
version.minor: 0
version.patch: 0
version.raw: 1.0.0
version.scheme: semver
Error: 400 Client Error: Bad Request for url: http://localhost:9200/_nodes/_local/stats?all=true
Traceback (most recent call last):
File "/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/base.py", line 897, in run
self.check(instance)
File "/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/elastic/elastic.py", line 79, in check
stats_data = self._get_data(stats_url)
File "/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/elastic/elastic.py", line 232, in _get_data
resp.raise_for_status()
File "/opt/datadog-agent/embedded/lib/python3.8/site-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http://localhost:9200/_nodes/_local/stats?all=true
```
**Additional environment details (Operating System, Cloud provider, etc):**
Fedora 33, running the agent inside a systemd-nspawn container. The container is running Opensearch 1.0.0, and the Datadog agent has been configured to monitor it.
**Steps to reproduce the issue:**
1. Install and run Opensearch
2. Configure datadog agent Elasticsearch integration to monitor it
3. Observe that the Elasticsearch check is failing
**Describe the results you received:**
The Elasticsearch check fails in the following way:
```text
2021-09-09 12:48:49 UTC | CORE | INFO | (pkg/collector/runner/runner.go:261 in work) | check:elastic | Running check
2021-09-09 12:48:49 UTC | CORE | DEBUG | (pkg/collector/python/check.go:81 in runCheck) | Running python check elastic elastic:878a21a7a9cbcd49
2021-09-09 12:48:49 UTC | CORE | DEBUG | (pkg/collector/python/datadog_agent.go:126 in LogMessage) | - | (connectionpool.py:226) | Starting new HTTP connection (1): localhost:9200
2021-09-09 12:48:49 UTC | CORE | DEBUG | (pkg/collector/python/datadog_agent.go:126 in LogMessage) | - | (connectionpool.py:433) | http://localhost:9200 "GET / HTTP/1.1" 200 349
2021-09-09 12:48:49 UTC | CORE | DEBUG | (pkg/collector/python/datadog_agent.go:126 in LogMessage) | elastic:878a21a7a9cbcd49 | (elastic.py:247) | request to url http://localhost:9200 returned: <Response [200]>
2021-09-09 12:48:49 UTC | CORE | DEBUG | (pkg/collector/python/datadog_agent.go:126 in LogMessage) | elastic:878a21a7a9cbcd49 | (elastic.py:147) | Elasticsearch version is [1, 0, 0]
2021-09-09 12:48:49 UTC | CORE | DEBUG | (pkg/collector/python/datadog_agent.go:126 in LogMessage) | - | (connectionpool.py:226) | Starting new HTTP connection (1): localhost:9200
2021-09-09 12:48:49 UTC | CORE | DEBUG | (pkg/collector/python/datadog_agent.go:126 in LogMessage) | - | (connectionpool.py:433) | http://localhost:9200 "GET /_nodes/_local/stats?all=true HTTP/1.1" 400 155
2021-09-09 12:48:49 UTC | CORE | ERROR | (pkg/collector/runner/runner.go:292 in work) | Error running check elastic: [{"message": "400 Client Error: Bad Request for url: http://localhost:9200/_nodes/_local/stats?all=true", "traceback": "Traceback (most recent call last):\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/base.py\", line 897, in run\n self.check(instance)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/elastic/elastic.py\", line 79, in check\n stats_data = self._get_data(stats_url)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/elastic/elastic.py\", line 232, in _get_data\n resp.raise_for_status()\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/requests/models.py\", line 940, in raise_for_status\n raise HTTPError(http_error_msg, response=self)\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http://localhost:9200/_nodes/_local/stats?all=true\n"}]
2021-09-09 12:48:49 UTC | CORE | INFO | (pkg/collector/runner/runner.go:327 in work) | check:elastic | Done running check
```
**Describe the results you expected:**
Opensearch 1.0.0 is fully compatible with Elasticsearch 7.10.2, so the Datadog agent should be able to monitor it in the same way as Elasticsearch.
**Additional information you deem important (e.g. issue happens only occasionally):**
The issue seems to be that, because Opensearch reports its version as `1.0.0`, the Elasticsearch check interprets it as Elasticsearch `1.0.0` and therefore appends `?all=true` to the URL when calling the `/_nodes/_local/stats` API to collect node stats. https://github.com/DataDog/integrations-core/blob/40d9ceb14c57c109e8f6371b1a4c677fa33e1669/elastic/datadog_checks/elastic/elastic.py#L215
The `?all=true` parameter has been deprecated since Elasticsearch `5.0.0` and results in an HTTP 400 error on newer versions of Elasticsearch. The Elasticsearch check works fine against Opensearch if `?all=true` is left out of the requests to `/_nodes/_local/stats`.
The version information returned by Opensearch contains a field called `distribution` with the value `opensearch`, which could be used to detect that the check is actually running against an Opensearch server. Example here:
```text
{'name': 'os-355e47a7-1', 'cluster_name': 'fc7a706a-9c51-40a1-9e1b-953d1546caa0', 'cluster_uuid': 'HrCNW5uhSZKtHxlJQz444g', 'version': {'distribution': 'opensearch', 'number': '1.0.0', 'build_type': 'unknown', 'build_hash': 'unknown', 'build_date': '2021-08-13T12:38:05.881264Z', 'build_snapshot': False, 'lucene_version': '8.8.2', 'minimum_wire_compatibility_version': '6.8.0', 'minimum_index_compatibility_version': '6.0.0-beta1'}, 'tagline': 'The OpenSearch Project: https://opensearch.org/'}
```
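For illustration, a minimal sketch of the detection described above, using the `distribution` field to skip the legacy query parameter; the helper name and base URL are assumptions, not the check's actual code:
```python
import requests

def build_node_stats_url(base_url="http://localhost:9200"):
    # Hypothetical helper: inspect the cluster root endpoint to decide whether
    # the deprecated ?all=true parameter is still appropriate.
    version_info = requests.get(base_url, timeout=5).json().get("version", {})
    if version_info.get("distribution") == "opensearch":
        # OpenSearch 1.x exposes the Elasticsearch 7.10 API, so use the
        # modern node-stats endpoint without the deprecated parameter.
        return base_url + "/_nodes/_local/stats"
    raw = version_info.get("number", "0.0.0").split("-")[0]
    version = [int(p) for p in raw.split(".")[:3]]
    # Only very old Elasticsearch releases (pre-5.0.0) still accept ?all=true.
    suffix = "?all=true" if version < [5, 0, 0] else ""
    return base_url + "/_nodes/_local/stats" + suffix
```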
| diff --git a/elastic/datadog_checks/elastic/elastic.py b/elastic/datadog_checks/elastic/elastic.py
--- a/elastic/datadog_checks/elastic/elastic.py
+++ b/elastic/datadog_checks/elastic/elastic.py
@@ -139,11 +139,17 @@ def _get_es_version(self):
try:
data = self._get_data(self._config.url, send_sc=False)
raw_version = data['version']['number']
+
self.set_metadata('version', raw_version)
# pre-release versions of elasticearch are suffixed with -rcX etc..
# peel that off so that the map below doesn't error out
raw_version = raw_version.split('-')[0]
version = [int(p) for p in raw_version.split('.')[0:3]]
+ if data['version'].get('distribution', '') == 'opensearch':
+ # Opensearch API is backwards compatible with ES 7.10.0
+ # https://opensearch.org/faq
+ self.log.debug('OpenSearch version %s detected', version)
+ version = [7, 10, 0]
except AuthenticationError:
raise
except Exception as e:
|
[nginx] NGINX Plus extended status API deprecated; Integration needs to use Plus API instead
NGINX Plus R13 was just released, and it replaces the Extended Status API with the NGINX Plus API.
[See Announcement](https://www.nginx.com/blog/nginx-plus-r13-released/)
A future version of NGINX Plus will drop support for the Extended Status API. They are promising at least 6 months before they do so.
The Datadog integration for NGINX Plus depends on the Extended Status API (http status module).
https://github.com/DataDog/integrations-core/tree/master/nginx
### Extended Status API Deprecated
The previous on-the-fly reconfiguration and extended status APIs are now deprecated. The deprecated APIs will continue to be shipped with NGINX Plus for a minimum of 6 months, alongside the new NGINX Plus API. After this period, the old APIs will be removed in a subsequent release of NGINX Plus.
### NGINX Plus API
NGINX Plus R13 includes a new API that is unified under one single endpoint. Previous versions of NGINX Plus included separate upstream_conf and extended status APIs. The new API combines the functionality of both, and also includes a new Key-Value Store module which brings a variety of use cases for on-the-fly reconfiguration (discussed in the next section).
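For illustration, a rough sketch of what polling the new unified API involves compared with the single extended status call; the base URL, API version, and endpoint list here are assumptions, and the actual integration change is in the diff below:
```python
import requests

# Assumed local NGINX Plus setup exposing the R13+ API at /api; adjust to taste.
PLUS_API_BASE = "http://localhost:8080/api"
PLUS_API_VERSION = "2"
ENDPOINTS = ["nginx", "connections", "http/requests", "http/server_zones", "http/upstreams"]

def fetch_plus_status():
    """Query several Plus API endpoints and merge them into one payload,
    roughly what the old extended status API returned in a single response."""
    merged = {}
    for endpoint in ENDPOINTS:
        url = "/".join([PLUS_API_BASE, PLUS_API_VERSION, endpoint])
        response = requests.get(url, timeout=5)
        response.raise_for_status()
        # Nest each payload under its path segments, e.g. "http/upstreams"
        # ends up as merged["http"]["upstreams"].
        node = merged
        parts = endpoint.split("/")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = response.json()
    return merged
```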
| diff --git a/nginx/datadog_checks/nginx/__init__.py b/nginx/datadog_checks/nginx/__init__.py
--- a/nginx/datadog_checks/nginx/__init__.py
+++ b/nginx/datadog_checks/nginx/__init__.py
@@ -2,6 +2,6 @@
Nginx = nginx.Nginx
-__version__ = "1.1.0"
+__version__ = "1.2.0"
__all__ = ['nginx']
diff --git a/nginx/datadog_checks/nginx/nginx.py b/nginx/datadog_checks/nginx/nginx.py
--- a/nginx/datadog_checks/nginx/nginx.py
+++ b/nginx/datadog_checks/nginx/nginx.py
@@ -5,6 +5,8 @@
# stdlib
import re
import urlparse
+import time
+from datetime import datetime
# 3rd party
import requests
@@ -23,6 +25,20 @@
'nginx.upstream.peers.responses.5xx'
]
+PLUS_API_ENDPOINTS = {
+ "nginx": [],
+ "http/requests": ["requests"],
+ "http/server_zones": ["server_zones"],
+ "http/upstreams": ["upstreams"],
+ "http/caches": ["caches"],
+ "processes": ["processes"],
+ "connections": ["connections"],
+ "ssl": ["ssl"],
+ "slabs": ["slabs"],
+ "stream/server_zones": ["stream", "server_zones"],
+ "stream/upstreams": ["stream", "upstreams"],
+}
+
class Nginx(AgentCheck):
"""Tracks basic nginx metrics via the status module
* number of connections
@@ -39,18 +55,32 @@ class Nginx(AgentCheck):
"""
def check(self, instance):
+
if 'nginx_status_url' not in instance:
raise Exception('NginX instance missing "nginx_status_url" value.')
tags = instance.get('tags', [])
- response, content_type = self._get_data(instance)
- self.log.debug(u"Nginx status `response`: {0}".format(response))
- self.log.debug(u"Nginx status `content_type`: {0}".format(content_type))
+ url, ssl_validation, auth, use_plus_api, plus_api_version = self._get_instance_params(instance)
- if content_type.startswith('application/json'):
- metrics = self.parse_json(response, tags)
+ if not use_plus_api:
+ response, content_type = self._get_data(url, ssl_validation, auth)
+ self.log.debug(u"Nginx status `response`: {0}".format(response))
+ self.log.debug(u"Nginx status `content_type`: {0}".format(content_type))
+
+ if content_type.startswith('application/json'):
+ metrics = self.parse_json(response, tags)
+ else:
+ metrics = self.parse_text(response, tags)
else:
- metrics = self.parse_text(response, tags)
+ metrics = []
+ self._perform_service_check("/".join([url, plus_api_version]), ssl_validation, auth)
+ # These are all the endpoints we have to call to get the same data as we did with the old API
+ # since we can't get everything in one place anymore.
+
+ for endpoint, nest in PLUS_API_ENDPOINTS.iteritems():
+ response = self._get_plus_api_data(url, ssl_validation, auth, plus_api_version, endpoint, nest)
+ self.log.debug(u"Nginx Plus API version {0} `response`: {1}".format(plus_api_version, response))
+ metrics.extend(self.parse_json(response, tags))
funcs = {
'gauge': self.gauge,
@@ -62,13 +92,13 @@ def check(self, instance):
name, value, tags, metric_type = row
if name in UPSTREAM_RESPONSE_CODES_SEND_AS_COUNT:
func_count = funcs['count']
- func_count(name+"_count", value, tags)
+ func_count(name + "_count", value, tags)
func = funcs[metric_type]
func(name, value, tags)
except Exception as e:
self.log.error(u'Could not submit metric: %s: %s' % (repr(row), str(e)))
- def _get_data(self, instance):
+ def _get_instance_params(self, instance):
url = instance.get('nginx_status_url')
ssl_validation = instance.get('ssl_validation', True)
@@ -76,6 +106,26 @@ def _get_data(self, instance):
if 'user' in instance and 'password' in instance:
auth = (instance['user'], instance['password'])
+ use_plus_api = instance.get("use_plus_api", False)
+ plus_api_version = str(instance.get("plus_api_version", 2))
+
+ return url, ssl_validation, auth, use_plus_api, plus_api_version
+
+ def _get_data(self, url, ssl_validation, auth):
+
+ r = self._perform_service_check(url, ssl_validation, auth)
+
+ body = r.content
+ resp_headers = r.headers
+ return body, resp_headers.get('content-type', 'text/plain')
+
+ def _perform_request(self, url, ssl_validation, auth):
+ r = requests.get(url, auth=auth, headers=headers(self.agentConfig),
+ verify=ssl_validation, timeout=self.default_integration_http_timeout)
+ r.raise_for_status()
+ return r
+
+ def _perform_service_check(self, url, ssl_validation, auth):
# Submit a service check for status page availability.
parsed_url = urlparse.urlparse(url)
nginx_host = parsed_url.hostname
@@ -84,9 +134,7 @@ def _get_data(self, instance):
service_check_tags = ['host:%s' % nginx_host, 'port:%s' % nginx_port]
try:
self.log.debug(u"Querying URL: {0}".format(url))
- r = requests.get(url, auth=auth, headers=headers(self.agentConfig),
- verify=ssl_validation, timeout=self.default_integration_http_timeout)
- r.raise_for_status()
+ r = self._perform_request(url, ssl_validation, auth)
except Exception:
self.service_check(service_check_name, AgentCheck.CRITICAL,
tags=service_check_tags)
@@ -94,10 +142,31 @@ def _get_data(self, instance):
else:
self.service_check(service_check_name, AgentCheck.OK,
tags=service_check_tags)
+ return r
- body = r.content
- resp_headers = r.headers
- return body, resp_headers.get('content-type', 'text/plain')
+ def _nest_payload(self, keys, payload):
+ # Nest a payload in a dict under the keys contained in `keys`
+ if len(keys) == 0:
+ return payload
+ else:
+ return {
+ keys[0]: self._nest_payload(keys[1:], payload)
+ }
+
+ def _get_plus_api_data(self, api_url, ssl_validation, auth, plus_api_version, endpoint, nest):
+ # Get the data from the Plus API and reconstruct a payload similar to what the old API returned
+ # so we can treat it the same way
+
+ url = "/".join([api_url, plus_api_version, endpoint])
+ payload = {}
+ try:
+ self.log.debug(u"Querying URL: {0}".format(url))
+ r = self._perform_request(url, ssl_validation, auth)
+ payload = self._nest_payload(nest, r.json())
+ except Exception as e:
+ self.log.exception("Error querying %s metrics at %s: %s", endpoint, url, e)
+
+ return payload
@classmethod
def parse_text(cls, raw, tags):
@@ -134,7 +203,10 @@ def parse_text(cls, raw, tags):
def parse_json(cls, raw, tags=None):
if tags is None:
tags = []
- parsed = json.loads(raw)
+ if isinstance(raw, dict):
+ parsed = raw
+ else:
+ parsed = json.loads(raw)
metric_base = 'nginx'
output = []
all_keys = parsed.keys()
@@ -188,7 +260,19 @@ def _flatten_json(cls, metric_base, val, tags):
val = 0
output.append((metric_base, val, tags, 'gauge'))
- elif isinstance(val, (int, float)):
+ elif isinstance(val, (int, float, long)):
output.append((metric_base, val, tags, 'gauge'))
+ elif isinstance(val, (unicode, str)):
+ # In the new Plus API, timestamps are now formatted strings, some include microseconds, some don't...
+ try:
+ timestamp = time.mktime(datetime.strptime(val, "%Y-%m-%dT%H:%M:%S.%fZ").timetuple())
+ output.append((metric_base, timestamp, tags, 'gauge'))
+ except ValueError:
+ try:
+ timestamp = time.mktime(datetime.strptime(val, "%Y-%m-%dT%H:%M:%SZ").timetuple())
+ output.append((metric_base, timestamp, tags, 'gauge'))
+ except ValueError:
+ pass
+
return output
|
Support for "headers" configuration in haproxy.yaml?
Hi all, I'm interested in passing a custom header to my HAProxy integration in `haproxy.yaml`, like this:
```
instances:
- name: haproxy
url: http://localhost:3000
headers:
service-route: haproxy-stats
```
However, that [doesn't appear to be supported](https://github.com/DataDog/integrations-core/blob/master/haproxy/check.py#L160). Any chance something like this could be added, or ideas for a workaround?
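A minimal sketch of the behaviour being requested, assuming a `headers` mapping read from the instance config; the header names and defaults here are illustrative, and the actual patch is in the diff below:
```python
import requests

DEFAULT_HEADERS = {"User-Agent": "Datadog Agent"}  # stand-in for headers(agentConfig)

def fetch_url_data(url, username=None, password=None, verify=True, custom_headers=None):
    """Fetch the HAProxy stats page, merging user-supplied headers from the
    instance config with the agent defaults (agent defaults win on conflict,
    matching the patch below)."""
    merged = dict(custom_headers or {})
    merged.update(DEFAULT_HEADERS)
    auth = (username, password) if username else None
    response = requests.get(url, auth=auth, headers=merged, verify=verify, timeout=10)
    response.raise_for_status()
    return response.content.splitlines()

# e.g. fetch_url_data("http://localhost:3000", custom_headers={"service-route": "haproxy-stats"})
```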
| diff --git a/haproxy/datadog_checks/haproxy/__init__.py b/haproxy/datadog_checks/haproxy/__init__.py
--- a/haproxy/datadog_checks/haproxy/__init__.py
+++ b/haproxy/datadog_checks/haproxy/__init__.py
@@ -2,6 +2,6 @@
HAProxy = haproxy.HAProxy
-__version__ = "1.0.2"
+__version__ = "1.2.0"
__all__ = ['haproxy']
diff --git a/haproxy/datadog_checks/haproxy/haproxy.py b/haproxy/datadog_checks/haproxy/haproxy.py
--- a/haproxy/datadog_checks/haproxy/haproxy.py
+++ b/haproxy/datadog_checks/haproxy/haproxy.py
@@ -65,7 +65,10 @@ class HAProxy(AgentCheck):
def __init__(self, name, init_config, agentConfig, instances=None):
AgentCheck.__init__(self, name, init_config, agentConfig, instances)
- # Host status needs to persist across all checks
+ # Host status needs to persist across all checks.
+ # We'll create keys when they are referenced. See:
+ # https://en.wikipedia.org/wiki/Autovivification
+ # https://gist.github.com/hrldcpr/2012250
self.host_status = defaultdict(lambda: defaultdict(lambda: None))
METRICS = {
@@ -112,8 +115,9 @@ def check(self, instance):
username = instance.get('username')
password = instance.get('password')
verify = not _is_affirmative(instance.get('disable_ssl_validation', False))
+ custom_headers = instance.get('headers', {})
- data = self._fetch_url_data(url, username, password, verify)
+ data = self._fetch_url_data(url, username, password, verify, custom_headers)
collect_aggregates_only = _is_affirmative(
instance.get('collect_aggregates_only', True)
@@ -159,16 +163,17 @@ def check(self, instance):
tags_regex=tags_regex,
)
- def _fetch_url_data(self, url, username, password, verify):
+ def _fetch_url_data(self, url, username, password, verify, custom_headers):
''' Hit a given http url and return the stats lines '''
# Try to fetch data from the stats URL
auth = (username, password)
url = "%s%s" % (url, STATS_URL)
+ custom_headers.update(headers(self.agentConfig))
self.log.debug("Fetching haproxy stats from url: %s" % url)
- response = requests.get(url, auth=auth, headers=headers(self.agentConfig), verify=verify, timeout=self.default_integration_http_timeout)
+ response = requests.get(url, auth=auth, headers=custom_headers, verify=verify, timeout=self.default_integration_http_timeout)
response.raise_for_status()
return response.content.splitlines()
|
process monitor throws warnings on postgresql: Warning: Process 123 disappeared while scanning
process monitor throws warnings on postgresql: Warning: Process 123 disappeared while scanning
```
process
-------
  - instance #0 [OK]
  - instance #1 [WARNING]
      Warning: Process 16039 disappeared while scanning
      Warning: Process 16177 disappeared while scanning
      Warning: Process 16178 disappeared while scanning
      Warning: Process 16193 disappeared while scanning
      Warning: Process 16194 disappeared while scanning
      Warning: Process 16198 disappeared while scanning
      Warning: Process 15830 disappeared while scanning
      Warning: Process 15844 disappeared while scanning
  - instance #2 [OK]
  - instance #3 [OK]
  - instance #4 [OK]
  - instance #5 [OK]
  - instance #6 [OK]
  - instance #7 [OK]
  - instance #8 [OK]
  - instance #9 [OK]
  - Collected 43 metrics, 0 events & 11 service checks
```
It is perfectly normal for postgresql to start and stop processes, and I see no reason why datadog would have to complain about that.
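For illustration, a small sketch of the requested behaviour, assuming a psutil-based scan loop like the check's: a process exiting mid-scan is logged at debug level rather than surfaced as a warning.
```python
import logging

import psutil

log = logging.getLogger(__name__)

def sample_rss(pids):
    """Collect RSS for the given PIDs, quietly skipping processes that exit
    between listing and inspection (routine churn for e.g. postgres backends)."""
    rss = {}
    for pid in pids:
        try:
            rss[pid] = psutil.Process(pid).memory_info().rss
        except psutil.NoSuchProcess:
            # Expected churn, not something to warn about.
            log.debug("Process %s disappeared while scanning", pid)
    return rss
```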
| diff --git a/process/datadog_checks/process/process.py b/process/datadog_checks/process/process.py
--- a/process/datadog_checks/process/process.py
+++ b/process/datadog_checks/process/process.py
@@ -260,7 +260,7 @@ def psutil_wrapper(self, process, method, accessors=None, *args, **kwargs):
except psutil.AccessDenied:
self.log.debug("psutil was denied access for method %s", method)
except psutil.NoSuchProcess:
- self.warning("Process %s disappeared while scanning", process.pid)
+ self.log.debug("Process %s disappeared while scanning", process.pid)
return result
@@ -285,7 +285,7 @@ def get_process_state(self, name, pids):
self.log.debug('New process in cache: %s', pid)
# Skip processes dead in the meantime
except psutil.NoSuchProcess:
- self.warning('Process %s disappeared while scanning', pid)
+ self.log.debug('Process %s disappeared while scanning', pid)
# reset the process caches now, something changed
self.last_pid_cache_ts[name] = 0
self.process_list_cache.reset()
|
[sqlserver] Add 'recovery model' tag to database metrics
Add the `recovery_model_desc` column from `sys.databases` to `SqlBaseStats` as a tag. This would be a useful addition to be able to filter on for alerts, and since the data is already being queried, it should be a minor update.
https://github.com/DataDog/integrations-core/blob/01b9aaa746ca94f0b233b41b357cc61b58f3f7ae/sqlserver/datadog_checks/sqlserver/metrics.py#L687-L715
Happy to make the PR if the suggestion is accepted.
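A minimal sketch of what the suggestion amounts to, assuming the check's query already selects `recovery_model_desc` alongside the other `sys.databases` columns (the column names are real; the helper is illustrative):
```python
QUERY = "SELECT name, state_desc, recovery_model_desc FROM sys.databases"

def build_database_tags(row, columns):
    """Build per-database tags from a sys.databases row, adding the recovery
    model (SIMPLE, FULL, or BULK_LOGGED) so alerts can filter on it."""
    return [
        "database:{}".format(row[columns.index("name")]),
        "database_state_desc:{}".format(row[columns.index("state_desc")]),
        "database_recovery_model_desc:{}".format(row[columns.index("recovery_model_desc")]),
    ]

# e.g. build_database_tags(("master", "ONLINE", "SIMPLE"),
#                          ["name", "state_desc", "recovery_model_desc"])
```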
| diff --git a/sqlserver/datadog_checks/sqlserver/metrics.py b/sqlserver/datadog_checks/sqlserver/metrics.py
--- a/sqlserver/datadog_checks/sqlserver/metrics.py
+++ b/sqlserver/datadog_checks/sqlserver/metrics.py
@@ -697,6 +697,7 @@ def fetch_all_values(cls, cursor, counters_list, logger, databases=None):
def fetch_metric(self, rows, columns):
database_name = columns.index("name")
db_state_desc_index = columns.index("state_desc")
+ db_recovery_model_desc_index = columns.index("recovery_model_desc")
value_column_index = columns.index(self.column)
for row in rows:
@@ -705,9 +706,11 @@ def fetch_metric(self, rows, columns):
column_val = row[value_column_index]
db_state_desc = row[db_state_desc_index]
+ db_recovery_model_desc = row[db_recovery_model_desc_index]
metric_tags = [
'database:{}'.format(str(self.instance)),
'database_state_desc:{}'.format(str(db_state_desc)),
+ 'database_recovery_model_desc:{}'.format(str(db_recovery_model_desc)),
]
metric_tags.extend(self.tags)
metric_name = '{}'.format(self.datadog_name)
|
[apache] fix metric collection
### What does this PR do?
This collects the correct data for `apache.net.bytes_per_s` and `apache.net.request_per_s`.
https://www.datadoghq.com/blog/collect-apache-performance-metrics
https://wiki.opennms.org/wiki/Monitoring_Apache_with_the_HTTP_collector
https://httpd.apache.org/docs/current/mod/mod_status.html
### Motivation
Customer encountered bug
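For context, a rough sketch of the intended behaviour: `Total kBytes` and `Total Accesses` in mod_status's `?auto` output are monotonically increasing counters, so they should be submitted as rates rather than gauges. The parsing helper below is illustrative, not the check itself.
```python
RATES = {
    "Total kBytes": "apache.net.bytes_per_s",
    "Total Accesses": "apache.net.request_per_s",
}

def parse_auto_status(text):
    """Turn the 'Key: value' lines from /server-status?auto into a dict."""
    values = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            values[key.strip()] = value.strip()
    return values

sample = "Total Accesses: 104\nTotal kBytes: 211\nBusyWorkers: 1\n"
for key, value in parse_auto_status(sample).items():
    if key in RATES:
        # Submitting as a rate lets the backend derive per-second values
        # from consecutive counter samples.
        print(RATES[key], float(value))
```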
| diff --git a/apache/datadog_checks/apache/__init__.py b/apache/datadog_checks/apache/__init__.py
--- a/apache/datadog_checks/apache/__init__.py
+++ b/apache/datadog_checks/apache/__init__.py
@@ -2,6 +2,6 @@
Apache = apache.Apache
-__version__ = "1.1.1"
+__version__ = "1.1.2"
__all__ = ['apache']
diff --git a/apache/datadog_checks/apache/apache.py b/apache/datadog_checks/apache/apache.py
--- a/apache/datadog_checks/apache/apache.py
+++ b/apache/datadog_checks/apache/apache.py
@@ -29,9 +29,12 @@ class Apache(AgentCheck):
'ConnsTotal': 'apache.conns_total',
'ConnsAsyncWriting': 'apache.conns_async_writing',
'ConnsAsyncKeepAlive': 'apache.conns_async_keep_alive',
- 'ConnsAsyncClosing' : 'apache.conns_async_closing',
- 'BytesPerSec': 'apache.net.bytes_per_s',
- 'ReqPerSec': 'apache.net.request_per_s'
+ 'ConnsAsyncClosing' : 'apache.conns_async_closing'
+ }
+
+ RATES = {
+ 'Total kBytes': 'apache.net.bytes_per_s',
+ 'Total Accesses': 'apache.net.request_per_s'
}
def __init__(self, name, init_config, agentConfig, instances=None):
@@ -99,6 +102,12 @@ def check(self, instance):
metric_name = self.GAUGES[metric]
self.gauge(metric_name, value, tags=tags)
+ # Send metric as a rate, if applicable
+ if metric in self.RATES:
+ metric_count += 1
+ metric_name = self.RATES[metric]
+ self.rate(metric_name, value, tags=tags)
+
if metric_count == 0:
if self.assumed_url.get(instance['apache_status_url'], None) is None and url[-5:] != '?auto':
self.assumed_url[instance['apache_status_url']] = '%s?auto' % url
|
Consul NodeName (NodeId) in Service Checks
**Note:** If you have a feature request, you should [contact support](https://docs.datadoghq.com/help/) so the request can be properly tracked.
**Output of the [info page](https://docs.datadoghq.com/agent/guide/agent-commands/#agent-status-and-information)**
```text
Getting the status from the agent.
===============
Agent (v7.37.1)
===============
Status date: 2022-07-27 15:16:34.997 UTC (1658934994997)
Agent start: 2022-07-22 18:34:00.519 UTC (1658514840519)
Pid: 18465
Go Version: go1.17.11
Python Version: 3.8.11
Build arch: amd64
Agent flavor: agent
Check Runners: 4
Log Level: info
Paths
=====
Config File: /etc/datadog-agent/datadog.yaml
conf.d: /etc/datadog-agent/conf.d
checks.d: /etc/datadog-agent/checks.d
Clocks
======
NTP offset: 385µs
System time: 2022-07-27 15:16:34.997 UTC (1658934994997)
Host Info
=========
bootTime: 2021-09-23 20:11:50 UTC (1632427910000)
hostId: <redacted>
kernelArch: x86_64
kernelVersion: 4.9.0-16-amd64
os: linux
platform: debian
platformFamily: debian
platformVersion: 9.13
procs: 121
uptime: 7246h22m12s
Hostnames
=========
<redacted>
Metadata
========
agent_version: 7.37.1
cloud_provider: AWS
config_apm_dd_url:
config_dd_url: https://app.datadoghq.com
config_logs_dd_url:
config_logs_socks5_proxy_address:
config_no_proxy: []
config_process_dd_url:
config_proxy_http:
config_proxy_https:
config_site:
feature_apm_enabled: true
feature_cspm_enabled: false
feature_cws_enabled: false
feature_logs_enabled: false
feature_networks_enabled: false
feature_networks_http_enabled: false
feature_networks_https_enabled: false
feature_otlp_enabled: false
feature_process_enabled: false
feature_processes_container_enabled: true
flavor: agent
hostname_source: os
install_method_installer_version: deb_package
install_method_tool: dpkg
install_method_tool_version: dpkg-1.18.26
=========
Collector
=========
Running Checks
==============
consul (2.1.0)
--------------
Instance ID: consul:default:b77b05cc5a5351d9 [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/consul.d/conf.yaml
Total Runs: 28,011
Metric Samples: Last Run: 1, Total: 28,011
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 2, Total: 57,359
Average Execution Time : 4ms
Last Execution Date : 2022-07-27 15:16:32 UTC (1658934992000)
Last Successful Execution Date : 2022-07-27 15:16:32 UTC (1658934992000)
metadata:
version.major: 1
version.minor: 8
version.patch: 4
version.raw: 1.8.4
version.scheme: semver
<redacted>
=========
Forwarder
=========
Transactions
============
Cluster: 0
ClusterRole: 0
ClusterRoleBinding: 0
CronJob: 0
DaemonSet: 0
Deployment: 0
Dropped: 0
HighPriorityQueueFull: 0
Ingress: 0
Job: 0
Node: 0
PersistentVolume: 0
PersistentVolumeClaim: 0
Pod: 0
ReplicaSet: 0
Requeued: 0
Retried: 0
RetryQueueSize: 0
Role: 0
RoleBinding: 0
Service: 0
ServiceAccount: 0
StatefulSet: 0
Transaction Successes
=====================
Total number: 59055
Successes By Endpoint:
check_run_v1: 28,010
intake: 2,335
metadata_v1: 700
series_v1: 28,010
On-disk storage
===============
On-disk storage is disabled. Configure `forwarder_storage_max_size_in_bytes` to enable it.
API Keys status
===============
API key ending with 91d2c: API Key valid
==========
Endpoints
==========
https://app.datadoghq.com - API Key ending with:
- 91d2c
==========
Logs Agent
==========
Logs Agent is not running
=============
Process Agent
=============
Version: 7.37.1
Status date: 2022-07-27 15:16:44.135 UTC (1658935004135)
Process Agent Start: 2022-07-22 18:34:00.573 UTC (1658514840573)
Pid: 18466
Go Version: go1.17.11
Build arch: amd64
Log Level: info
Enabled Checks: [process_discovery]
Allocated Memory: 13,024,816 bytes
Hostname: <redacted> # consul-server-host (leader)
=================
Process Endpoints
=================
https://process.datadoghq.com - API Key ending with:
- 91d2c
=========
Collector
=========
Last collection time: 2022-07-27 14:34:01
Docker socket:
Number of processes: 0
Number of containers: 0
Process Queue length: 0
RTProcess Queue length: 0
Pod Queue length: 0
Process Bytes enqueued: 0
RTProcess Bytes enqueued: 0
Pod Bytes enqueued: 0
Drop Check Payloads: []
=========
APM Agent
=========
<redacted>
=========
Aggregator
=========
Checks Metric Sample: 6,678,903
Dogstatsd Metric Sample: 4,565,629
Event: 1
Events Flushed: 1
Number Of Flushes: 28,010
Series Flushed: 8,834,639
Service Check: 310,384
Service Checks Flushed: 338,390
=========
DogStatsD
=========
Event Packets: 0
Event Parse Errors: 0
Metric Packets: 4,565,628
Metric Parse Errors: 0
Service Check Packets: 0
Service Check Parse Errors: 0
Udp Bytes: 406,735,605
Udp Packet Reading Errors: 0
Udp Packets: 2,650,131
Uds Bytes: 0
Uds Origin Detection Errors: 0
Uds Packet Reading Errors: 0
Uds Packets: 0
Unterminated Metric Errors: 0
====
OTLP
====
Status: Not enabled
Collector status: Not running
```
**Additional environment details (Operating System, Cloud provider, etc):**
**Steps to reproduce the issue:**
1. Install the `consul.d` check/integration
2. ????
3. Non-profit
**Describe the results you received:**
The output of the Consul Service Checks for Consul Service Healthchecks does _not_ include a `node`, `node_name`, or `node_id` tag (or any node information) on the Datadog Service Checks.
**Describe the results you expected:**
A tag or information should exist for `node`, `node_name`, or `node_id` on the Datadog Service Check (since the information is available and retrieved from the Consul API).
**Additional information you deem important (e.g. issue happens only occasionally):**
The problem is as follows: Consul Service Checks have information such as Ok, Warning, Critical for the Service, Check (id), and Node (which host the check is failing for). However, the Datadog Consul integration does not seem to gather that Node Name/Id bit of information. So, when a Datadog Consul Service Check is in the Critical state (like `consul.check`) the information provided only gives details about the Consul Service and Check Name/Id... which is not particularly useful because what happens when you have a Consul Service with 50 Nodes? Which Node has the check failing?
The tag should be added here: https://github.com/DataDog/integrations-core/blob/f8c50c779dc836e9419326a5d2d64524f3216821/consul/datadog_checks/consul/consul.py#L367-L375
Specifically on/after line 373:
```python
if check["Node"]:
tags.append("consul_node:{}".format(check["Node"]))
sc[sc_id] = {'status': status, 'tags': tags}
```
The data is available and returned in the Consul API endpoint `/v1/health/state/any` on line 356: https://github.com/DataDog/integrations-core/blob/f8c50c779dc836e9419326a5d2d64524f3216821/consul/datadog_checks/consul/consul.py#L356
See: https://www.consul.io/api-docs/health#sample-response-3
Example Response:
```json
[
{
"Node": "foobar",
"CheckID": "serfHealth",
"Name": "Serf Health Status",
"Status": "passing",
"Notes": "",
"Output": "",
"ServiceID": "",
"ServiceName": "",
"ServiceTags": [],
"Namespace": "default"
},
[...]
]
```
| diff --git a/consul/datadog_checks/consul/consul.py b/consul/datadog_checks/consul/consul.py
--- a/consul/datadog_checks/consul/consul.py
+++ b/consul/datadog_checks/consul/consul.py
@@ -372,6 +372,8 @@ def check(self, _):
tags.append('service:{}'.format(check['ServiceName']))
if check["ServiceID"]:
tags.append("consul_service_id:{}".format(check["ServiceID"]))
+ if check["Node"]:
+ tags.append("consul_node:{}".format(check["Node"]))
sc[sc_id] = {'status': status, 'tags': tags}
elif STATUS_SEVERITY[status] > STATUS_SEVERITY[sc[sc_id]['status']]:
|
[Active Directory] Entry point is using datadog_checks.ntp console script
In the Active Directory integration's `setup.py`, the entry point (for running the check manually without an agent) uses the `datadog_checks.ntp` console script.
```
# The entrypoint to run the check manually without an agent
entry_points={
'console_scripts': [
'ntp=datadog_checks.ntp:main',
],
},
```
See: https://github.com/DataDog/integrations-core/blob/master/active_directory/setup.py#L124
| diff --git a/active_directory/setup.py b/active_directory/setup.py
--- a/active_directory/setup.py
+++ b/active_directory/setup.py
@@ -116,12 +116,5 @@ def find_version(*file_paths):
# Extra files to ship with the wheel package
package_data={b'datadog_checks.active_directory': ['conf.yaml.example']},
- include_package_data=True,
-
- # The entrypoint to run the check manually without an agent
- entry_points={
- 'console_scripts': [
- 'ntp=datadog_checks.ntp:main',
- ],
- },
+ include_package_data=True
)
|
[oracle] Integration fails when Oracle DB has offline tablespace
**Output of the [info page](https://docs.datadoghq.com/agent/faq/agent-commands/#agent-status-and-information)**
```
====================
Collector (v 5.22.1)
====================
Status date: 2018-04-14 10:05:29 (16s ago)
Pid: 26119
Platform: Linux-2.6.32-358.el6.x86_64-x86_64-with-redhat-6.4-Santiago
Python Version: 2.7.14, 64bit
Logs: <stderr>, /var/log/datadog/collector.log, syslog:/dev/log
Clocks
======
NTP offset: 0.0025 s
System UTC time: 2018-04-14 17:05:46.178245
Paths
=====
conf.d: /etc/dd-agent/conf.d
checks.d: Not found
Hostnames
=========
agent-hostname: cnrac1.delphix.com
hostname: cnrac1.delphix.com
socket-fqdn: cnrac1.delphix.com
Checks
======
oracle (1.0.0)
--------------
- instance #0 [ERROR]: 'float() argument must be a string or a number'
- Collected 44 metrics, 0 events & 1 service check
disk (1.1.0)
------------
- instance #0 [OK]
- Collected 58 metrics, 0 events & 0 service checks
network (1.4.0)
---------------
- instance #0 [OK]
- Collected 24 metrics, 0 events & 0 service checks
ntp (1.0.0)
-----------
- Collected 0 metrics, 0 events & 0 service checks
Emitters
========
- http_emitter [OK]
====================
Dogstatsd (v 5.22.1)
====================
Status date: 2018-04-14 10:05:43 (3s ago)
Pid: 26109
Platform: Linux-2.6.32-358.el6.x86_64-x86_64-with-redhat-6.4-Santiago
Python Version: 2.7.14, 64bit
Logs: <stderr>, /var/log/datadog/dogstatsd.log, syslog:/dev/log
Flush count: 54
Packet Count: 0
Packets per second: 0.0
Metric count: 1
Event count: 0
Service check count: 0
====================
Forwarder (v 5.22.1)
====================
Status date: 2018-04-14 10:05:43 (3s ago)
Pid: 26108
Platform: Linux-2.6.32-358.el6.x86_64-x86_64-with-redhat-6.4-Santiago
Python Version: 2.7.14, 64bit
Logs: <stderr>, /var/log/datadog/forwarder.log, syslog:/dev/log
Queue Size: 463 bytes
Queue Length: 1
Flush Count: 172
Transactions received: 137
Transactions flushed: 136
Transactions rejected: 0
API Key Status: API Key is valid
======================
Trace Agent (v 5.22.1)
======================
Pid: 26107
Uptime: 545 seconds
Mem alloc: 2889344 bytes
Hostname: cnrac1.delphix.com
Receiver: localhost:8126
API Endpoint: https://trace.agent.datadoghq.com
--- Receiver stats (1 min) ---
--- Writer stats (1 min) ---
Traces: 0 payloads, 0 traces, 0 bytes
Stats: 0 payloads, 0 stats buckets, 0 bytes
Services: 0 payloads, 0 services, 0 bytes
```
**Additional environment details (Operating System, Cloud provider, etc):**
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit
**Steps to reproduce the issue:**
1. Configure a Datadog check for an Oracle database and confirm that it is working
2. In the Oracle database, add a new tablespace+datafile and take the datafile offline, e.g. with:
```
SQL> create tablespace offline_tablespace datafile size 10m autoextend off;
Tablespace created.
SQL> alter tablespace OFFLINE_TABLESPACE offline;
Tablespace altered.
```
3. Note that the query used in `_get_tablespace_metrics` now returns `NULL` for the bytes and maximum bytes of this tablespace, as it is offline and Oracle cannot access the requested information:
```
SQL> select tablespace_name, sum(bytes), sum(maxbytes) from dba_data_files where tablespace_name = 'OFFLINE_TABLESPACE' group by tablespace_name;
TABLESPACE_NAME SUM(BYTES) SUM(MAXBYTES)
------------------------------ ---------- -------------
OFFLINE_TABLESPACE
```
**Describe the results you received:**
Datadog oracle check starts failing with `float() argument must be a string or a number`
**Describe the results you expected:**
Datadog oracle check ignores the offline datafiles.
**Additional information you deem important (e.g. issue happens only occasionally):**
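For illustration, a minimal sketch of the guard being asked for, assuming a row from the `dba_data_files` query above where `SUM(BYTES)`/`SUM(MAXBYTES)` can be `NULL` for offline datafiles; the actual patch is in the diff below.
```python
def tablespace_usage(row):
    """Compute used/size/in_use for one tablespace row, treating NULL byte
    counts (offline datafiles) as zero instead of crashing in float()."""
    tablespace, used_bytes, max_bytes = row[0], row[1], row[2]
    used = float(used_bytes) if used_bytes is not None else 0.0
    size = float(max_bytes) if max_bytes is not None else 0.0
    if used >= size:
        in_use = 100.0
    elif used == 0 or size == 0:
        in_use = 0.0
    else:
        in_use = used / size * 100.0
    return tablespace, used, size, in_use
```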
| diff --git a/oracle/datadog_checks/oracle/__init__.py b/oracle/datadog_checks/oracle/__init__.py
--- a/oracle/datadog_checks/oracle/__init__.py
+++ b/oracle/datadog_checks/oracle/__init__.py
@@ -2,6 +2,6 @@
Oracle = oracle.Oracle
-__version__ = "1.1.0"
+__version__ = "1.2.0"
__all__ = ['oracle']
diff --git a/oracle/datadog_checks/oracle/oracle.py b/oracle/datadog_checks/oracle/oracle.py
--- a/oracle/datadog_checks/oracle/oracle.py
+++ b/oracle/datadog_checks/oracle/oracle.py
@@ -115,8 +115,17 @@ def _get_tablespace_metrics(self, con, tags):
cur.execute(query)
for row in cur:
tablespace_tag = 'tablespace:%s' % row[0]
- used = float(row[1])
- size = float(row[2])
+ if row[1] is None:
+ # mark tablespace as offline if sum(BYTES) is null
+ offline = True
+ used = 0
+ else:
+ offline = False
+ used = float(row[1])
+ if row[2] is None:
+ size = 0
+ else:
+ size = float(row[2])
if (used >= size):
in_use = 100
elif (used == 0) or (size == 0):
@@ -127,3 +136,4 @@ def _get_tablespace_metrics(self, con, tags):
self.gauge('oracle.tablespace.used', used, tags=tags + [tablespace_tag])
self.gauge('oracle.tablespace.size', size, tags=tags + [tablespace_tag])
self.gauge('oracle.tablespace.in_use', in_use, tags=tags + [tablespace_tag])
+ self.gauge('oracle.tablespace.offline', offline, tags=tags + [tablespace_tag])
|
etcd leader check uses unsupported endpoint
The etcd integration currently uses the `/v3alpha/maintenance/status` endpoint when determining whether the instance it's checking is the leader:
https://github.com/DataDog/integrations-core/blame/7ab36d84f116ccbdef71ed12b84e81462ca38113/etcd/datadog_checks/etcd/etcd.py#L127
However, this endpoint has been graduated to `/v3/...`, so trying to access the `/v3alpha/...` endpoint returns a 404 on etcd 3.4. This results in the leader tag not getting added.
I think this check should be updated to use the `/v3/...` endpoint, since that is available on 3.4.0 (https://github.com/hexfusion/etcd/blob/v3.4.0/Documentation/dev-guide/apispec/swagger/rpc.swagger.json#L1045) and above.
Per etcd's docs on supported versions (https://etcd.io/docs/v3.5/op-guide/versioning/), 3.5 is the current version, so 3.4 is still maintained, but anything below is no longer maintained. Also, the oldest maintained version of kubernetes (1.25) is using etcd 3.5 (https://github.com/kubernetes/kubernetes/pull/110033).
I'd be happy to make a PR for this change!
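For reference, a hedged sketch of a leader check against the v3 gRPC gateway, assuming a local, unauthenticated etcd 3.4+ (the gateway exposes gRPC methods as POST endpoints that take a JSON body):
```python
import requests

def is_leader(base_url="http://localhost:2379"):
    """Ask the etcd v3 maintenance API for status and compare the reported
    leader ID with this member's own ID."""
    response = requests.post(base_url + "/v3/maintenance/status", data="{}", timeout=5)
    response.raise_for_status()
    status = response.json()
    leader = status.get("leader")
    member = status.get("header", {}).get("member_id")
    return leader is not None and member is not None and leader == member
```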
| diff --git a/etcd/datadog_checks/etcd/etcd.py b/etcd/datadog_checks/etcd/etcd.py
--- a/etcd/datadog_checks/etcd/etcd.py
+++ b/etcd/datadog_checks/etcd/etcd.py
@@ -123,9 +123,7 @@ def access_api(self, scraper_config, path, data='{}'):
return response
def is_leader(self, scraper_config):
- # Modify endpoint as etcd stabilizes
- # https://github.com/etcd-io/etcd/blob/master/Documentation/dev-guide/api_grpc_gateway.md#notes
- response = self.access_api(scraper_config, '/v3alpha/maintenance/status')
+ response = self.access_api(scraper_config, '/v3/maintenance/status')
leader = response.get('leader')
member = response.get('header', {}).get('member_id')
|
Correctly compute the `templates.count` metric
### What does this PR do?
<!-- A brief description of the change being made with this pull request. -->
Correctly compute the templates.count metric
### Motivation
<!-- What inspired you to submit this pull request? -->
- QA for https://github.com/DataDog/integrations-core/pull/14569 which had no tests
- The previous implementation was dropping elements from the same list it was iterating over, so the indexes being removed were wrong because the list kept shifting under the loop; it returned 12 instead of 6 (see the sketch below).
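A toy reproduction of the bug (illustrative data, not the check's actual template list): removing items from a list while iterating over it shifts the remaining indexes, so elements get skipped and the filtered count comes out too high.
```python
templates = ["t1", ".hidden1", ".hidden2", "t2", ".hidden3", "t3"]

buggy = list(templates)
for name in buggy:            # mutating the list we are iterating over
    if name.startswith("."):
        buggy.remove(name)    # shifts the list; the next element is skipped

correct = [name for name in templates if not name.startswith(".")]

print(len(buggy), len(correct))  # 4 3 -- the buggy filter keeps an extra entry
```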
### Additional Notes
<!-- Anything else we should know when reviewing? -->
### Review checklist (to be filled by reviewers)
- [ ] Feature or bugfix MUST have appropriate tests (unit, integration, e2e)
- [ ] PR title must be written as a CHANGELOG entry [(see why)](https://github.com/DataDog/integrations-core/blob/master/CONTRIBUTING.md#pull-request-title)
- [ ] Files changes must correspond to the primary purpose of the PR as described in the title (small unrelated changes should have their own PR)
- [ ] PR must have `changelog/` and `integration/` labels attached
- [ ] If the PR doesn't need to be tested during QA, please add a `qa/skip-qa` label.
| diff --git a/elastic/datadog_checks/elastic/__about__.py b/elastic/datadog_checks/elastic/__about__.py
--- a/elastic/datadog_checks/elastic/__about__.py
+++ b/elastic/datadog_checks/elastic/__about__.py
@@ -2,4 +2,4 @@
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
-__version__ = "5.4.0"
+__version__ = "5.4.1"
|
[btrfs] Incorrect Metrics and misleading default dashboard
**Output of the info page**
```
❯ sudo /etc/init.d/datadog-agent info
====================
Collector (v 5.17.2)
====================
Status date: 2017-09-24 19:12:35 (13s ago)
Pid: 2931
Platform: Linux-4.9.0-3-amd64-x86_64-with-debian-9.1
Python Version: 2.7.13, 64bit
Logs: <stderr>, /var/log/datadog/collector.log, syslog:/dev/log
Clocks
======
NTP offset: Unknown (No response received from 1.datadog.pool.ntp.org.)
System UTC time: 2017-09-25 00:12:50.110178
Paths
=====
conf.d: /etc/dd-agent/conf.d
checks.d: /opt/datadog-agent/agent/checks.d
Hostnames
=========
socket-hostname: server.example.com
hostname:server.example.com
socket-fqdn: server.example.com
Checks
======
linux_proc_extras (5.17.2)
--------------------------
- instance #0 [OK]
- Collected 13 metrics, 0 events & 0 service checks
process (5.17.2)
----------------
- instance #0 [OK]
- Collected 16 metrics, 0 events & 1 service check
network (5.17.2)
----------------
- instance #0 [OK]
- Collected 19 metrics, 0 events & 0 service checks
btrfs (5.17.2)
--------------
- instance #0 [OK]
- Collected 16 metrics, 0 events & 0 service checks
ntp (5.17.2)
------------
- instance #0 [OK]
- Collected 1 metric, 0 events & 1 service check
disk (5.17.2)
-------------
- instance #0 [OK]
- Collected 12 metrics, 0 events & 0 service checks
docker_daemon (5.17.2)
----------------------
- instance #0 [OK]
- Collected 141 metrics, 0 events & 1 service check
Emitters
========
- http_emitter [OK]
====================
Dogstatsd (v 5.17.2)
====================
Status date: 2017-09-24 19:12:46 (4s ago)
Pid: 2928
Platform: Linux-4.9.0-3-amd64-x86_64-with-debian-9.1
Python Version: 2.7.13, 64bit
Logs: <stderr>, /var/log/datadog/dogstatsd.log, syslog:/dev/log
Flush count: 51867
Packet Count: 0
Packets per second: 0.0
Metric count: 1
Event count: 0
Service check count: 0
====================
Forwarder (v 5.17.2)
====================
Status date: 2017-09-24 19:12:51 (1s ago)
Pid: 2927
Platform: Linux-4.9.0-3-amd64-x86_64-with-debian-9.1
Python Version: 2.7.13, 64bit
Logs: <stderr>, /var/log/datadog/forwarder.log, syslog:/dev/log
Queue Size: 0 bytes
Queue Length: 0
Flush Count: 170892
Transactions received: 132655
Transactions flushed: 132655
Transactions rejected: 0
API Key Status: API Key is valid
======================
Trace Agent (v 5.17.2)
======================
Not running (port 8126)
```
**Additional environment details (Operating System, Cloud provider, etc):**
Operating System: Debian 9.1 Stretch
Physical Host
3 × 8 TB HDDs
**Steps to reproduce the issue:**
1. Add BTRFS integration to a host with a LUKS encrypted BTRFS RAID1 array
```
❯ cat /etc/dd-agent/conf.d/btrfs.yaml
init_config: null
instances:
- excluded_devices: []
```
2. Review metrics in DataDog UI
3. Realize they don't seem correct compared to the `btrfs` output
**Describe the results you received:**
Currently in the Web UI I'm seeing the following:
`system.disk.btrfs.used`:
<img width="1234" alt="screen shot 2017-09-24 at 8 17 52 pm" src="https://user-images.githubusercontent.com/15491304/30788157-94c50902-a165-11e7-89f7-b88dc6c30430.png">
`system.disk.btrfs.free`:
<img width="1250" alt="screen shot 2017-09-24 at 8 17 41 pm" src="https://user-images.githubusercontent.com/15491304/30788159-99da1af4-a165-11e7-8042-ed700c11d617.png">
`system.disk.btrfs.total`:
<img width="1242" alt="screen shot 2017-09-24 at 8 17 28 pm" src="https://user-images.githubusercontent.com/15491304/30788161-9fe7e1e2-a165-11e7-9b3a-9f711e3070a0.png">
`system.disk.btrfs.usage`:
<img width="1239" alt="screen shot 2017-09-24 at 8 17 14 pm" src="https://user-images.githubusercontent.com/15491304/30788163-a5cbfa12-a165-11e7-8420-5a95b9995bc3.png">
**Describe the results you expected:**
The above statistics greatly vary from what the `btrfs` commands are reporting directly on the host:
```
❯ btrfs fi usage /raid
Overall:
Device size: 21.83TiB
Device allocated: 13.70TiB
Device unallocated: 8.13TiB
Device missing: 0.00B
Used: 13.69TiB
Free (estimated): 4.07TiB (min: 4.07TiB)
Data ratio: 2.00
Metadata ratio: 2.00
Global reserve: 512.00MiB (used: 0.00B)
Data,RAID1: Size:6.84TiB, Used:6.84TiB
/dev/mapper/luks_sda 4.56TiB
/dev/mapper/luks_sdb 4.56TiB
/dev/mapper/luks_sdc 4.56TiB
Metadata,RAID1: Size:10.00GiB, Used:8.23GiB
/dev/mapper/luks_sda 5.00GiB
/dev/mapper/luks_sdb 6.00GiB
/dev/mapper/luks_sdc 9.00GiB
System,RAID1: Size:32.00MiB, Used:992.00KiB
/dev/mapper/luks_sda 32.00MiB
/dev/mapper/luks_sdc 32.00MiB
Unallocated:
/dev/mapper/luks_sda 2.71TiB
/dev/mapper/luks_sdb 2.71TiB
/dev/mapper/luks_sdc 2.71TiB
```
```
❯ btrfs fi df /raid
Data, RAID1: total=6.84TiB, used=6.84TiB
System, RAID1: total=32.00MiB, used=992.00KiB
Metadata, RAID1: total=10.00GiB, used=8.23GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
```
```
❯ lsblk /dev/mapper/luks_sdb
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
luks_sdb 254:1 0 7.3T 0 crypt /raid
```
```
❯ df -h /raid
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/luks_sdb 11T 6.9T 2.8T 72% /raid
```
I took a look at https://github.com/DataDog/integrations-core/blob/master/btrfs/check.py and the logic is a bit over my head, sadly. I'm happy to provide any additional info that may help here. I'm also game to test changes to the agent. This array is pretty critical in my setup, so I won't be able to change too much on that side, but I am happy to dump any additional info.
**Additional information you deem important (e.g. issue happens only occasionally):**
I've tried reinstalling the agent completely and resetting the integration, but it appears that it's just completely incorrect in how it's calculating the above metrics.
EDIT: I've also opened support case #109891 for this as well.
| diff --git a/btrfs/datadog_checks/btrfs/btrfs.py b/btrfs/datadog_checks/btrfs/btrfs.py
--- a/btrfs/datadog_checks/btrfs/btrfs.py
+++ b/btrfs/datadog_checks/btrfs/btrfs.py
@@ -52,10 +52,17 @@
})
BTRFS_IOC_SPACE_INFO = 0xc0109414
+BTRFS_IOC_DEV_INFO = 0xd000941e
+BTRFS_IOC_FS_INFO = 0x8400941f
TWO_LONGS_STRUCT = struct.Struct("=2Q") # 2 Longs
THREE_LONGS_STRUCT = struct.Struct("=3Q") # 3 Longs
+# https://github.com/torvalds/linux/blob/master/include/uapi/linux/btrfs.h#L173
+# https://github.com/torvalds/linux/blob/master/include/uapi/linux/btrfs.h#L182
+BTRFS_DEV_INFO_STRUCT = struct.Struct("=Q16B381Q1024B")
+BTRFS_FS_INFO_STRUCT = struct.Struct("=2Q16B4I122Q")
+
def sized_array(count):
return array.array("B", itertools.repeat(0, count))
@@ -82,14 +89,9 @@ def open(self, dir):
class BTRFS(AgentCheck):
def __init__(self, name, init_config, agentConfig, instances=None):
- AgentCheck.__init__(
- self, name, init_config,
- agentConfig, instances=instances
- )
+ AgentCheck.__init__(self, name, init_config, agentConfig, instances=instances)
if instances is not None and len(instances) > 1:
- raise Exception(
- "BTRFS check only supports one configured instance."
- )
+ raise Exception("BTRFS check only supports one configured instance.")
def get_usage(self, mountpoint):
results = []
@@ -103,34 +105,59 @@ def get_usage(self, mountpoint):
_, total_spaces = TWO_LONGS_STRUCT.unpack(ret)
# Allocate it
- buffer_size = (
- TWO_LONGS_STRUCT.size + total_spaces
- * THREE_LONGS_STRUCT.size
- )
+ buffer_size = TWO_LONGS_STRUCT.size + total_spaces * THREE_LONGS_STRUCT.size
data = sized_array(buffer_size)
TWO_LONGS_STRUCT.pack_into(data, 0, total_spaces, 0)
fcntl.ioctl(fd, BTRFS_IOC_SPACE_INFO, data)
_, total_spaces = TWO_LONGS_STRUCT.unpack_from(ret, 0)
- for offset in xrange(TWO_LONGS_STRUCT.size,
- buffer_size,
- THREE_LONGS_STRUCT.size):
-
+ for offset in xrange(TWO_LONGS_STRUCT.size, buffer_size, THREE_LONGS_STRUCT.size):
# https://github.com/spotify/linux/blob/master/fs/btrfs/ioctl.h#L40-L44
- flags, total_bytes, used_bytes =THREE_LONGS_STRUCT.unpack_from(data, offset) # noqa E501
+ flags, total_bytes, used_bytes = THREE_LONGS_STRUCT.unpack_from(data, offset)
results.append((flags, total_bytes, used_bytes))
return results
+ def get_unallocated_space(self, mountpoint):
+ unallocated_bytes = 0
+
+ with FileDescriptor(mountpoint) as fd:
+
+ # Retrieve the fs info to get the number of devices and max device id
+ fs_info = sized_array(BTRFS_FS_INFO_STRUCT.size)
+ fcntl.ioctl(fd, BTRFS_IOC_FS_INFO, fs_info)
+ fs_info = BTRFS_FS_INFO_STRUCT.unpack_from(fs_info, 0)
+ max_id, num_devices = fs_info[0], fs_info[1]
+
+ # Loop through all devices, and sum the number of unallocated bytes on each one
+ for dev_id in xrange(max_id + 1):
+ if num_devices == 0:
+ break
+ try:
+ dev_info = sized_array(BTRFS_DEV_INFO_STRUCT.size)
+ BTRFS_DEV_INFO_STRUCT.pack_into(dev_info, 0, dev_id, *([0] * 1421))
+ fcntl.ioctl(fd, BTRFS_IOC_DEV_INFO, dev_info)
+ dev_info = BTRFS_DEV_INFO_STRUCT.unpack_from(dev_info, 0)
+
+ unallocated_bytes = unallocated_bytes + dev_info[18] - dev_info[17]
+ num_devices = num_devices - 1
+
+ except IOError as e:
+ self.log.debug("Cannot get device info for device id %s: %s", dev_id, e)
+
+ if num_devices != 0:
+ # Could not retrieve the info for all the devices, skip the metric
+ return None
+ return unallocated_bytes
+
def check(self, instance):
btrfs_devices = {}
excluded_devices = instance.get('excluded_devices', [])
custom_tags = instance.get('tags', [])
for p in psutil.disk_partitions():
- if (p.fstype == 'btrfs' and p.device not in btrfs_devices
- and p.device not in excluded_devices):
+ if p.fstype == 'btrfs' and p.device not in btrfs_devices and p.device not in excluded_devices:
btrfs_devices[p.device] = p.mountpoint
if len(btrfs_devices) == 0:
@@ -140,27 +167,26 @@ def check(self, instance):
for flags, total_bytes, used_bytes in self.get_usage(mountpoint):
replication_type, usage_type = FLAGS_MAPPER[flags]
tags = [
- 'usage_type:{0}'.format(usage_type),
- 'replication_type:{0}'.format(replication_type),
+ 'usage_type:{}'.format(usage_type),
+ 'replication_type:{}'.format(replication_type),
+ "device:{}".format(device)
]
tags.extend(custom_tags)
free = total_bytes - used_bytes
usage = float(used_bytes) / float(total_bytes)
- self.gauge(
- 'system.disk.btrfs.total', total_bytes,
- tags=tags, device_name=device
- )
- self.gauge(
- 'system.disk.btrfs.used', used_bytes,
- tags=tags, device_name=device
- )
- self.gauge(
- 'system.disk.btrfs.free',
- free, tags=tags, device_name=device
- )
- self.gauge(
- 'system.disk.btrfs.usage',
- usage, tags=tags, device_name=device
+ self.gauge('system.disk.btrfs.total', total_bytes, tags=tags)
+ self.gauge('system.disk.btrfs.used', used_bytes, tags=tags)
+ self.gauge('system.disk.btrfs.free', free, tags=tags)
+ self.gauge('system.disk.btrfs.usage', usage, tags=tags)
+
+ unallocated_bytes = self.get_unallocated_space(mountpoint)
+ if unallocated_bytes is not None:
+ tags = ["device:{}".format(device)] + custom_tags
+ self.gauge("system.disk.btrfs.unallocated", unallocated_bytes, tags=tags)
+ else:
+ self.log.debug(
+ "Could not retrieve the number of unallocated bytes for all devices,"
+ " skipping metric for mountpoint {}".format(mountpoint)
)
|
[btrfs] Incorrect Metrics and misleading default dashboard
**Output of the info page**
```
❯ sudo /etc/init.d/datadog-agent info
====================
Collector (v 5.17.2)
====================
Status date: 2017-09-24 19:12:35 (13s ago)
Pid: 2931
Platform: Linux-4.9.0-3-amd64-x86_64-with-debian-9.1
Python Version: 2.7.13, 64bit
Logs: <stderr>, /var/log/datadog/collector.log, syslog:/dev/log
Clocks
======
NTP offset: Unknown (No response received from 1.datadog.pool.ntp.org.)
System UTC time: 2017-09-25 00:12:50.110178
Paths
=====
conf.d: /etc/dd-agent/conf.d
checks.d: /opt/datadog-agent/agent/checks.d
Hostnames
=========
socket-hostname: server.example.com
hostname:server.example.com
socket-fqdn: server.example.com
Checks
======
linux_proc_extras (5.17.2)
--------------------------
- instance #0 [OK]
- Collected 13 metrics, 0 events & 0 service checks
process (5.17.2)
----------------
- instance #0 [OK]
- Collected 16 metrics, 0 events & 1 service check
network (5.17.2)
----------------
- instance #0 [OK]
- Collected 19 metrics, 0 events & 0 service checks
btrfs (5.17.2)
--------------
- instance #0 [OK]
- Collected 16 metrics, 0 events & 0 service checks
ntp (5.17.2)
------------
- instance #0 [OK]
- Collected 1 metric, 0 events & 1 service check
disk (5.17.2)
-------------
- instance #0 [OK]
- Collected 12 metrics, 0 events & 0 service checks
docker_daemon (5.17.2)
----------------------
- instance #0 [OK]
- Collected 141 metrics, 0 events & 1 service check
Emitters
========
- http_emitter [OK]
====================
Dogstatsd (v 5.17.2)
====================
Status date: 2017-09-24 19:12:46 (4s ago)
Pid: 2928
Platform: Linux-4.9.0-3-amd64-x86_64-with-debian-9.1
Python Version: 2.7.13, 64bit
Logs: <stderr>, /var/log/datadog/dogstatsd.log, syslog:/dev/log
Flush count: 51867
Packet Count: 0
Packets per second: 0.0
Metric count: 1
Event count: 0
Service check count: 0
====================
Forwarder (v 5.17.2)
====================
Status date: 2017-09-24 19:12:51 (1s ago)
Pid: 2927
Platform: Linux-4.9.0-3-amd64-x86_64-with-debian-9.1
Python Version: 2.7.13, 64bit
Logs: <stderr>, /var/log/datadog/forwarder.log, syslog:/dev/log
Queue Size: 0 bytes
Queue Length: 0
Flush Count: 170892
Transactions received: 132655
Transactions flushed: 132655
Transactions rejected: 0
API Key Status: API Key is valid
======================
Trace Agent (v 5.17.2)
======================
Not running (port 8126)
```
**Additional environment details (Operating System, Cloud provider, etc):**
Operating System: Debian 9.1 Stretch
Physical Host
3 × 8 TB HDDs
**Steps to reproduce the issue:**
1. Add BTRFS integration to a host with a LUKS encrypted BTRFS RAID1 array
```
❯ cat /etc/dd-agent/conf.d/btrfs.yaml
init_config: null
instances:
- excluded_devices: []
```
2. Review metrics in DataDog UI
3. Realize they don't seem correct compared to the `btrfs` output
**Describe the results you received:**
Currently in the Web UI I'm seeing the following:
`system.disk.btrfs.used`:
<img width="1234" alt="screen shot 2017-09-24 at 8 17 52 pm" src="https://user-images.githubusercontent.com/15491304/30788157-94c50902-a165-11e7-89f7-b88dc6c30430.png">
`system.disk.btrfs.free`:
<img width="1250" alt="screen shot 2017-09-24 at 8 17 41 pm" src="https://user-images.githubusercontent.com/15491304/30788159-99da1af4-a165-11e7-8042-ed700c11d617.png">
`system.disk.btrfs.total`:
<img width="1242" alt="screen shot 2017-09-24 at 8 17 28 pm" src="https://user-images.githubusercontent.com/15491304/30788161-9fe7e1e2-a165-11e7-9b3a-9f711e3070a0.png">
`system.disk.btrfs.usage`:
<img width="1239" alt="screen shot 2017-09-24 at 8 17 14 pm" src="https://user-images.githubusercontent.com/15491304/30788163-a5cbfa12-a165-11e7-8420-5a95b9995bc3.png">
**Describe the results you expected:**
The above statistics greatly vary from what the `btrfs` commands are reporting directly on the host:
```
❯ btrfs fi usage /raid
Overall:
Device size: 21.83TiB
Device allocated: 13.70TiB
Device unallocated: 8.13TiB
Device missing: 0.00B
Used: 13.69TiB
Free (estimated): 4.07TiB (min: 4.07TiB)
Data ratio: 2.00
Metadata ratio: 2.00
Global reserve: 512.00MiB (used: 0.00B)
Data,RAID1: Size:6.84TiB, Used:6.84TiB
/dev/mapper/luks_sda 4.56TiB
/dev/mapper/luks_sdb 4.56TiB
/dev/mapper/luks_sdc 4.56TiB
Metadata,RAID1: Size:10.00GiB, Used:8.23GiB
/dev/mapper/luks_sda 5.00GiB
/dev/mapper/luks_sdb 6.00GiB
/dev/mapper/luks_sdc 9.00GiB
System,RAID1: Size:32.00MiB, Used:992.00KiB
/dev/mapper/luks_sda 32.00MiB
/dev/mapper/luks_sdc 32.00MiB
Unallocated:
/dev/mapper/luks_sda 2.71TiB
/dev/mapper/luks_sdb 2.71TiB
/dev/mapper/luks_sdc 2.71TiB
```
```
❯ btrfs fi df /raid
Data, RAID1: total=6.84TiB, used=6.84TiB
System, RAID1: total=32.00MiB, used=992.00KiB
Metadata, RAID1: total=10.00GiB, used=8.23GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
```
```
❯ lsblk /dev/mapper/luks_sdb
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
luks_sdb 254:1 0 7.3T 0 crypt /raid
```
```
❯ df -h /raid
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/luks_sdb 11T 6.9T 2.8T 72% /raid
```
I took a look at https://github.com/DataDog/integrations-core/blob/master/btrfs/check.py and the logic is a bit over my head, sadly. I'm happy to provide any additional info that may help here, and I'm also game to test changes to the agent. This array is pretty critical in my setup, so I won't be able to change too much on that side, but I'm happy to dump any additional info.
**Additional information you deem important (e.g. issue happens only occasionally):**
I've tried reinstalling the agent completely and resetting the integration, but it appears that it's just completely incorrect in how it's calculating the above metrics.
EDIT: I've also opened support case #109891 for this as well.
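For reference, a rough sketch of how the block-group flags returned by `BTRFS_IOC_SPACE_INFO` decode into a replication profile and a usage type (bit values taken from `btrfs_tree.h`; the helper and its naming are illustrative, not the check's actual code):
```python
# A combined flag such as 65 (0x40 | 0x1) means RAID10 + data,
# 129 (0x80 | 0x1) means RAID5 + data.
TYPE_BITS = {0x1: 'data', 0x2: 'system', 0x4: 'metadata'}
PROFILE_BITS = {
    0x8: 'raid0', 0x10: 'raid1', 0x20: 'dup',
    0x40: 'raid10', 0x80: 'raid5', 0x100: 'raid6',
}

def decode_flags(flags):
    usage = [name for bit, name in sorted(TYPE_BITS.items()) if flags & bit]
    profile = [name for bit, name in sorted(PROFILE_BITS.items()) if flags & bit]
    return (profile[0] if profile else 'single', '-'.join(usage) or 'unknown')

# decode_flags(17) -> ('raid1', 'data'); decode_flags(129) -> ('raid5', 'data')
```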
| diff --git a/btrfs/datadog_checks/btrfs/btrfs.py b/btrfs/datadog_checks/btrfs/btrfs.py
--- a/btrfs/datadog_checks/btrfs/btrfs.py
+++ b/btrfs/datadog_checks/btrfs/btrfs.py
@@ -23,11 +23,16 @@
SINGLE = "single"
RAID0 = "raid0"
RAID1 = "raid1"
+RAID5 = "raid5"
+RAID6 = "raid6"
RAID10 = "raid10"
DUP = "dup"
UNKNOWN = "unknown"
+GLB_RSV = "globalreserve"
-FLAGS_MAPPER = defaultdict(lambda: (SINGLE, UNKNOWN), {
+# https://github.com/torvalds/linux/blob/98820a7e244b17b8a4d9e9d1ff9d3b4e5bfca58b/include/uapi/linux/btrfs_tree.h#L829-L840
+# https://github.com/torvalds/linux/blob/98820a7e244b17b8a4d9e9d1ff9d3b4e5bfca58b/include/uapi/linux/btrfs_tree.h#L879
+FLAGS_MAPPER = defaultdict(lambda: (SINGLE, UNKNOWN), {
1: (SINGLE, DATA),
2: (SINGLE, SYSTEM),
4: (SINGLE, METADATA),
@@ -48,7 +53,15 @@
66: (RAID10, SYSTEM),
68: (RAID10, METADATA),
69: (RAID10, MIXED),
-
+ 129: (RAID5, DATA),
+ 130: (RAID5, SYSTEM),
+ 132: (RAID5, METADATA),
+ 133: (RAID5, MIXED),
+ 257: (RAID6, DATA),
+ 258: (RAID6, SYSTEM),
+ 260: (RAID6, METADATA),
+ 261: (RAID6, MIXED),
+ 562949953421312: (SINGLE, GLB_RSV)
})
BTRFS_IOC_SPACE_INFO = 0xc0109414
|
[docker_daemon] Spurious warnings due to race condition with stopped containers
**Output of the [info page](https://docs.datadoghq.com/agent/faq/agent-commands/#agent-status-and-information)**
```text
====================
Collector (v 5.23.0)
====================
Status date: 2018-05-20 23:50:24 (19s ago)
Pid: 29
Platform: Linux-4.9.49-moby-x86_64-with-debian-9.4
Python Version: 2.7.14, 64bit
Logs: <stderr>, /var/log/datadog/collector.log
Clocks
======
NTP offset: -0.003 s
System UTC time: 2018-05-20 23:50:43.116200
Paths
=====
conf.d: /etc/dd-agent/conf.d
checks.d: Not found
Hostnames
=========
ec2-hostname: ip-172-32-2-162.ap-southeast-2.compute.internal
local-ipv4: 172.32.2.162
local-hostname: ip-172-32-2-162.ap-southeast-2.compute.internal
socket-hostname: 3c27839860c5
public-hostname: ec2-<REDACTED>.ap-southeast-2.compute.amazonaws.com
hostname: i-080cb851fe5ccaf61
instance-id: i-080cb851fe5ccaf61
public-ipv4: <REDACTED>
socket-fqdn: 3c27839860c5
Checks
======
ntp (1.2.0)
-----------
- instance #0 [OK]
- Collected 1 metric, 0 events & 1 service check
disk (1.2.0)
------------
- instance #0 [OK]
- Collected 40 metrics, 0 events & 0 service checks
network (1.5.0)
---------------
- instance #0 [OK]
- Collected 0 metrics, 0 events & 0 service checks
docker_daemon (1.9.0)
---------------------
- instance #0 [OK]
- Collected 121 metrics, 0 events & 1 service check
Emitters
========
- http_emitter [OK]
```
**Additional environment details (Operating System, Cloud provider, etc):**
Docker for AWS version 17.09.0 CE.
**Steps to reproduce the issue:**
1. Run a Docker Swarm with a number of services being scaled regularly (dozens of containers starting and stopping).
2. Run a docker-dd-agent container.
3. Observe warnings in container logs - see results below.
4. Observe reported `docker_daemon` integration issues in Datadog Infrastructure List.
**Describe the results you received:**
The Datadog container regularly prints warnings like the following:
```
| WARNING | dd.collector | checks.docker_daemon(__init__.py:717) | Failed to report IO metrics from file /host/proc/30042/net/dev. Exception: [Errno 2] No such file or directory: '/host/proc/30042/net/dev'
```
**Describe the results you expected:**
These errors should not be classified as warnings. Other methods in the same check, including `_crawl_container_pids`, `_report_cgroup_metrics`, and `_parse_cgroup_file`, catch IOErrors and log a debug message instead of a warning. The same should happen in `_report_net_metrics`.
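For illustration, a minimal sketch of that debug-instead-of-warning pattern, assuming a plain read of the proc net file (the helper name and arguments are made up for the example):
```python
import errno
import logging

log = logging.getLogger(__name__)

def read_proc_net_dev(pid, procfs_root='/host/proc'):
    """Read /proc/<pid>/net/dev, tolerating containers that already stopped."""
    path = '{}/{}/net/dev'.format(procfs_root, pid)
    try:
        with open(path) as f:
            return f.readlines()
    except IOError as e:
        if e.errno == errno.ENOENT:
            # The container raced to finish between the API call and the read;
            # this is expected churn, so log at debug instead of warning.
            log.debug('Cannot read %s, container likely stopped: %s', path, e)
            return None
        raise
```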
**Additional information you deem important (e.g. issue happens only occasionally):**
This warning is common in busy environments. It's logged around 100 times per day in our production environment. This would not be a big concern except that any warning apparently means Datadog considers the integration to have issues; see step 4 above.
| diff --git a/docker_daemon/datadog_checks/docker_daemon/docker_daemon.py b/docker_daemon/datadog_checks/docker_daemon/docker_daemon.py
--- a/docker_daemon/datadog_checks/docker_daemon/docker_daemon.py
+++ b/docker_daemon/datadog_checks/docker_daemon/docker_daemon.py
@@ -759,9 +759,9 @@ def _report_net_metrics(self, container, tags):
m_func(self, "docker.net.bytes_rcvd", long(x[0]), net_tags)
m_func(self, "docker.net.bytes_sent", long(x[8]), net_tags)
- except Exception as e:
+ except IOError as e:
# It is possible that the container got stopped between the API call and now
- self.warning("Failed to report IO metrics from file {0}. Exception: {1}".format(proc_net_file, e))
+ self.log.debug("Cannot read network interface file, container likely raced to finish : {0}".format(e))
def _invalidate_network_mapping_cache(self, api_events):
for ev in api_events:
|
[MySQL] Error while fetching mysql pid from psutil (psutil.NoSuchProcess)
I'm using dd-agent 5.22.3 on Gentoo Linux. Occasionally (once or twice a week), the MySQL check fails with the following error:
```
ERROR (mysql.py:811): Error while fetching mysql pid from psutil
Traceback (most recent call last):
File "/opt/datadog/latest/venv/lib/python2.7/site-packages/datadog_checks/mysql/mysql.py", line 808, in _get_server_pid
if proc.name() == "mysqld":
File "/opt/datadog/latest/venv/lib/python2.7/site-packages/psutil/__init__.py", line 561, in name
name = self._proc.name()
File "/opt/datadog/latest/venv/lib/python2.7/site-packages/psutil/_pslinux.py", line 1089, in wrapper
raise NoSuchProcess(self.pid, self._name)
NoSuchProcess: psutil.NoSuchProcess process no longer exists (pid=8762)
```
The [method _get_server_pid](https://github.com/DataDog/integrations-core/blob/5.22.x/mysql/datadog_checks/mysql/mysql.py#L783) loops over all running processes to find the pid of MySQL. Processes can vanish during this loop, which then triggers this error. Please note that the pid mentioned in the above error is _not_ the pid of my MySQL process. The MySQL process is not crashing or anything, it's running totally fine. The error happens when a completely unrelated process vanishes while the loop is running.
I think `psutil.NoSuchProcess` errors raised by `proc.name()` should be ignored.
(Alternatively, dd-agent could maybe use `systemctl show -p MainPID mysqld.service` to fetch the pid of MySQL.)
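A minimal sketch of the suggested behaviour, assuming the stock `psutil` API (the standalone helper is illustrative, not the check's real method):
```python
import psutil

def find_mysqld_pid():
    """Scan running processes for mysqld, tolerating processes that vanish mid-scan."""
    for proc in psutil.process_iter():
        try:
            if proc.name() == 'mysqld':
                return proc.pid
        except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
            # An unrelated process exited (or is inaccessible) between listing
            # and inspection; skip it instead of failing the whole scan.
            continue
    return None
```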
| diff --git a/mysql/datadog_checks/mysql/mysql.py b/mysql/datadog_checks/mysql/mysql.py
--- a/mysql/datadog_checks/mysql/mysql.py
+++ b/mysql/datadog_checks/mysql/mysql.py
@@ -25,6 +25,7 @@
RATE = "rate"
COUNT = "count"
MONOTONIC = "monotonic_count"
+PROC_NAME = 'mysqld'
# Vars found in "SHOW STATUS;"
STATUS_VARS = {
@@ -789,10 +790,10 @@ def _collect_system_metrics(self, host, db, tags):
self.warning("Error while reading mysql (pid: %s) procfs data\n%s"
% (pid, traceback.format_exc()))
- def _get_server_pid(self, db):
- pid = None
-
- # Try to get pid from pid file, it can fail for permission reason
+ def _get_pid_file_variable(self, db):
+ """
+ Get the `pid_file` variable
+ """
pid_file = None
try:
with closing(db.cursor()) as cursor:
@@ -801,6 +802,13 @@ def _get_server_pid(self, db):
except Exception:
self.warning("Error while fetching pid_file variable of MySQL.")
+ return pid_file
+
+ def _get_server_pid(self, db):
+ pid = None
+
+ # Try to get pid from pid file, it can fail for permission reason
+ pid_file = self._get_pid_file_variable(db)
if pid_file is not None:
self.log.debug("pid file: %s" % str(pid_file))
try:
@@ -812,12 +820,14 @@ def _get_server_pid(self, db):
# If pid has not been found, read it from ps
if pid is None and PSUTIL_AVAILABLE:
- try:
- for proc in psutil.process_iter():
- if proc.name() == "mysqld":
+ for proc in psutil.process_iter():
+ try:
+ if proc.name() == PROC_NAME:
pid = proc.pid
- except Exception:
- self.log.exception("Error while fetching mysql pid from psutil")
+ except (psutil.AccessDenied, psutil.ZombieProcess, psutil.NoSuchProcess):
+ continue
+ except Exception:
+ self.log.exception("Error while fetching mysql pid from psutil")
return pid
|
NTP check broken: Servname not supported for ai_socktype (missing /etc/services from netbase)
Just upgraded from 5.23.0 to 5.24.0 using the official docker image to resolve issue #1346, and it broke the NTP check with the following error:
```
2018-05-17 11:30:31 UTC | INFO | dd.collector | config(config.py:1249) | initialized checks.d checks: ['kube_dns', 'network', 'kubernetes', 'ntp', 'docker_daemon', 'http_check', 'system_core', 'redisdb', 'disk', 'kube_proxy']
2018-05-17 11:30:31 UTC | INFO | dd.collector | config(config.py:1250) | initialization failed checks.d checks: []
2018-05-17 11:30:31 UTC | INFO | dd.collector | collector(agent.py:166) | Check reload was successful. Running 11 checks.
2018-05-17 11:30:35 UTC | ERROR | dd.collector | checks.ntp(__init__.py:829) | Check 'ntp' instance #0 failed
Traceback (most recent call last):
File "/opt/datadog-agent/agent/checks/__init__.py", line 812, in run
self.check(copy.deepcopy(instance))
File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/datadog_checks/ntp/ntp.py", line 33, in check
ntp_stats = ntplib.NTPClient().request(**req_args)
File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/ntplib.py", line 292, in request
addrinfo = socket.getaddrinfo(host, port)[0]
gaierror: [Errno -8] Servname not supported for ai_socktype
```
**Output of the [info page](https://docs.datadoghq.com/agent/faq/agent-commands/#agent-status-and-information)**
```text
Warning: Known bug in Linux Kernel 3.18+ causes 'status' to fail.
Calling 'info', instead...
====================
Collector (v 5.24.0)
====================
Status date: 2018-05-17 11:24:41 (6s ago)
Pid: 37
Platform: Linux-4.4.111+-x86_64-with-debian-9.4
Python Version: 2.7.14, 64bit
Logs: <stderr>, /var/log/datadog/collector.log
Clocks
======
NTP offset: Unknown ([Errno -8] Servname not supported for ai_socktype)
System UTC time: 2018-05-17 11:24:47.882946
Paths
=====
conf.d: /etc/dd-agent/conf.d
checks.d: Not found
Hostnames
=========
socket-hostname: dd-agent-bsljm
hostname: <snip>.internal
socket-fqdn: dd-agent-bsljm
Checks
======
kube_dns (1.3.0)
----------------
- instance #0 [OK]
- Collected 44 metrics, 0 events & 0 service checks
network (1.5.0)
---------------
- instance #0 [OK]
- Collected 122 metrics, 0 events & 0 service checks
kubernetes (1.5.0)
------------------
- instance #0 [OK]
- Collected 402 metrics, 0 events & 3 service checks
ntp (1.2.0)
-----------
- Collected 0 metrics, 0 events & 0 service checks
docker_daemon (1.10.0)
----------------------
- instance #0 [OK]
- Collected 527 metrics, 0 events & 1 service check
http_check (2.0.1)
------------------
- instance #0 [OK]
- instance #1 [OK]
- Collected 8 metrics, 0 events & 4 service checks
system_core (1.0.0)
-------------------
- instance #0 [OK]
- Collected 41 metrics, 0 events & 0 service checks
redisdb (1.5.0)
---------------
- instance #0 [OK]
- Collected 44 metrics, 0 events & 1 service check
- Dependencies:
- redis: 2.10.5
disk (1.2.0)
------------
- instance #0 [OK]
- Collected 58 metrics, 0 events & 0 service checks
kube_proxy (Unknown Wheel)
--------------------------
- Collected 0 metrics, 0 events & 0 service checks
Emitters
========
- http_emitter [OK]
====================
Dogstatsd (v 5.24.0)
====================
Status date: 2018-05-17 11:24:48 (0s ago)
Pid: 34
Platform: Linux-4.4.111+-x86_64-with-debian-9.4
Python Version: 2.7.14, 64bit
Logs: <stderr>, /var/log/datadog/dogstatsd.log
Flush count: 119
Packet Count: 7177
Packets per second: 5.5
Metric count: 109
Event count: 0
Service check count: 0
====================
Forwarder (v 5.24.0)
====================
Status date: 2018-05-17 11:24:48 (0s ago)
Pid: 33
Platform: Linux-4.4.111+-x86_64-with-debian-9.4
Python Version: 2.7.14, 64bit
Logs: <stderr>, /var/log/datadog/forwarder.log
Queue Size: 1559 bytes
Queue Length: 1
Flush Count: 387
Transactions received: 302
Transactions flushed: 301
Transactions rejected: 0
API Key Status: API Key is valid
======================
Trace Agent (v 5.24.0)
======================
Pid: 32
Uptime: 1193 seconds
Mem alloc: 4502136 bytes
Hostname: <snip>.internal
Receiver: 0.0.0.0:8126
API Endpoint: https://trace.agent.datadoghq.com
--- Receiver stats (1 min) ---
From go 1.9.2 (gc-amd64-linux), client v0.5.0
Traces received: 19 (9127 bytes)
Spans received: 38
Services received: 0 (0 bytes)
--- Writer stats (1 min) ---
Traces: 6 payloads, 18 traces, 4801 bytes
Stats: 4 payloads, 5 stats buckets, 4440 bytes
Services: 0 payloads, 0 services, 0 bytes
```
**Additional environment details (Operating System, Cloud provider, etc):**
- Official docker image 12.6.5240
- Google Kubernetes Engine 1.8.10.gke0 with Container Optimized OS
**Steps to reproduce the issue:**
1. install datadog agent on kubernetes using official docker image 12.6.5240
2. check the logs
**Describe the results you received:**
The `Servname not supported for ai_socktype` error happens when the NTP check is executed.
**Describe the results you expected:**
No error.
**Additional information you deem important (e.g. issue happens only occasionally):**
Did not happen with 5.23.0
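For context, a standalone sketch of the fallback the patch below implements: when the `ntp` service name cannot be resolved (no `/etc/services` from netbase), fall back to the numeric port.
```python
import socket

def resolve_ntp_port(host, port='ntp'):
    """Return a usable NTP port even when /etc/services is missing."""
    try:
        socket.getaddrinfo(host, port)
    except socket.gaierror:
        # Service-name lookup failed (missing netbase); use the well-known port.
        port = 123
    return port
```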
| diff --git a/ntp/datadog_checks/__init__.py b/ntp/datadog_checks/__init__.py
--- a/ntp/datadog_checks/__init__.py
+++ b/ntp/datadog_checks/__init__.py
@@ -0,0 +1,5 @@
+# (C) Datadog, Inc. 2018
+# All rights reserved
+# Licensed under a 3-clause BSD style license (see LICENSE)
+
+__path__ = __import__('pkgutil').extend_path(__path__, __name__)
diff --git a/ntp/datadog_checks/ntp/__about__.py b/ntp/datadog_checks/ntp/__about__.py
new file mode 100644
--- /dev/null
+++ b/ntp/datadog_checks/ntp/__about__.py
@@ -0,0 +1,3 @@
+
+
+__version__ = "1.2.0"
diff --git a/ntp/datadog_checks/ntp/__init__.py b/ntp/datadog_checks/ntp/__init__.py
--- a/ntp/datadog_checks/ntp/__init__.py
+++ b/ntp/datadog_checks/ntp/__init__.py
@@ -1,7 +1,7 @@
-from . import ntp
+# (C) Datadog, Inc. 2018
+# All rights reserved
+# Licensed under Simplified BSD License (see LICENSE)
+from .ntp import NtpCheck
+from .__about__ import __version__
-NtpCheck = ntp.NtpCheck
-
-__version__ = "1.2.0"
-
-__all__ = ['ntp']
+__all__ = ['NtpCheck', '__version__']
diff --git a/ntp/datadog_checks/ntp/ntp.py b/ntp/datadog_checks/ntp/ntp.py
--- a/ntp/datadog_checks/ntp/ntp.py
+++ b/ntp/datadog_checks/ntp/ntp.py
@@ -1,21 +1,40 @@
-# (C) Datadog, Inc. 2010-2016
+# (C) Datadog, Inc. 2018
# All rights reserved
# Licensed under Simplified BSD License (see LICENSE)
+import random
+import socket
-# 3p
import ntplib
+from datadog_checks.checks import AgentCheck
-# project
-from checks import AgentCheck
-from utils.ntp import NTPUtil
DEFAULT_OFFSET_THRESHOLD = 60 # in seconds
+DEFAULT_HOST = '{}.datadog.pool.ntp.org'.format(random.randint(0, 3))
+DEFAULT_VERSION = 3
+DEFAULT_TIMEOUT = 1.0 # in seconds
+DEFAULT_PORT = 'ntp'
+DEFAULT_PORT_NUM = 123
class NtpCheck(AgentCheck):
DEFAULT_MIN_COLLECTION_INTERVAL = 900 # in seconds
+ def _get_service_port(self, instance):
+ """
+ Get the ntp server port
+ """
+ host = instance.get('host', DEFAULT_HOST)
+ port = instance.get('port', DEFAULT_PORT)
+ # default port is the name of the service but lookup would fail
+ # if the /etc/services file is missing. In that case, fallback to numeric
+ try:
+ socket.getaddrinfo(host, port)
+ except socket.gaierror:
+ port = DEFAULT_PORT_NUM
+
+ return port
+
def check(self, instance):
service_check_msg = None
offset_threshold = instance.get('offset_threshold', DEFAULT_OFFSET_THRESHOLD)
@@ -23,16 +42,22 @@ def check(self, instance):
try:
offset_threshold = int(offset_threshold)
except (TypeError, ValueError):
- raise Exception('Must specify an integer value for offset_threshold. Configured value is %s' % repr(offset_threshold))
+ msg = "Must specify an integer value for offset_threshold. Configured value is {}".format(offset_threshold)
+ raise Exception(msg)
- req_args = NTPUtil().args
+ req_args = {
+ 'host': instance.get('host', DEFAULT_HOST),
+ 'port': self._get_service_port(instance),
+ 'version': int(instance.get('version', DEFAULT_VERSION)),
+ 'timeout': float(instance.get('timeout', DEFAULT_TIMEOUT)),
+ }
- self.log.debug("Using ntp host: {0}".format(req_args['host']))
+ self.log.debug("Using ntp host: {}".format(req_args['host']))
try:
ntp_stats = ntplib.NTPClient().request(**req_args)
except ntplib.NTPException:
- self.log.debug("Could not connect to NTP Server {0}".format(
+ self.log.debug("Could not connect to NTP Server {}".format(
req_args['host']))
status = AgentCheck.UNKNOWN
ntp_ts = None
@@ -46,7 +71,8 @@ def check(self, instance):
if abs(ntp_offset) > offset_threshold:
status = AgentCheck.CRITICAL
- service_check_msg = "Offset {0} secs higher than offset threshold ({1} secs)".format(ntp_offset, offset_threshold)
+ service_check_msg = "Offset {} secs higher than offset threshold ({} secs)".format(ntp_offset,
+ offset_threshold)
else:
status = AgentCheck.OK
diff --git a/ntp/setup.py b/ntp/setup.py
--- a/ntp/setup.py
+++ b/ntp/setup.py
@@ -1,84 +1,39 @@
-# Always prefer setuptools over distutils
+# (C) Datadog, Inc. 2018
+# All rights reserved
+# Licensed under a 3-clause BSD style license (see LICENSE)
from setuptools import setup
-# To use a consistent encoding
-from codecs import open
+from codecs import open # To use a consistent encoding
from os import path
-import json
-import re
+HERE = path.dirname(path.abspath(__file__))
-here = path.abspath(path.dirname(__file__))
-
-def parse_req_line(line):
- line = line.strip()
- if not line or line.startswith('--hash') or line[0] == '#':
- return None
- req = line.rpartition('#')
- if len(req[1]) == 0:
- line = req[2].strip()
- else:
- line = req[1].strip()
-
- if '--hash=' in line:
- line = line[:line.find('--hash=')].strip()
- if ';' in line:
- line = line[:line.find(';')].strip()
- if '\\' in line:
- line = line[:line.find('\\')].strip()
-
- return line
+# Get version info
+ABOUT = {}
+with open(path.join(HERE, 'datadog_checks', 'ntp', '__about__.py')) as f:
+ exec(f.read(), ABOUT)
# Get the long description from the README file
-with open(path.join(here, 'README.md'), encoding='utf-8') as f:
+with open(path.join(HERE, 'README.md'), encoding='utf-8') as f:
long_description = f.read()
-# Parse requirements
-runtime_reqs = ['datadog_checks_base']
-with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
- for line in f.readlines():
- req = parse_req_line(line)
- if req:
- runtime_reqs.append(req)
-def read(*parts):
- with open(path.join(here, *parts), 'r') as fp:
- return fp.read()
-
-def find_version(*file_paths):
- version_file = read(*file_paths)
- version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
- version_file, re.M)
- if version_match:
- return version_match.group(1)
- raise RuntimeError("Unable to find version string.")
-
-# https://packaging.python.org/guides/single-sourcing-package-version/
-version = find_version("datadog_checks", "ntp", "__init__.py")
+# Parse requirements
+def get_requirements(fpath):
-manifest_version = None
-with open(path.join(here, 'manifest.json'), encoding='utf-8') as f:
- manifest = json.load(f)
- manifest_version = manifest.get('version')
+ with open(path.join(HERE, fpath), encoding='utf-8') as f:
+ return f.readlines()
-if version != manifest_version:
- raise Exception("Inconsistent versioning in module and manifest - aborting wheel build")
setup(
name='datadog-ntp',
- version=version,
+ version=ABOUT['__version__'],
description='The NTP check',
long_description=long_description,
keywords='datadog agent ntp check',
-
- # The project's main homepage.
url='https://github.com/DataDog/integrations-core',
-
- # Author details
author='Datadog',
author_email='packages@datadoghq.com',
-
- # License
- license='MIT',
+ license='BSD',
# See https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers=[
@@ -95,24 +50,12 @@ def find_version(*file_paths):
packages=['datadog_checks.ntp'],
# Run-time dependencies
- install_requires=list(set(runtime_reqs)),
-
- # Development dependencies, run with:
- # $ pip install -e .[dev]
- extras_require={
- 'dev': [
- 'check-manifest',
- 'datadog_agent_tk>=5.15',
- ],
- },
+ install_requires=get_requirements('requirements.in')+[
+ 'datadog_checks_base',
+ ],
# Testing setup and dependencies
- tests_require=[
- 'nose',
- 'coverage',
- 'datadog_agent_tk>=5.15',
- ],
- test_suite='nose.collector',
+ tests_require=get_requirements('requirements-dev.txt'),
# Extra files to ship with the wheel package
package_data={b'datadog_checks.ntp': ['conf.yaml.default']},
|
Instructions to run integration tests fail on tox
**Additional environment details (Operating System, Cloud provider, etc):**
Ubuntu 14.04, Python 2.7.6
**Steps to reproduce the issue:**
1. Follow instructions at https://github.com/DataDog/integrations-core/blob/e3985fe43853f6ee3bb11e8d9d20a18ca12ae340/docs/dev/README.md
**Describe the results you received:**
```
$ tox
ERROR: No setup.py file found. The expected location is:
/home/sday/src/DataDog/integrations-core/setup.py
You can
1. Create one:
https://packaging.python.org/tutorials/distributing-packages/#setup-py
2. Configure tox to avoid running sdist:
http://tox.readthedocs.io/en/latest/example/general.html#avoiding-expensive-sdist
```
**Describe the results you expected:**
Tests should run.
**Additional information you deem important (e.g. issue happens only occasionally):**
| diff --git a/kubernetes_state/datadog_checks/kubernetes_state/kubernetes_state.py b/kubernetes_state/datadog_checks/kubernetes_state/kubernetes_state.py
--- a/kubernetes_state/datadog_checks/kubernetes_state/kubernetes_state.py
+++ b/kubernetes_state/datadog_checks/kubernetes_state/kubernetes_state.py
@@ -17,7 +17,7 @@
METRIC_TYPES = ['counter', 'gauge']
WHITELISTED_WAITING_REASONS = ['ErrImagePull']
-WHITELISTED_TERMINATED_REASONS = ['OOMKilled','ContainerCannotRun','Error']
+WHITELISTED_TERMINATED_REASONS = ['OOMKilled', 'ContainerCannotRun', 'Error']
class KubernetesState(PrometheusCheck):
@@ -81,8 +81,8 @@ def __init__(self, name, init_config, agentConfig, instances=None):
'kube_pod_container_resource_requests_cpu_cores': 'container.cpu_requested',
'kube_pod_container_resource_requests_memory_bytes': 'container.memory_requested',
'kube_pod_container_status_ready': 'container.ready',
- 'kube_pod_container_status_restarts': 'container.restarts', # up to kube-state-metrics 1.1.x
- 'kube_pod_container_status_restarts_total': 'container.restarts', # from kube-state-metrics 1.2.0
+ 'kube_pod_container_status_restarts': 'container.restarts', # up to kube-state-metrics 1.1.x
+ 'kube_pod_container_status_restarts_total': 'container.restarts', # from kube-state-metrics 1.2.0
'kube_pod_container_status_running': 'container.running',
'kube_pod_container_resource_requests_nvidia_gpu_devices': 'container.gpu.request',
'kube_pod_container_resource_limits_nvidia_gpu_devices': 'container.gpu.limit',
@@ -167,7 +167,8 @@ def __init__(self, name, init_config, agentConfig, instances=None):
}
}
- extra_labels = instances[0].get("label_joins", {}) # We do not support more than one instance of kube-state-metrics
+ # We do not support more than one instance of kube-state-metrics
+ extra_labels = instances[0].get("label_joins", {})
self.label_joins.update(extra_labels)
self.label_to_hostname = 'node'
@@ -233,8 +234,8 @@ def _condition_to_tag_check(self, metric, base_sc_name, mapping, tags=None):
"""
Metrics from kube-state-metrics have changed
For example:
- kube_node_status_condition{condition="Ready",node="ip-172-33-39-189.eu-west-1.compute.internal",status="true"} 1
- kube_node_status_condition{condition="OutOfDisk",node="ip-172-33-57-130.eu-west-1.compute.internal",status="false"} 1
+ kube_node_status_condition{condition="Ready",node="ip-172-33-39-189.eu-west-1.compute",status="true"} 1
+ kube_node_status_condition{condition="OutOfDisk",node="ip-172-33-57-130.eu-west-1.compute",status="false"} 1
metric {
label { name: "condition", value: "true"
}
@@ -253,27 +254,45 @@ def _condition_to_tag_check(self, metric, base_sc_name, mapping, tags=None):
mapping = condition_map['mapping']
if base_sc_name == 'kubernetes_state.pod.phase':
- message = "%s is currently reporting %s" % (self._label_to_tag('pod', metric.label), self._label_to_tag('phase', metric.label))
+ message = "{} is currently reporting {}".format(self._label_to_tag('pod', metric.label),
+ self._label_to_tag('phase', metric.label))
else:
- message = "%s is currently reporting %s" % (self._label_to_tag('node', metric.label), self._label_to_tag('condition', metric.label))
+ message = "{} is currently reporting {}".format(self._label_to_tag('node', metric.label),
+ self._label_to_tag('condition', metric.label))
if condition_map['service_check_name'] is None:
- self.log.debug("Unable to handle %s - unknown condition %s" % (service_check_name, label_value))
+ self.log.debug("Unable to handle {} - unknown condition {}".format(service_check_name, label_value))
else:
self.service_check(service_check_name, mapping[label_value], tags=tags, message=message)
- self.log.debug("%s %s %s" % (service_check_name, mapping[label_value], tags))
+ self.log.debug("{} {} {}".format(service_check_name, mapping[label_value], tags))
def _get_metric_condition_map(self, base_sc_name, labels):
if base_sc_name == 'kubernetes_state.node':
switch = {
- 'Ready': {'service_check_name': base_sc_name + '.ready', 'mapping': self.condition_to_status_positive},
- 'OutOfDisk': {'service_check_name': base_sc_name + '.out_of_disk', 'mapping': self.condition_to_status_negative},
- 'DiskPressure': {'service_check_name': base_sc_name + '.disk_pressure', 'mapping': self.condition_to_status_negative},
- 'NetworkUnavailable': {'service_check_name': base_sc_name + '.network_unavailable', 'mapping': self.condition_to_status_negative},
- 'MemoryPressure': {'service_check_name': base_sc_name + '.memory_pressure', 'mapping': self.condition_to_status_negative}
+ 'Ready': {
+ 'service_check_name': base_sc_name + '.ready',
+ 'mapping': self.condition_to_status_positive
+ },
+ 'OutOfDisk': {
+ 'service_check_name': base_sc_name + '.out_of_disk',
+ 'mapping': self.condition_to_status_negative
+ },
+ 'DiskPressure': {
+ 'service_check_name': base_sc_name + '.disk_pressure',
+ 'mapping': self.condition_to_status_negative
+ },
+ 'NetworkUnavailable': {
+ 'service_check_name': base_sc_name + '.network_unavailable',
+ 'mapping': self.condition_to_status_negative
+ },
+ 'MemoryPressure': {
+ 'service_check_name': base_sc_name + '.memory_pressure',
+ 'mapping': self.condition_to_status_negative
+ }
}
label_value = self._extract_label_value('status', labels)
- return label_value, switch.get(self._extract_label_value('condition', labels), {'service_check_name': None, 'mapping': None})
+ return label_value, switch.get(self._extract_label_value('condition', labels),
+ {'service_check_name': None, 'mapping': None})
elif base_sc_name == 'kubernetes_state.pod.phase':
label_value = self._extract_label_value('phase', labels)
@@ -374,7 +393,9 @@ def kube_cronjob_next_schedule_time(self, message, **kwargs):
on_schedule = int(metric.gauge.value) - curr_time
tags = [self._format_tag(label.name, label.value) for label in metric.label] + self.custom_tags
if on_schedule < 0:
- message = "The service check scheduled at %s is %s seconds late" % (time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(int(metric.gauge.value))), on_schedule)
+ message = "The service check scheduled at {} is {} seconds late".format(
+ time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(int(metric.gauge.value))), on_schedule
+ )
self.service_check(check_basename, self.CRITICAL, tags=tags, message=message)
else:
self.service_check(check_basename, self.OK, tags=tags)
@@ -414,7 +435,6 @@ def kube_job_status_failed(self, message, **kwargs):
tags.append(self._format_tag(label.name, label.value))
self.job_failed_count[frozenset(tags)] += metric.gauge.value
-
def kube_job_status_succeeded(self, message, **kwargs):
for metric in message.metric:
tags = [] + self.custom_tags
@@ -510,7 +530,7 @@ def kube_resourcequota(self, message, **kwargs):
def kube_limitrange(self, message, **kwargs):
""" Resource limits by consumer type. """
- # type's cardinality is low: https://github.com/kubernetes/kubernetes/blob/v1.6.1/pkg/api/v1/types.go#L3872-L3879
+ # type's cardinality's low: https://github.com/kubernetes/kubernetes/blob/v1.6.1/pkg/api/v1/types.go#L3872-L3879
# idem for resource: https://github.com/kubernetes/kubernetes/blob/v1.6.1/pkg/api/v1/types.go#L3342-L3352
# idem for constraint: https://github.com/kubernetes/kubernetes/blob/v1.6.1/pkg/api/v1/types.go#L3882-L3901
metric_base_name = self.NAMESPACE + '.limitrange.{}.{}'
diff --git a/tasks/constants.py b/tasks/constants.py
--- a/tasks/constants.py
+++ b/tasks/constants.py
@@ -41,6 +41,7 @@
'kafka_consumer',
'kube_proxy',
'kubelet',
+ 'kubernetes_state',
'kyototycoon',
'lighttpd',
'linkerd',
|
Fargate check fails when there are stopped containers in the task
I'm running the agent as a sidecar in a Fargate task as described [here](https://www.datadoghq.com/blog/monitor-aws-fargate/). The fargate check continuously fails with the following error:
```text
[ AGENT ] 2018-07-27 06:30:33 UTC | ERROR | (runner.go:277 in work) | Error running check ecs_fargate: [{"message": "'NoneType' object has no attribute '__getitem__'", "traceback": "Traceback (most recent call last):
File \"/opt/datadog-agent/embedded/lib/python2.7/site-packages/datadog_checks/checks/base.py\", line 303, in run
self.check(copy.deepcopy(self.instances[0]))
File \"/opt/datadog-agent/embedded/lib/python2.7/site-packages/datadog_checks/ecs_fargate/ecs_fargate.py\", line 126, in check
self.rate('ecs.fargate.cpu.system', container_stats['cpu_stats']['system_cpu_usage'], tags)
TypeError: 'NoneType' object has no attribute '__getitem__'"}]
```
After some investigation I found that the ECS stats endpoint returns stats for stopped containers as `null`. I use a volume in the task, and Fargate creates a `~internal~ecs-emptyvolume-source` container that is immediately stopped, causing a stats entry with the container id but no stats. The agent then fails to handle the null value.
This applies not just to volumes but to any container that is stopped, for example a container that runs a command and exits 0.
You can see an example of the `metadata` and `stats` output [here](https://gist.github.com/zlangbert/41540dd857dbbb35ba6831d32f95968b).
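A small sketch of the defensive lookups the check needs, assuming the payload shape shown in the gist (the helper is hypothetical):
```python
def extract_cpu_metrics(container_stats):
    """Pull CPU counters out of one container's stats entry, which may be null."""
    if not container_stats:
        # Stopped containers (e.g. ~internal~ecs-emptyvolume-source) are
        # reported with a null stats entry by the ECS stats endpoint.
        return None, None
    cpu_stats = container_stats.get('cpu_stats', {})
    system = cpu_stats.get('system_cpu_usage')
    total = cpu_stats.get('cpu_usage', {}).get('total_usage')
    return system, total
```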
**Additional environment details (Operating System, Cloud provider, etc):**
AWS Fargate platform version 1.1.0
**Additional information you deem important (e.g. issue happens only occasionally):**
This is related to support ticket 156356
| diff --git a/ecs_fargate/datadog_checks/ecs_fargate/ecs_fargate.py b/ecs_fargate/datadog_checks/ecs_fargate/ecs_fargate.py
--- a/ecs_fargate/datadog_checks/ecs_fargate/ecs_fargate.py
+++ b/ecs_fargate/datadog_checks/ecs_fargate/ecs_fargate.py
@@ -1,12 +1,10 @@
# (C) Datadog, Inc. 2010-2017
# All rights reserved
# Licensed under Simplified BSD License (see LICENSE)
-
-# 3rd party
import requests
+from six import iteritems
-# project
-from checks import AgentCheck
+from datadog_checks.checks import AgentCheck
# Fargate related constants
EVENT_TYPE = SOURCE_TYPE_NAME = 'ecs.fargate'
@@ -90,7 +88,7 @@ def check(self, instance):
if label in label_whitelist or label not in LABEL_BLACKLIST:
container_tags[c_id].append(label + ':' + value)
- if container['Limits']['CPU'] > 0:
+ if container.get('Limits', {}).get('CPU', 0) > 0:
self.gauge('ecs.fargate.cpu.limit', container['Limits']['CPU'], container_tags[c_id])
try:
@@ -121,25 +119,47 @@ def check(self, instance):
self.log.warning(msg, exc_info=True)
for container_id, container_stats in stats.iteritems():
- # CPU metrics
tags = container_tags[container_id]
- self.rate('ecs.fargate.cpu.system', container_stats['cpu_stats']['system_cpu_usage'], tags)
- self.rate('ecs.fargate.cpu.user', container_stats['cpu_stats']['cpu_usage']['total_usage'], tags)
+
+ # CPU metrics
+ cpu_stats = container_stats.get('cpu_stats', {})
+
+ value = cpu_stats.get('system_cpu_usage')
+ if value is not None:
+ self.rate('ecs.fargate.cpu.system', value, tags)
+
+ value = cpu_stats.get('cpu_usage', {}).get('total_usage')
+ if value is not None:
+ self.rate('ecs.fargate.cpu.user', value, tags)
+
# Memory metrics
+ memory_stats = container_stats.get('memory_stats', {})
+
for metric in MEMORY_GAUGE_METRICS:
- value = container_stats['memory_stats']['stats'][metric]
- if value < CGROUP_NO_VALUE:
+ value = memory_stats.get('stats', {}).get(metric)
+ if value is not None and value < CGROUP_NO_VALUE:
self.gauge('ecs.fargate.mem.' + metric, value, tags)
for metric in MEMORY_RATE_METRICS:
- value = container_stats['memory_stats']['stats'][metric]
- self.rate('ecs.fargate.mem.' + metric, value, tags)
- self.gauge('ecs.fargate.mem.max_usage', container_stats['memory_stats']['max_usage'], tags)
- self.gauge('ecs.fargate.mem.usage', container_stats['memory_stats']['usage'], tags)
- self.gauge('ecs.fargate.mem.limit', container_stats['memory_stats']['limit'], tags)
+ value = memory_stats.get('stats', {}).get(metric)
+ if value is not None:
+ self.rate('ecs.fargate.mem.' + metric, value, tags)
+
+ value = memory_stats.get('max_usage')
+ if value is not None:
+ self.gauge('ecs.fargate.mem.max_usage', value, tags)
+
+ value = memory_stats.get('usage')
+ if value is not None:
+ self.gauge('ecs.fargate.mem.usage', value, tags)
+
+ value = memory_stats.get('limit')
+ if value is not None:
+ self.gauge('ecs.fargate.mem.limit', value, tags)
+
# I/O metrics
- for blkio_cat, metric_name in IO_METRICS.iteritems():
+ for blkio_cat, metric_name in iteritems(IO_METRICS):
read_counter = write_counter = 0
- for blkio_stat in container_stats["blkio_stats"][blkio_cat]:
+ for blkio_stat in container_stats.get("blkio_stats", {}).get(blkio_cat, []):
if blkio_stat["op"] == "Read" and "value" in blkio_stat:
read_counter += blkio_stat["value"]
elif blkio_stat["op"] == "Write" and "value" in blkio_stat:
|
[nginx] Incorrectly reporting vts stats and nginxplus as gauge instead of rate
The agent check code always uses gauge to report any number found in the vts/nginxplus JSON.
Those numbers are in reality ever-increasing total counters, which makes the gauge useless in Datadog.
Some keys (for example the requests/sec) are correctly hard-coded as a rate metric.
A recent commit also added a count variant for some keys, but this makes no sense as they are running totals and not per-interval counter values.
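For context, a sketch of the submission mapping this implies (mirroring the one-line patch below); `build_submitters` is just an illustrative wrapper around an `AgentCheck` instance:
```python
def build_submitters(check):
    # Running totals from the vts/plus JSON must be submitted as monotonic
    # counts so Datadog derives the per-interval delta, instead of gauging
    # or counting the raw ever-increasing value.
    return {
        'gauge': check.gauge,            # point-in-time values
        'rate': check.rate,              # hard-coded per-second keys, e.g. requests/sec
        'count': check.monotonic_count,  # running totals
    }
```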
| diff --git a/nginx/datadog_checks/nginx/nginx.py b/nginx/datadog_checks/nginx/nginx.py
--- a/nginx/datadog_checks/nginx/nginx.py
+++ b/nginx/datadog_checks/nginx/nginx.py
@@ -90,7 +90,7 @@ def check(self, instance):
funcs = {
'gauge': self.gauge,
'rate': self.rate,
- 'count': self.count
+ 'count': self.monotonic_count
}
conn = None
handled = None
|
query_string should not be dependent on status URL path
https://github.com/DataDog/integrations-core/blob/e74e94794e57ba3fa32214ddc78cb1d978faa230/php_fpm/datadog_checks/php_fpm/php_fpm.py#L200
This line sets the query_string only if the status URL is the default URL, which means any non-default status URL will fail because the agent will not receive the JSON output.
This line needs to have the if-statement removed and should always set query_string to "json". There is no reason to compare against the default path of "/status".
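A tiny sketch of the intended behaviour, using only the environment keys visible in the check (the helper itself is illustrative):
```python
def build_fastcgi_env(route, query='json'):
    # Always ask PHP-FPM for the JSON view of the status page, regardless of
    # which route the pool exposes it on (previously only '/status' got it).
    return {
        'CONTENT_LENGTH': '0',
        'QUERY_STRING': query,
        'REDIRECT_STATUS': '200',
        'REMOTE_ADDR': '127.0.0.1',
    }
```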
| diff --git a/php_fpm/datadog_checks/php_fpm/php_fpm.py b/php_fpm/datadog_checks/php_fpm/php_fpm.py
--- a/php_fpm/datadog_checks/php_fpm/php_fpm.py
+++ b/php_fpm/datadog_checks/php_fpm/php_fpm.py
@@ -107,7 +107,7 @@ def _process_status(self, status_url, auth, tags, http_host, timeout, disable_ss
data = {}
try:
if use_fastcgi:
- data = json.loads(self.request_fastcgi(status_url))
+ data = json.loads(self.request_fastcgi(status_url, query='json'))
else:
# TODO: adding the 'full' parameter gets you per-process detailed
# informations, which could be nice to parse and output as metrics
@@ -187,7 +187,7 @@ def _process_ping(
self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.OK, tags=sc_tags)
@classmethod
- def request_fastcgi(cls, url):
+ def request_fastcgi(cls, url, query=''):
parsed_url = urlparse(url)
hostname = parsed_url.hostname
@@ -197,14 +197,12 @@ def request_fastcgi(cls, url):
port = str(parsed_url.port or 9000)
route = parsed_url.path
- query_string = 'json' if route == '/status' else ''
-
env = {
'CONTENT_LENGTH': '0',
'CONTENT_TYPE': '',
'DOCUMENT_ROOT': '/',
'GATEWAY_INTERFACE': 'FastCGI/1.1',
- 'QUERY_STRING': query_string,
+ 'QUERY_STRING': query,
'REDIRECT_STATUS': '200',
'REMOTE_ADDR': '127.0.0.1',
'REMOTE_PORT': '80',
|
postfix integration causing log spam.
**Output of the [info page](https://docs.datadoghq.com/agent/faq/agent-commands/#agent-status-and-information)**
```text
Getting the status from the agent.
==============
Agent (v6.5.1)
==============
Status date: 2018-09-24 15:37:09.556012 UTC
Pid: 14192
Python Version: 2.7.15
Logs: /var/log/datadog/agent.log
Check Runners: 4
Log Level: info
Paths
=====
Config File: /etc/datadog-agent/datadog.yaml
conf.d: /etc/datadog-agent/conf.d
checks.d: /etc/datadog-agent/checks.d
Clocks
======
NTP offset: -1.852ms
System UTC time: 2018-09-24 15:37:09.556012 UTC
Host Info
=========
bootTime: 2016-08-22 00:07:51.000000 UTC
kernelVersion: 2.6.32-642.3.1.el6.x86_64
os: linux
platform: centos
platformFamily: rhel
platformVersion: 6.8
procs: 169
uptime: 18204h39m45s
virtualizationRole: guest
virtualizationSystem: xen
Hostnames
=========
ec2-hostname: ***omitted***
hostname: ***omitted***
instance-id: ***omitted***
socket-fqdn: ***omitted***
socket-hostname: ***omitted***
hostname provider: os
unused hostname providers:
aws: not retrieving hostname from AWS: the host is not an ECS instance, and other providers already retrieve non-default hostnames
configuration/environment: hostname is empty
gce: unable to retrieve hostname from GCE: status code 404 trying to GET http://169.254.169.254/computeMetadata/v1/instance/hostname
=========
Collector
=========
Running Checks
==============
cpu
---
Instance ID: cpu [OK]
Total Runs: 29,478
Metric Samples: 6, Total: 176,862
Events: 0, Total: 0
Service Checks: 0, Total: 0
Average Execution Time : 0s
disk (1.3.0)
------------
Instance ID: disk:e5dffb8bef24336f [OK]
Total Runs: 29,478
Metric Samples: 32, Total: 943,296
Events: 0, Total: 0
Service Checks: 0, Total: 0
Average Execution Time : 16ms
file_handle
-----------
Instance ID: file_handle [OK]
Total Runs: 29,478
Metric Samples: 5, Total: 147,390
Events: 0, Total: 0
Service Checks: 0, Total: 0
Average Execution Time : 0s
haproxy (1.3.1)
---------------
Instance ID: haproxy:eb810271aef06ba8 [OK]
Total Runs: 29,479
Metric Samples: 1,204, Total: 1 M
Events: 0, Total: 0
Service Checks: 81, Total: 1 M
Average Execution Time : 33ms
io
--
Instance ID: io [OK]
Total Runs: 29,478
Metric Samples: 52, Total: 1 M
Events: 0, Total: 0
Service Checks: 0, Total: 0
Average Execution Time : 3ms
load
----
Instance ID: load [OK]
Total Runs: 29,478
Metric Samples: 6, Total: 176,868
Events: 0, Total: 0
Service Checks: 0, Total: 0
Average Execution Time : 0s
memory
------
Instance ID: memory [OK]
Total Runs: 29,478
Metric Samples: 17, Total: 501,126
Events: 0, Total: 0
Service Checks: 0, Total: 0
Average Execution Time : 0s
network (1.6.1)
---------------
Instance ID: network:2a218184ebe03606 [OK]
Total Runs: 29,478
Metric Samples: 18, Total: 530,604
Events: 0, Total: 0
Service Checks: 0, Total: 0
Average Execution Time : 1ms
ntp
---
Instance ID: ntp:b4579e02d1981c12 [OK]
Total Runs: 29,478
Metric Samples: 1, Total: 29,478
Events: 0, Total: 0
Service Checks: 1, Total: 29,478
Average Execution Time : 6ms
postfix (1.2.2)
---------------
Instance ID: postfix:17d6dacf688138c9 [OK]
Total Runs: 29,478
Metric Samples: 3, Total: 88,434
Events: 0, Total: 0
Service Checks: 0, Total: 0
Average Execution Time : 35ms
uptime
------
Instance ID: uptime [OK]
Total Runs: 29,478
Metric Samples: 1, Total: 29,478
Events: 0, Total: 0
Service Checks: 0, Total: 0
Average Execution Time : 0s
========
JMXFetch
========
Initialized checks
==================
no checks
Failed checks
=============
no checks
=========
Forwarder
=========
CheckRunsV1: 29,478
Dropped: 0
DroppedOnInput: 0
Errors: 0
Events: 0
HostMetadata: 0
IntakeV1: 2,241
Metadata: 0
Requeued: 0
Retried: 0
RetryQueueSize: 0
Series: 0
ServiceChecks: 0
SketchSeries: 0
Success: 61,197
TimeseriesV1: 29,478
API Keys status
===============
API key ending in ***omitted*** for endpoint https://app.datadoghq.com: API Key valid
==========
Logs Agent
==========
Logs Agent is not running
=========
DogStatsD
=========
Checks Metric Sample: 40.2 M
Event: 1
Events Flushed: 1
Number Of Flushes: 29,478
Series Flushed: 39.6 M
Service Check: 2.7 M
Service Checks Flushed: 2.7 M
```
**Additional environment details (Operating System, Cloud provider, etc):**
centos6 and centos7, though probably more widely applicable.
**Steps to reproduce the issue:**
1. configure the postfix integration, not using the postqueue method.
**Describe the results you received:**
Every time the postfix check is run, the output from `sudo -l` is sent to the agent's stdout.
On centos6, the output ends up in /var/log/datadog/errors.log, based on the upstart file.
On centos7, the output ends up in the systemd journal, and from there in syslog.
**Describe the results you expected:**
The check should not send the output of `sudo -l` to stdout.
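A minimal sketch of a quieter approach, assuming the agent's `get_subprocess_output` helper (the import path can differ between agent versions):
```python
from datadog_checks.utils.subprocess_output import get_subprocess_output

def can_run_sudo(log):
    """Check whether the agent user may run sudo, without leaking output to stdout."""
    # Unlike os.system('setsid sudo -l < /dev/null'), the helper captures
    # stdout/stderr, so nothing ends up in the collector logs or the journal.
    _, _, exit_code = get_subprocess_output(['sudo', '-l'], log, False)
    return exit_code == 0
```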
**Additional information you deem important (e.g. issue happens only occasionally):**
| diff --git a/postfix/datadog_checks/postfix/postfix.py b/postfix/datadog_checks/postfix/postfix.py
--- a/postfix/datadog_checks/postfix/postfix.py
+++ b/postfix/datadog_checks/postfix/postfix.py
@@ -176,8 +176,9 @@ def _get_queue_count(self, directory, queues, tags):
count = sum(len(files) for root, dirs, files in os.walk(queue_path))
else:
# can dd-agent user run sudo?
- test_sudo = os.system('setsid sudo -l < /dev/null')
- if test_sudo == 0:
+ test_sudo = ['sudo', '-l']
+ _, _, exit_code = get_subprocess_output(test_sudo, self.log, False)
+ if exit_code == 0:
# default to `root` for backward compatibility
postfix_user = self.init_config.get('postfix_user', 'root')
cmd = ['sudo', '-u', postfix_user, 'find', queue_path, '-type', 'f']
|
[redis] KeyError in commandstats when `cmdstat_host` is present.
I am getting a `KeyError` when `command_stats` is enabled. This occurs only on some of the redis instances within my cluster.
I've traced back the error to [this line of code](https://github.com/DataDog/integrations-core/blob/master/redisdb/check.py#L364). Specifically, the `KeyError` is thrown on `stats['calls']`.
Here's the relevant `redis-cli` output:
```
$ redis-cli -h broken-host info commandstats
# Commandstats
cmdstat_lpush:calls=4,usec=56,usec_per_call=14.00
cmdstat_lpop:calls=2,usec=39,usec_per_call=19.50
cmdstat_select:calls=1,usec=2,usec_per_call=2.00
cmdstat_ping:calls=17123,usec=11024,usec_per_call=0.64
cmdstat_flushall:calls=1,usec=32,usec_per_call=32.00
cmdstat_info:calls=41051,usec=2933839,usec_per_call=71.47
cmdstat_config:calls=3,usec=47,usec_per_call=15.67
cmdstat_cluster:calls=42,usec=9417,usec_per_call=224.21
cmdstat_client:calls=40,usec=414,usec_per_call=10.35
cmdstat_slowlog:calls=3,usec=8,usec_per_call=2.67
cmdstat_host::calls=2,usec=145,usec_per_call=72.50
$ redis-cli -h working-host info commandstats
# Commandstats
cmdstat_lpush:calls=4,usec=53,usec_per_call=13.25
cmdstat_lpop:calls=4,usec=33,usec_per_call=8.25
cmdstat_ping:calls=8321,usec=8159,usec_per_call=0.98
cmdstat_psync:calls=1,usec=692,usec_per_call=692.00
cmdstat_replconf:calls=88066,usec=127795,usec_per_call=1.45
cmdstat_flushall:calls=1,usec=30,usec_per_call=30.00
cmdstat_info:calls=33406,usec=2437039,usec_per_call=72.95
cmdstat_config:calls=68,usec=1022,usec_per_call=15.03
cmdstat_cluster:calls=42,usec=8872,usec_per_call=211.24
cmdstat_client:calls=40,usec=416,usec_per_call=10.40
cmdstat_slowlog:calls=68,usec=201,usec_per_call=2.96
cmdstat_command:calls=5,usec=2232,usec_per_call=446.40
```
As you can see, the difference is in the `cmdstat_host::calls=2,usec=145,usec_per_call=72.50` line, wherein there are _two colons_ after the `cmdstat_host` identifier. This was confirmed by modifying the redis check to print out the `stats` dictionary before the error was thrown:
```
stats = {'usec_per_call': 72.5, 'usec': 145, ':calls': 2}
```
There are [virtually no google results for cmdstat_host](https://www.google.com/search?q=redis+cmdstat_host&oq=redis+cmdstat_host&aqs=chrome..69i57j69i59j69i64.769j0j4&sourceid=chrome&ie=UTF-8), but the [only result I could find](http://pingredis.blogspot.com/2017/04/redis-info-command-key-metrics-to.html) suggests that two colons following `cmdstat_host` may be normal.
I'm happy to submit a mini-PR to fix this issue if need be; mostly wanted to get a DataDog maintainer's perspective on how you want it to be done.
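A small sketch of a tolerant lookup, based on the parsed dictionary shown above (the helper is hypothetical):
```python
def get_command_calls(command, stats):
    """Return the calls counter, tolerating the double colon in cmdstat_host::."""
    if command == 'host':
        # The extra colon makes the parsed field name ':calls' instead of 'calls'.
        return stats.get(':calls', stats.get('calls'))
    return stats['calls']
```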
| diff --git a/redisdb/datadog_checks/redisdb/redisdb.py b/redisdb/datadog_checks/redisdb/redisdb.py
--- a/redisdb/datadog_checks/redisdb/redisdb.py
+++ b/redisdb/datadog_checks/redisdb/redisdb.py
@@ -408,8 +408,13 @@ def _check_command_stats(self, conn, tags):
for key, stats in command_stats.iteritems():
command = key.split('_', 1)[1]
- command_tags = tags + ['command:%s' % command]
- self.gauge('redis.command.calls', stats['calls'], tags=command_tags)
+ command_tags = tags + ['command:{}'.format(command)]
+
+ # When `host:` is passed as a command, `calls` ends up having a leading `:`
+ # see https://github.com/DataDog/integrations-core/issues/839
+ calls = stats.get('calls') if command != 'host' else stats.get(':calls')
+
+ self.gauge('redis.command.calls', calls, tags=command_tags)
self.gauge('redis.command.usec_per_call', stats['usec_per_call'], tags=command_tags)
def check(self, instance):
|
Additional metrics for Elasticsearch
Currently, the Elasticsearch in-flight requests circuit breaker metrics are not captured. Can we add the following metrics?
- breakers.in_flight_requests.tripped
- breakers.in_flight_requests.overhead
- breakers.in_flight_requests.estimated_size_in_bytes
| diff --git a/elastic/datadog_checks/elastic/elastic.py b/elastic/datadog_checks/elastic/elastic.py
--- a/elastic/datadog_checks/elastic/elastic.py
+++ b/elastic/datadog_checks/elastic/elastic.py
@@ -358,12 +358,16 @@ class ESCheck(AgentCheck):
"elasticsearch.thread_pool.force_merge.rejected": ("rate", "thread_pool.force_merge.rejected"),
}
- ADDITIONAL_METRICS_5_x = { # Stats are only valid for v5.x
+ ADDITIONAL_METRICS_5_x = {
"elasticsearch.fs.total.disk_io_op": ("rate", "fs.io_stats.total.operations"),
"elasticsearch.fs.total.disk_reads": ("rate", "fs.io_stats.total.read_operations"),
"elasticsearch.fs.total.disk_writes": ("rate", "fs.io_stats.total.write_operations"),
"elasticsearch.fs.total.disk_read_size_in_bytes": ("gauge", "fs.io_stats.total.read_kilobytes"),
"elasticsearch.fs.total.disk_write_size_in_bytes": ("gauge", "fs.io_stats.total.write_kilobytes"),
+ "elasticsearch.breakers.inflight_requests.tripped": ("gauge", "breakers.inflight_requests.tripped"),
+ "elasticsearch.breakers.inflight_requests.overhead": ("gauge", "breakers.inflight_requests.overhead"),
+ "elasticsearch.breakers.inflight_requests.estimated_size_in_bytes":
+ ("gauge", "breakers.inflight_requests.estimated_size_in_bytes"),
}
ADDITIONAL_METRICS_PRE_6_3 = {
|
[elastic] delayed_unassigned_shards should be included in CLUSTER_HEALTH_METRICS
The current version of the elastic integration does not collect the delayed_unassigned_shards metric from _cluster/health. The delayed_unassigned_shards metric is useful for tracking how shards are handled after a node leaves the cluster. This page describes delayed allocation: https://www.elastic.co/guide/en/elasticsearch/reference/current/delayed-allocation.html
| diff --git a/elastic/datadog_checks/elastic/elastic.py b/elastic/datadog_checks/elastic/elastic.py
--- a/elastic/datadog_checks/elastic/elastic.py
+++ b/elastic/datadog_checks/elastic/elastic.py
@@ -395,6 +395,10 @@ class ESCheck(AgentCheck):
"elasticsearch.cluster_status": ("gauge", "status", lambda v: {"red": 0, "yellow": 1, "green": 2}.get(v, -1)),
}
+ CLUSTER_HEALTH_METRICS_POST_2_4 = {
+ "elasticsearch.delayed_unassigned_shards": ("gauge", "delayed_unassigned_shards"),
+ }
+
CLUSTER_PENDING_TASKS = {
"elasticsearch.pending_tasks_total": ("gauge", "pending_task_total"),
"elasticsearch.pending_tasks_priority_high": ("gauge", "pending_tasks_priority_high"),
@@ -510,7 +514,7 @@ def check(self, instance):
# Load the health data.
health_url = self._join_url(config.url, health_url, admin_forwarder)
health_data = self._get_data(health_url, config)
- self._process_health_data(health_data, config)
+ self._process_health_data(health_data, config, version)
if config.pending_task_stats:
# Load the pending_tasks data.
@@ -822,7 +826,7 @@ def _process_metric(self, data, metric, xtype, path, xform=None,
else:
self._metric_not_found(metric, path)
- def _process_health_data(self, data, config):
+ def _process_health_data(self, data, config, version):
cluster_status = data.get('status')
if not self.cluster_status.get(config.url):
self.cluster_status[config.url] = cluster_status
@@ -835,7 +839,11 @@ def _process_health_data(self, data, config):
event = self._create_event(cluster_status, tags=config.tags)
self.event(event)
- for metric, desc in self.CLUSTER_HEALTH_METRICS.iteritems():
+ cluster_health_metrics = self.CLUSTER_HEALTH_METRICS
+ if version >= [2, 4, 0]:
+ cluster_health_metrics.update(self.CLUSTER_HEALTH_METRICS_POST_2_4)
+
+ for metric, desc in cluster_health_metrics.iteritems():
self._process_metric(data, metric, *desc, tags=config.tags)
# Process the service check
|
Feature request: monitoring resource requests only for pods in the running state
We're running datadog 6.4.2 in several GKE clusters, along with kube-state-metrics to help monitor the state of the cluster.
`kube_state_metrics.*` metrics are flowing through to datadog as expected, and we've got some useful dashboards going.
One thing we were hoping to set up was a monitor that warns us when a cluster is approaching capacity, particularly when the total CPU or memory requests are approaching the total size of the cluster and we're at risk of new pods being unschedulable.
For now, we're monitoring whether the sum of `kubernetes_state.container.cpu_requested` is close to the sum of `kubernetes_state.node.cpu_capacity`. That gives us a rough approximation, but isn't quite right because `kubernetes_state.container.cpu_requested` includes the CPU requests for pods that are in the Completed (and presumably evicted) state.
That makes it possible for our monitoring to say the cluster is approaching 100% capacity, even when there's enough room for pods to be scheduled.
Here's a chart that shows the two metrics we're tracking. This test cluster has a total of 6 vCPUs, and at the rightmost edge of the chart it suggests total pod CPU requests are 4.85 CPUs when they're actually ~4.1 (we had 3 Job pods in a completed state, each requesting 0.25 vCPUs).

I posted a comment about this on the kube-state-metrics repo (https://github.com/kubernetes/kube-state-metrics/issues/458#issuecomment-436511652), and their preference is for this to be solved on the client side rather than exporting a new metric.
Is the datadog kube_state_metrics check able to do joins like they suggest? If so, are you open to adding a version of `kubernetes_state.container.cpu_requested` that only includes running pods?
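To illustrate the kind of join being asked about, here is a toy sketch (not the integration's code; the sample values are made up) that borrows the `phase` label from `kube_pod_status_phase` and attaches it to CPU-request samples sharing the same `pod` label:
```python
# Metadata samples: kube_pod_status_phase is 1 only for the active phase.
phase_samples = [
    {"labels": {"pod": "web-1", "phase": "Running"}, "value": 1},
    {"labels": {"pod": "job-1", "phase": "Succeeded"}, "value": 1},
]
request_samples = [
    {"labels": {"pod": "web-1"}, "value": 0.25},
    {"labels": {"pod": "job-1"}, "value": 0.25},
]

# Join: map pod -> phase, then enrich the request samples with that label.
pod_phase = {s["labels"]["pod"]: s["labels"]["phase"]
             for s in phase_samples if s["value"] == 1}
for s in request_samples:
    s["labels"]["phase"] = pod_phase.get(s["labels"]["pod"], "unknown")

# Only running pods count toward requested capacity.
running_cpu = sum(s["value"] for s in request_samples
                  if s["labels"]["phase"] == "Running")
print(running_cpu)  # 0.25
```
With a `phase` tag available, a capacity monitor could sum requests filtered to running pods only.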
| diff --git a/datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py b/datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py
--- a/datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py
+++ b/datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py
@@ -295,23 +295,31 @@ def process(self, scraper_config, metric_transformers=None):
self.process_metric(metric, scraper_config, metric_transformers=metric_transformers)
def _store_labels(self, metric, scraper_config):
- scraper_config['label_joins']
# If targeted metric, store labels
if metric.name in scraper_config['label_joins']:
matching_label = scraper_config['label_joins'][metric.name]['label_to_match']
for sample in metric.samples:
- labels_list = []
+ # metadata-only metrics that are used for label joins are always equal to 1
+ # this is required for metrics where all combinations of a state are sent
+ # but only the active one is set to 1 (others are set to 0)
+ # example: kube_pod_status_phase in kube-state-metrics
+ if sample[self.SAMPLE_VALUE] != 1:
+ continue
+ label_dict = dict()
matching_value = None
for label_name, label_value in iteritems(sample[self.SAMPLE_LABELS]):
if label_name == matching_label:
matching_value = label_value
elif label_name in scraper_config['label_joins'][metric.name]['labels_to_get']:
- labels_list.append((label_name, label_value))
+ label_dict[label_name] = label_value
try:
- scraper_config['_label_mapping'][matching_label][matching_value] = labels_list
+ if scraper_config['_label_mapping'][matching_label].get(matching_value):
+ scraper_config['_label_mapping'][matching_label][matching_value].update(label_dict)
+ else:
+ scraper_config['_label_mapping'][matching_label][matching_value] = label_dict
except KeyError:
if matching_value is not None:
- scraper_config['_label_mapping'][matching_label] = {matching_value: labels_list}
+ scraper_config['_label_mapping'][matching_label] = {matching_value: label_dict}
def _join_labels(self, metric, scraper_config):
# Filter metric to see if we can enrich with joined labels
@@ -325,10 +333,11 @@ def _join_labels(self, metric, scraper_config):
scraper_config['_active_label_mapping'][label_name][sample[self.SAMPLE_LABELS][label_name]] = True
# If mapping found add corresponding labels
try:
- for label_tuple in (
- scraper_config['_label_mapping'][label_name][sample[self.SAMPLE_LABELS][label_name]]
+ for name, val in (
+ iteritems(scraper_config['_label_mapping'][label_name][sample[self.SAMPLE_LABELS]
+ [label_name]])
):
- sample[self.SAMPLE_LABELS][label_tuple[0]] = label_tuple[1]
+ sample[self.SAMPLE_LABELS][name] = val
except KeyError:
pass
diff --git a/kubernetes_state/datadog_checks/kubernetes_state/kubernetes_state.py b/kubernetes_state/datadog_checks/kubernetes_state/kubernetes_state.py
--- a/kubernetes_state/datadog_checks/kubernetes_state/kubernetes_state.py
+++ b/kubernetes_state/datadog_checks/kubernetes_state/kubernetes_state.py
@@ -248,6 +248,10 @@ def _create_kubernetes_state_prometheus_instance(self, instance):
'label_to_match': 'pod',
'labels_to_get': ['node']
},
+ 'kube_pod_status_phase': {
+ 'label_to_match': 'pod',
+ 'labels_to_get': ['phase']
+ },
'kube_persistentvolume_info': {
'label_to_match': 'persistentvolume',
'labels_to_get': ['storageclass']
|
vault: Allow custom CA bundle
FR: Allow setting a custom CA bundle for the `vault` integration, similar to the `http_check`
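For context, a rough sketch of how the requested options would map onto the underlying `requests` call (the hostname and file paths are placeholders, not integration settings):
```python
import requests

# verify can point at a custom CA bundle (the requested ssl_ca_cert option);
# cert can carry a client certificate and key if needed.
requests.get(
    "https://vault.example.com:8200/v1/sys/health",
    verify="/path/to/ca-bundle.pem",
    cert=("/path/to/client.crt", "/path/to/client.key"),
    timeout=10,
)
```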
| diff --git a/vault/datadog_checks/vault/vault.py b/vault/datadog_checks/vault/vault.py
--- a/vault/datadog_checks/vault/vault.py
+++ b/vault/datadog_checks/vault/vault.py
@@ -5,6 +5,7 @@
from time import time as timestamp
import requests
+from six import string_types
from urllib3.exceptions import InsecureRequestWarning
from datadog_checks.checks import AgentCheck
@@ -126,7 +127,21 @@ def get_config(self, instance):
password = instance.get('password')
config['auth'] = (username, password) if username and password else None
- config['ssl_verify'] = is_affirmative(instance.get('ssl_verify', True))
+ ssl_cert = instance.get('ssl_cert')
+ ssl_private_key = instance.get('ssl_private_key')
+ if isinstance(ssl_cert, string_types):
+ if isinstance(ssl_private_key, string_types):
+ config['ssl_cert'] = (ssl_cert, ssl_private_key)
+ else:
+ config['ssl_cert'] = ssl_cert
+ else:
+ config['ssl_cert'] = None
+
+ if isinstance(instance.get('ssl_ca_cert'), string_types):
+ config['ssl_verify'] = instance['ssl_ca_cert']
+ else:
+ config['ssl_verify'] = is_affirmative(instance.get('ssl_verify', True))
+
config['ssl_ignore_warning'] = is_affirmative(instance.get('ssl_ignore_warning', False))
config['proxies'] = self.get_instance_proxy(instance, config['api_url'])
config['timeout'] = int(instance.get('timeout', 20))
@@ -149,6 +164,7 @@ def access_api(self, url, config, tags):
response = requests.get(
url,
auth=config['auth'],
+ cert=config['ssl_cert'],
verify=config['ssl_verify'],
proxies=config['proxies'],
timeout=config['timeout'],
|
[couch] error "local variable 'db_stats' referenced before assignment"
I just started using Datadog and have an issue getting the couch integration to run (on macOS Sierra).
`/usr/local/bin/datadog-agent info` reports this:
````
Checks
======
ntp
---
- Collected 0 metrics, 0 events & 1 service check
disk
----
- instance #0 [OK]
- Collected 44 metrics, 0 events & 1 service check
network
-------
- instance #0 [OK]
- Collected 27 metrics, 0 events & 1 service check
couch
-----
- instance #0 [ERROR]: "local variable 'db_stats' referenced before assignment"
- Collected 0 metrics, 0 events & 2 service checks
Emitters
========
- http_emitter [OK]
===================
Dogstatsd (v 5.8.0)
===================
Status date: 2017-02-22 17:11:34 (8s ago)
Pid: 85989
Platform: Darwin-16.4.0-x86_64-i386-64bit
Python Version: 2.7.11, 64bit
````
To me, `local variable 'db_stats' referenced before assignment` looks like an error in the couchdb integration library.
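A likely trigger is a database name containing characters that change the URL path. Here is a small sketch of the difference quoting makes (Python 3 shown, while the check itself uses Python 2's `urllib.quote`; the database name is made up):
```python
from urllib.parse import quote, urljoin

server = "http://localhost:5984/"
db_name = "customer/orders"  # hypothetical name with a path separator

print(urljoin(server, db_name))                  # http://localhost:5984/customer/orders
print(urljoin(server, quote(db_name, safe="")))  # http://localhost:5984/customer%2Forders
```
With the unquoted URL the stats request fails, so `db_stats` is never assigned and the later reference raises the error above.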
| diff --git a/couch/check.py b/couch/check.py
--- a/couch/check.py
+++ b/couch/check.py
@@ -4,6 +4,7 @@
# stdlib
from urlparse import urljoin
+from urllib import quote
# 3rd party
import requests
@@ -119,7 +120,7 @@ def get_data(self, server, instance):
databases = list(databases)[:self.MAX_DB]
for dbName in databases:
- url = urljoin(server, dbName)
+ url = urljoin(server, quote(dbName, safe = ''))
try:
db_stats = self._get_stats(url, instance)
except requests.exceptions.HTTPError as e:
|
MongoDB integration should run the `top` command on the admin database.
When the agent check is configured against a database other than `admin` in order to gather collection stats, the `top` additional metric no longer works:
```
dd.collector[29969]: WARNING (mongo.py:997): Failed to record `top` metrics top may only be run against the admin database.
```
That's because the `top` command can only be run against the admin database: https://docs.mongodb.com/manual/reference/command/top/#dbcmd.top
The issue comes from this line: https://github.com/DataDog/integrations-core/blob/master/mongo/datadog_checks/mongo/mongo.py#L962
That should probably be changed to:
```
dbtop = admindb.command('top')
```
since the `admindb` variable already points to the admin database precisely so that commands restricted to it can be run, as stated at https://github.com/DataDog/integrations-core/blob/master/mongo/datadog_checks/mongo/mongo.py#L734
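For reference, a minimal pymongo sketch of the suggested behaviour (the connection string is a placeholder): `top` is issued against `admin`, and entries that are not `db.collection` namespaces are skipped, just like in the check:
```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection
dbtop = client["admin"].command("top")  # must run against the admin database
for ns, ns_metrics in dbtop["totals"].items():
    if "." not in ns:  # skip non-namespace entries such as the "note" field
        continue
    print(ns, ns_metrics["total"])
```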
| diff --git a/mongo/datadog_checks/mongo/mongo.py b/mongo/datadog_checks/mongo/mongo.py
--- a/mongo/datadog_checks/mongo/mongo.py
+++ b/mongo/datadog_checks/mongo/mongo.py
@@ -959,7 +959,7 @@ def total_seconds(td):
# Report the usage metrics for dbs/collections
if 'top' in additional_metrics:
try:
- dbtop = db.command('top')
+ dbtop = admindb.command('top')
for ns, ns_metrics in iteritems(dbtop['totals']):
if "." not in ns:
continue
|
New Etcd preview integration doesn't fully work with default kubernetes etcd versions
With `use_preview: true` enabled, the new v3-specific etcd integration determines the leader using `/v3beta` paths:
https://github.com/DataDog/integrations-core/blob/4400118a16668dd9fc4b1b98e8601ff9cdf74767/etcd/datadog_checks/etcd/etcd.py#L142
Per https://github.com/etcd-io/etcd/blob/master/Documentation/dev-guide/api_grpc_gateway.md#notes , `/v3beta` was only added in etcd 3.3+, while the default etcd version even as of Kubernetes 1.13 is 3.2.24 (https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#external-dependencies). After configuring the agent with `use_preview` against an etcd version < 3.3, the leader cannot be determined from any of the Datadog metrics.
The new disk, latency, and Go metrics are really important for understanding the behavior of etcd. Unfortunately, using the new integration means giving up all the metrics of the old check (a more difficult upgrade path), and it doesn't work on the etcd versions commonly used to run Kubernetes.
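For illustration, a rough sketch of the leader probe against the older gateway prefix (it assumes a local etcd 3.2.x listening on 2379 without TLS; the prefix is `/v3alpha` on 3.2, `/v3beta` on 3.3, and `/v3` from 3.4 on):
```python
import requests

# The gRPC gateway expects a POST with a JSON body, even for status reads.
resp = requests.post("http://localhost:2379/v3alpha/maintenance/status", data="{}", timeout=5)
status = resp.json()
print(status.get("leader"), status.get("header", {}).get("member_id"))
```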
| diff --git a/etcd/datadog_checks/etcd/etcd.py b/etcd/datadog_checks/etcd/etcd.py
--- a/etcd/datadog_checks/etcd/etcd.py
+++ b/etcd/datadog_checks/etcd/etcd.py
@@ -99,7 +99,7 @@ def check(self, instance):
self.check_post_v3(instance)
else:
self.warning(
- 'In Agent 6.10 this check will only support ETCD v3+. If you '
+ 'In Agent 6.11 this check will only support ETCD v3+. If you '
'wish to preview the new version, set `use_preview` to `true`.'
)
self.check_pre_v3(instance)
@@ -139,7 +139,7 @@ def access_api(self, scraper_config, path, data='{}'):
def is_leader(self, scraper_config):
# Modify endpoint as etcd stabilizes
# https://github.com/etcd-io/etcd/blob/master/Documentation/dev-guide/api_grpc_gateway.md#notes
- response = self.access_api(scraper_config, '/v3beta/maintenance/status')
+ response = self.access_api(scraper_config, '/v3alpha/maintenance/status')
leader = response.get('leader')
member = response.get('header', {}).get('member_id')
|
[postgres] Improve config reading errors
I had this `postgres.yaml`:
```
init_config:
instances:
- host: pepepe
...
custom_metrics:
- query: SELECT %s FROM pg_locks WHERE granted = false;
metrics:
count(distinct pid): [postgresql.connections_locked]
descriptors: []
relation: false
```
with a few other hosts and custom metrics. When deploying this I got the following error:
```
2017-02-13 15:33:14 UTC | ERROR | dd.collector | checks.postgres(__init__.py:762) | Check 'postgres' instance #0 failed
Traceback (most recent call last):
File "/opt/datadog-agent/agent/checks/__init__.py", line 745, in run
self.check(copy.deepcopy(instance))
File "/opt/datadog-agent/agent/checks.d/postgres.py", line 606, in check
custom_metrics = self._get_custom_metrics(instance.get('custom_metrics', []), key)
File "/opt/datadog-agent/agent/checks.d/postgres.py", line 576, in _get_custom_metrics
for ref, (_, mtype) in m['metrics'].iteritems():
ValueError: need more than 1 value to unpack
```
This was caused by a missing metric type in the yaml above, i.e. it should have been `[postgresql.connections_locked, GAUGE]`.
Because the error message is unclear and also doesn't point to the offending metric (remember I have other hosts and custom metrics), it took me a couple of hours to figure out the cause of this error.
Please consider improving the error messages around config reading.
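As a sketch of the kind of validation being asked for (not the integration's code; the names mirror the yaml above), the check could fail early and name the offending definition instead of raising an opaque unpacking error:
```python
def validate_custom_metric(m):
    for ref, value in m["metrics"].items():
        if not isinstance(value, (list, tuple)) or len(value) != 2:
            raise ValueError(
                "Custom metric %r in query %r must be [name, method], "
                "e.g. [postgresql.connections_locked, GAUGE]" % (ref, m.get("query"))
            )

try:
    validate_custom_metric({
        "query": "SELECT %s FROM pg_locks WHERE granted = false;",
        "metrics": {"count(distinct pid)": ["postgresql.connections_locked"]},
    })
except ValueError as e:
    print(e)  # points straight at the misconfigured metric
```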
| diff --git a/postgres/check.py b/postgres/check.py
--- a/postgres/check.py
+++ b/postgres/check.py
@@ -651,14 +651,17 @@ def _get_custom_metrics(self, custom_metrics, key):
self.log.debug("Metric: {0}".format(m))
- for ref, (_, mtype) in m['metrics'].iteritems():
- cap_mtype = mtype.upper()
- if cap_mtype not in ('RATE', 'GAUGE', 'MONOTONIC'):
- raise CheckException("Collector method {0} is not known."
- " Known methods are RATE, GAUGE, MONOTONIC".format(cap_mtype))
-
- m['metrics'][ref][1] = getattr(PostgreSql, cap_mtype)
- self.log.debug("Method: %s" % (str(mtype)))
+ try:
+ for ref, (_, mtype) in m['metrics'].iteritems():
+ cap_mtype = mtype.upper()
+ if cap_mtype not in ('RATE', 'GAUGE', 'MONOTONIC'):
+ raise CheckException("Collector method {0} is not known."
+ " Known methods are RATE, GAUGE, MONOTONIC".format(cap_mtype))
+
+ m['metrics'][ref][1] = getattr(PostgreSql, cap_mtype)
+ self.log.debug("Method: %s" % (str(mtype)))
+ except Exception as e:
+ raise CheckException("Error processing custom metric '{}': {}".format(m, e))
self.custom_metrics[key] = custom_metrics
return custom_metrics
|
Misleading gearman.workers metric
The gearmand check uses `get_status()` to compute the number of workers: it takes the worker count for each task and adds them all together. The problem is that a single worker can be registered for multiple tasks, so unless all your workers are dedicated to a single task the result will be much higher than reality.
To get the correct number of workers it should use the `get_workers()` function, possibly discarding any entry with no tasks.
Rough solution:
```python
workers_list = client.get_workers()
workers = len([w for w in workers_list if w['tasks']])
self.gauge("gearman.workers", workers, tags=tags)
```
| diff --git a/gearmand/datadog_checks/gearmand/gearmand.py b/gearmand/datadog_checks/gearmand/gearmand.py
--- a/gearmand/datadog_checks/gearmand/gearmand.py
+++ b/gearmand/datadog_checks/gearmand/gearmand.py
@@ -32,15 +32,13 @@ def _get_client(self, host, port):
return self.gearman_clients[(host, port)]
- def _get_aggregate_metrics(self, tasks, tags):
+ def _get_aggregate_metrics(self, tasks, workers, tags):
running = 0
queued = 0
- workers = 0
for stat in tasks:
running += stat['running']
queued += stat['queued']
- workers += stat['workers']
unique_tasks = len(tasks)
@@ -108,7 +106,8 @@ def check(self, instance):
try:
tasks = client.get_status()
- self._get_aggregate_metrics(tasks, tags)
+ workers = len([w for w in client.get_workers() if w['tasks']])
+ self._get_aggregate_metrics(tasks, workers, tags)
self._get_per_task_metrics(tasks, task_filter, tags)
self.service_check(
self.SERVICE_CHECK_NAME,
|
[vault] Fix sealed nodes management
Vault `/sys/health` API endpoint returns a HTTP 503 code for sealed nodes.
The `access_api` method of the Vault check handles this as an HTTPError and sets `SERVICE_CHECK_CONNECT` to `critical`, even though the API is perfectly reachable. It also prevents the other checks (`SERVICE_CHECK_UNSEALED` and `SERVICE_CHECK_INITIALIZED`) from being set to proper values, resulting in wrong values for all three service checks.
The check shouldn't treat [Vault-specific HTTP codes](https://www.vaultproject.io/api/system/health.html#read-health-information) as errors.
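For reference, a quick sketch of what the health endpoint reports for the documented states (the address is a placeholder; the codes are taken from the Vault docs linked above):
```python
import requests

states = {200: "initialized, unsealed, active", 429: "unsealed, standby",
          501: "not initialized", 503: "sealed"}
resp = requests.get("http://127.0.0.1:8200/v1/sys/health", timeout=5)
print(resp.status_code, states.get(resp.status_code, "unexpected"))
```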
| diff --git a/vault/datadog_checks/vault/vault.py b/vault/datadog_checks/vault/vault.py
--- a/vault/datadog_checks/vault/vault.py
+++ b/vault/datadog_checks/vault/vault.py
@@ -30,6 +30,17 @@ class Vault(AgentCheck):
'ssl_ignore_warning': {'name': 'tls_ignore_warning'},
}
+ # Expected HTTP Error codes for /sys/health endpoint
+ # https://www.vaultproject.io/api/system/health.html
+ SYS_HEALTH_DEFAULT_CODES = {
+ 200: "initialized, unsealed, and active",
+ 429: "unsealed and standby",
+ 472: "data recovery mode replication secondary and active",
+ 473: "performance standby",
+ 501: "not initialized",
+ 503: "sealed",
+ }
+
def __init__(self, name, init_config, instances):
super(Vault, self).__init__(name, init_config, instances)
self.api_versions = {
@@ -57,8 +68,9 @@ def check(self, instance):
self.service_check(self.SERVICE_CHECK_CONNECT, AgentCheck.OK, tags=tags)
def check_leader_v1(self, config, tags):
- url = config['api_url'] + '/sys/leader'
- leader_data = self.access_api(url, tags)
+ path = '/sys/leader'
+ url = config['api_url']
+ leader_data = self.access_api(url, path, tags)
is_leader = is_affirmative(leader_data.get('is_self'))
tags.append('is_leader:{}'.format('true' if is_leader else 'false'))
@@ -84,9 +96,9 @@ def check_leader_v1(self, config, tags):
config['leader'] = current_leader
def check_health_v1(self, config, tags):
- url = config['api_url'] + '/sys/health'
- health_params = {'standbyok': True, 'perfstandbyok': True}
- health_data = self.access_api(url, tags, params=health_params)
+ path = '/sys/health'
+ url = config['api_url']
+ health_data = self.access_api(url, path, tags)
cluster_name = health_data.get('cluster_name')
if cluster_name:
@@ -141,28 +153,34 @@ def get_config(self, instance):
return config
- def access_api(self, url, tags, params=None):
+ def access_api(self, url, path, tags, params=None):
try:
- response = self.http.get(url, params=params)
- response.raise_for_status()
+ full_url = url + path
+ response = self.http.get(full_url, params=params)
json_data = response.json()
+ response.raise_for_status()
except requests.exceptions.HTTPError:
- msg = 'The Vault endpoint `{}` returned {}.'.format(url, response.status_code)
- self.service_check(self.SERVICE_CHECK_CONNECT, AgentCheck.CRITICAL, message=msg, tags=tags)
- self.log.exception(msg)
- raise ApiUnreachable
+ rsc = response.status_code
+ msg = 'The Vault endpoint `{}` returned {}'.format(full_url, rsc)
+ if path.endswith("/sys/health") and rsc in self.SYS_HEALTH_DEFAULT_CODES:
+ # Ignores expected HTTPError status codes for `/sys/health` endpoint.
+ self.log.debug('{} - node is {}.'.format(msg, self.SYS_HEALTH_DEFAULT_CODES[rsc]))
+ else:
+ self.service_check(self.SERVICE_CHECK_CONNECT, AgentCheck.CRITICAL, message=msg, tags=tags)
+ self.log.exception(msg)
+ raise ApiUnreachable
except JSONDecodeError:
- msg = 'The Vault endpoint `{}` returned invalid json data.'.format(url)
+ msg = 'The Vault endpoint `{}` returned invalid json data.'.format(full_url)
self.service_check(self.SERVICE_CHECK_CONNECT, AgentCheck.CRITICAL, message=msg, tags=tags)
self.log.exception(msg)
raise ApiUnreachable
except requests.exceptions.Timeout:
- msg = 'Vault endpoint `{}` timed out after {} seconds'.format(url, self.http.options['timeout'])
+ msg = 'Vault endpoint `{}` timed out after {} seconds'.format(full_url, self.http.options['timeout'])
self.service_check(self.SERVICE_CHECK_CONNECT, AgentCheck.CRITICAL, message=msg, tags=tags)
self.log.exception(msg)
raise ApiUnreachable
except (requests.exceptions.RequestException, requests.exceptions.ConnectionError):
- msg = 'Error accessing Vault endpoint `{}`'.format(url)
+ msg = 'Error accessing Vault endpoint `{}`'.format(full_url)
self.service_check(self.SERVICE_CHECK_CONNECT, AgentCheck.CRITICAL, message=msg, tags=tags)
self.log.exception(msg)
raise ApiUnreachable
|
Datadog's kubelet integration partially broken with kube 1.16.x
**Note:** If you have a feature request, you should [contact support](https://docs.datadoghq.com/help/) so the request can be properly tracked.
https://github.com/DataDog/integrations-core/blob/master/kubelet/datadog_checks/kubelet/prometheus.py#L18 (note "container_name" and "pod_name")
https://github.com/DataDog/integrations-core/blob/master/kubelet/datadog_checks/kubelet/prometheus.py#L111 (note "container_name")
https://github.com/kubernetes/kubernetes/pull/80376/files
The `pod_name` and `container_name` labels on kubelet stats have been deprecated in favor of `pod` and `container`, respectively.
Verification of kubelet output:
```
root@dd-prod-datadog-4jrsn:/# curl -ks -H "Authorization: Bearer `cat /var/run/secrets/kubernetes.io/serviceaccount/token`" https://172.27.188.233:10250/metrics/cadvisor | grep container_memory_usage_bytes | grep container | head
# HELP container_memory_usage_bytes Current memory usage in bytes, including all memory regardless of when it was accessed
# TYPE container_memory_usage_bytes gauge
container_memory_usage_bytes{container="",id="/",image="",name="",namespace="",pod=""} 1.1965345792e+10 1572383868784
container_memory_usage_bytes{container="",id="/kubepods",image="",name="",namespace="",pod=""} 1.844432896e+09 1572383868791
container_memory_usage_bytes{container="",id="/kubepods/besteffort",image="",name="",namespace="",pod=""} 6.0973056e+07 1572383860208
container_memory_usage_bytes{container="",id="/kubepods/besteffort/pod19434bfe-7c58-497d-9216-29cbec0ab81e",image="",name="",namespace="k8s-goldpinger",pod="goldpinger-qhrl9"} 2.5808896e+07 1572383867597
container_memory_usage_bytes{container="",id="/kubepods/besteffort/pod91d589137af585dfa17062eec4a40d0a",image="",name="",namespace="kube-system",pod="pod-checkpointer-5vp77-ip-172-27-188-233.us-west-2.compute.internal"} 7.770112e+06 1572383868022
container_memory_usage_bytes{container="",id="/kubepods/besteffort/podc8f0dcef-df4c-459c-af1b-ac5a932ef701",image="",name="",namespace="kube-system",pod="pod-checkpointer-5vp77"} 1.777664e+07 1572383864882
container_memory_usage_bytes{container="",id="/kubepods/besteffort/podd758c5bb-a42c-4737-bbeb-3f9c8a996463",image="",name="",namespace="k8s-startup-script",pod="startup-script-hpm5q"} 3.895296e+06 1572383866673
container_memory_usage_bytes{container="",id="/kubepods/besteffort/pode0fc390a-d571-4f0e-bd28-9cc72e96a448",image="",name="",namespace="k8s-sysdig",pod="sysdig-4zc4d"} 1.26976e+06 1572383858071
```
This breaks many of the `kubernetes.memory`, `kubernetes.cpu`, and `kubernetes.network` metrics.
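A toy sketch of the dual-label handling the check needs (not the integration's code): accept both the pre-1.16 and post-1.16 label names.
```python
def container_and_pod(labels):
    # Prefer the 1.16+ names, fall back to the deprecated ones.
    return (
        labels.get("container") or labels.get("container_name"),
        labels.get("pod") or labels.get("pod_name"),
    )

print(container_and_pod({"container": "goldpinger", "pod": "goldpinger-qhrl9"}))
print(container_and_pod({"container_name": "goldpinger", "pod_name": "goldpinger-qhrl9"}))
```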
**Additional environment details (Operating System, Cloud provider, etc):**
Kubernetes 1.16.x, dd agent 6.12-6.15.0-rc.8 inclusive tested
**Steps to reproduce the issue:**
1. deploy agent to 1.16.x kubernetes cluster
2. attempt to look up something like kubernetes.memory.usage_pct for any given container
3. notice that there are no stats
**Describe the results you received:**
No stats.
**Describe the results you expected:**
Stats.
**Additional information you deem important (e.g. issue happens only occasionally):**
| diff --git a/kubelet/datadog_checks/kubelet/kubelet.py b/kubelet/datadog_checks/kubelet/kubelet.py
--- a/kubelet/datadog_checks/kubelet/kubelet.py
+++ b/kubelet/datadog_checks/kubelet/kubelet.py
@@ -132,7 +132,7 @@ def __init__(self, name, init_config, agentConfig, instances=None):
self.cadvisor_scraper_config = self.get_scraper_config(cadvisor_instance)
# Filter out system slices (empty pod name) to reduce memory footprint
- self.cadvisor_scraper_config['_text_filter_blacklist'] = ['pod_name=""']
+ self.cadvisor_scraper_config['_text_filter_blacklist'] = ['pod_name=""', 'pod=""']
self.kubelet_scraper_config = self.get_scraper_config(kubelet_instance)
diff --git a/kubelet/datadog_checks/kubelet/prometheus.py b/kubelet/datadog_checks/kubelet/prometheus.py
--- a/kubelet/datadog_checks/kubelet/prometheus.py
+++ b/kubelet/datadog_checks/kubelet/prometheus.py
@@ -15,7 +15,8 @@
METRIC_TYPES = ['counter', 'gauge', 'summary']
# container-specific metrics should have all these labels
-CONTAINER_LABELS = ['container_name', 'namespace', 'pod_name', 'name', 'image', 'id']
+PRE_1_16_CONTAINER_LABELS = set(['namespace', 'name', 'image', 'id', 'container_name', 'pod_name'])
+POST_1_16_CONTAINER_LABELS = set(['namespace', 'name', 'image', 'id', 'container', 'pod'])
class CadvisorPrometheusScraperMixin(object):
@@ -107,13 +108,15 @@ def _is_container_metric(labels):
:param metric:
:return: bool
"""
- for lbl in CONTAINER_LABELS:
- if lbl == 'container_name':
- if lbl in labels:
- if labels[lbl] == '' or labels[lbl] == 'POD':
- return False
- if lbl not in labels:
+ label_set = set(labels)
+ if POST_1_16_CONTAINER_LABELS.issubset(label_set):
+ if labels.get('container') in ['', 'POD']:
return False
+ elif PRE_1_16_CONTAINER_LABELS.issubset(label_set):
+ if labels.get('container_name') in ['', 'POD']:
+ return False
+ else:
+ return False
return True
@staticmethod
@@ -125,20 +128,22 @@ def _is_pod_metric(labels):
:param metric
:return bool
"""
- if 'container_name' in labels:
- if labels['container_name'] == 'POD':
- return True
- # containerd does not report container_name="POD"
- elif labels['container_name'] == '' and labels.get('pod_name', False):
- return True
+ # k8s >= 1.16
+ # docker reports container==POD (first case), containerd does not (second case)
+ if labels.get('container') == 'POD' or (labels.get('container') == '' and labels.get('pod', False)):
+ return True
+ # k8s < 1.16 && > 1.8
+ if labels.get('container_name') == 'POD' or (
+ labels.get('container_name') == '' and labels.get('pod_name', False)
+ ):
+ return True
+ # k8s < 1.8
# container_cpu_usage_seconds_total has an id label that is a cgroup path
# eg: /kubepods/burstable/pod531c80d9-9fc4-11e7-ba8b-42010af002bb
# FIXME: this was needed because of a bug:
# https://github.com/kubernetes/kubernetes/pull/51473
- # starting from k8s 1.8 we can remove this
- if 'id' in labels:
- if labels['id'].split('/')[-1].startswith('pod'):
- return True
+ if labels.get('id', '').split('/')[-1].startswith('pod'):
+ return True
return False
@staticmethod
@@ -161,8 +166,14 @@ def _get_container_id(self, labels):
:return str or None
"""
namespace = CadvisorPrometheusScraperMixin._get_container_label(labels, "namespace")
- pod_name = CadvisorPrometheusScraperMixin._get_container_label(labels, "pod_name")
- container_name = CadvisorPrometheusScraperMixin._get_container_label(labels, "container_name")
+ # k8s >= 1.16
+ pod_name = CadvisorPrometheusScraperMixin._get_container_label(labels, "pod")
+ container_name = CadvisorPrometheusScraperMixin._get_container_label(labels, "container")
+ # k8s < 1.16
+ if not pod_name:
+ pod_name = CadvisorPrometheusScraperMixin._get_container_label(labels, "pod_name")
+ if not container_name:
+ container_name = CadvisorPrometheusScraperMixin._get_container_label(labels, "container_name")
return self.pod_list_utils.get_cid_by_name_tuple((namespace, pod_name, container_name))
def _get_entity_id_if_container_metric(self, labels):
@@ -175,7 +186,7 @@ def _get_entity_id_if_container_metric(self, labels):
"""
if CadvisorPrometheusScraperMixin._is_container_metric(labels):
pod = self._get_pod_by_metric_label(labels)
- if is_static_pending_pod(pod):
+ if pod is not None and is_static_pending_pod(pod):
# If the pod is static, ContainerStatus is unavailable.
# Return the pod UID so that we can collect metrics from it later on.
return self._get_pod_uid(labels)
@@ -188,7 +199,10 @@ def _get_pod_uid(self, labels):
:return: str or None
"""
namespace = CadvisorPrometheusScraperMixin._get_container_label(labels, "namespace")
- pod_name = CadvisorPrometheusScraperMixin._get_container_label(labels, "pod_name")
+ pod_name = CadvisorPrometheusScraperMixin._get_container_label(labels, "pod")
+ # k8s < 1.16
+ if not pod_name:
+ pod_name = CadvisorPrometheusScraperMixin._get_container_label(labels, "pod_name")
return self.pod_list_utils.get_uid_by_name_tuple((namespace, pod_name))
def _get_pod_uid_if_pod_metric(self, labels):
@@ -232,7 +246,10 @@ def _get_kube_container_name(labels):
:param labels: metric labels: iterable
:return: list
"""
- container_name = CadvisorPrometheusScraperMixin._get_container_label(labels, "container_name")
+ container_name = CadvisorPrometheusScraperMixin._get_container_label(labels, "container")
+ # k8s < 1.16
+ if not container_name:
+ container_name = CadvisorPrometheusScraperMixin._get_container_label(labels, "container_name")
if container_name:
return ["kube_container_name:%s" % container_name]
return []
|
MySQL Replication Lag doesn't tag by channel name
MySQL 5.7 introduces "replication channels", which means a slave can replicate from multiple masters ("sources") at any given time.
To measure replication lag across `n` channels, it'd be great if DataDog could tag the `mysql.replication.seconds_behind_master` metric with `channel:<CHANNEL_NAME>`. That way users could get an average across the channels, or have monitors alert when one of the channels falls behind.
Currently, the check fetches only the _first_ replication channel (https://github.com/DataDog/integrations-core/blob/master/mysql/check.py#L855)
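For illustration, a rough pymysql sketch of per-channel lag tagging (the connection settings are placeholders): on 5.7+, `SHOW SLAVE STATUS` returns one row per channel.
```python
import pymysql
import pymysql.cursors

conn = pymysql.connect(host="localhost", user="datadog", password="<password>",
                       cursorclass=pymysql.cursors.DictCursor)
with conn.cursor() as cursor:
    cursor.execute("SHOW SLAVE STATUS;")
    for row in cursor.fetchall():
        channel = row.get("Channel_Name") or "default"  # empty on single-channel setups
        print("channel:%s" % channel, row.get("Seconds_Behind_Master"))
```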
| diff --git a/mysql/check.py b/mysql/check.py
--- a/mysql/check.py
+++ b/mysql/check.py
@@ -543,12 +543,12 @@ def _collect_metrics(self, host, db, tags, options, queries):
# MySQL 5.7.x might not have 'Slave_running'. See: https://bugs.mysql.com/bug.php?id=78544
# look at replica vars collected at the top of if-block
if self._version_compatible(db, host, (5, 7, 0)):
- slave_io_running = self._collect_string('Slave_IO_Running', results)
- slave_sql_running = self._collect_string('Slave_SQL_Running', results)
+ slave_io_running = self._collect_type('Slave_IO_Running', results, dict)
+ slave_sql_running = self._collect_type('Slave_SQL_Running', results, dict)
if slave_io_running:
- slave_io_running = (slave_io_running.lower().strip() == "yes")
+ slave_io_running = any(v.lower().strip() == 'yes' for v in slave_io_running.itervalues())
if slave_sql_running:
- slave_sql_running = (slave_sql_running.lower().strip() == "yes")
+ slave_sql_running = any(v.lower().strip() == 'yes' for v in slave_sql_running.itervalues())
if not (slave_io_running is None and slave_sql_running is None):
if slave_io_running and slave_sql_running:
@@ -851,10 +851,20 @@ def _get_replica_stats(self, db):
try:
with closing(db.cursor(pymysql.cursors.DictCursor)) as cursor:
replica_results = {}
+
cursor.execute("SHOW SLAVE STATUS;")
- slave_results = cursor.fetchone()
- if slave_results:
- replica_results.update(slave_results)
+ slave_results = cursor.fetchall()
+ if len(slave_results) > 0:
+ for slave_result in slave_results:
+ # MySQL <5.7 does not have Channel_Name.
+ # For MySQL >=5.7 'Channel_Name' is set to an empty string by default
+ channel = slave_result.get('Channel_Name') or 'default'
+ for key in slave_result:
+ if slave_result[key] is not None:
+ if key not in replica_results:
+ replica_results[key] = {}
+ replica_results[key]["channel:{0}".format(channel)] = slave_result[key]
+
cursor.execute("SHOW MASTER STATUS;")
binlog_results = cursor.fetchone()
if binlog_results:
|
Agent 7.16.0 cannot parse non-semantic nginx version
After upgrading the Datadog agent to 7.16.0, the nginx integration stopped working.
Agent Log:
```
Dec 18 15:42:50 nginxproxy agent[29323]: 2019-12-18 15:42:50 EST | CORE | ERROR | (pkg/collector/python/datadog_agent.go:116 in LogMessage) | nginx:17d65b5a97c80644 | (core.py:45) | Unable to transform `version` metadata value `nginx`: Version does not adhere to semantic versioning
```
nginx -v:
`nginx version: nginx/1.13.9 (Ubuntu)`
Nothing changed in `nginx.d/conf.yaml` except status url:
`- nginx_status_url: http://localhost:8080/stub_status/`
datadog-agent version:
`Agent 7.16.0 - Commit: 3e13b77 - Serialization version: 4.15.0 - Go version: go1.12.9`
Worked fine with version `Agent 6.15.1-1`
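For illustration, a small sketch of the parsing gap (values mirror the report above): some payloads carry only the bare server token `nginx`, which is not a semantic version and should simply be skipped.
```python
def extract_version(raw):
    if not raw or raw == "nginx":
        return None  # nothing usable to submit as version metadata
    return raw.split("/")[1] if "/" in raw else raw

print(extract_version("nginx/1.13.9"))  # 1.13.9
print(extract_version("nginx"))         # None
```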
| diff --git a/nginx/datadog_checks/nginx/nginx.py b/nginx/datadog_checks/nginx/nginx.py
--- a/nginx/datadog_checks/nginx/nginx.py
+++ b/nginx/datadog_checks/nginx/nginx.py
@@ -215,7 +215,7 @@ def _get_plus_api_data(self, api_url, plus_api_version, endpoint, nest):
return payload
def _set_version_metadata(self, version):
- if version:
+ if version and version != 'nginx':
if '/' in version:
version = version.split('/')[1]
self.set_metadata('version', version)
|
Vulnerability in secondary dependency of datadog-checks-dev[cli]
The version pinning for Python packages in `datadog-checks-dev[cli]` is outdated and pulls in a PyYAML release affected by CVE-2017-18342.
**Steps to reproduce the issue:**
1. Create a Pipfile with this content:
```
[packages]
"datadog-checks-dev[cli]" = "*"
pyyaml = ">5.1"
```
2. Invoke the command
```
pipenv lock
```
**Describe the results you received:**
I get an incompatible-versions exception:
```
ERROR: Could not find a version that matches pyyaml<4,>5.1,>=3.10,>=5.1
Tried: 3.10, 3.10, 3.11, 3.11, 3.12, 3.12, [...] 5.3, 5.3
Skipped pre-versions: 3.13b1, 3.13b1, 3.13b1, [...] 5.3b1, 5.3b1, 5.3b1, 5.3b1, 5.3b1, 5.3b1, 5.3b1
There are incompatible versions in the resolved dependencies.
[pipenv.exceptions.ResolutionFailure]: File "/var/tmp/foo/lib/python3.6/site-packages/pipenv/utils.py", line 726, in resolve_deps
...
```
**Describe the results you expected:**
I should be able to install the datadog-checks-dev[cli] without version conflicts against current versions of its dependencies.
**Additional information you deem important (e.g. issue happens only occasionally):**
The problem lies in [setup.py](https://github.com/DataDog/integrations-core/blob/master/datadog_checks_dev/setup.py) where it specifies an outdated docker-compose version.
| diff --git a/datadog_checks_dev/setup.py b/datadog_checks_dev/setup.py
--- a/datadog_checks_dev/setup.py
+++ b/datadog_checks_dev/setup.py
@@ -26,7 +26,7 @@
'coverage==4.5.4', # pinned due to https://github.com/nedbat/coveragepy/issues/883
'mock',
'psutil',
- 'PyYAML>=5.1',
+ 'PyYAML>=5.3',
'pytest',
'pytest-benchmark>=3.2.1',
'pytest-cov>=2.6.1',
@@ -73,7 +73,7 @@
'atomicwrites',
'click',
'colorama',
- 'docker-compose>=1.23.1,<1.24.0',
+ 'docker-compose>=1.25',
'in-toto>=0.4.1',
'pip-tools',
'pylint',
|
WMI integration throws Exception: SWbemLocator Not enough storage is available to process this command
```text
===============
Agent (v7.16.0)
===============
Status date: 2020-02-05 15:56:45.740020 GMT
Agent start: 2020-02-05 15:03:08.601503 GMT
Pid: 25188
Go Version: go1.12.9
Python Version: 3.7.4
Build arch: amd64
Host Info
=========
bootTime: 2020-01-30 09:06:55.000000 GMT
os: windows
platform: Windows Server 2016 Datacenter
platformFamily: Windows Server 2016 Datacenter
platformVersion: 10.0 Build 14393
procs: 255
uptime: 149h56m12s
wmi_check (1.6.0)
```
**Steps to reproduce the issue:**
The WMI Check integration is configured to capture metrics for multiple instances of a specific process and tag them using the command line, as below
```yaml
- class: Win32_PerfFormattedData_PerfProc_Process
metrics:
- - ThreadCount
- proc.threads.count
- gauge
- - VirtualBytes
- proc.mem.virtual
- gauge
- - PrivateBytes
- proc.mem.private
- gauge
- - WorkingSet
- proc.mem.workingset
- gauge
- - PageFaultsPerSec
- proc.mem.page_faults_per_sec
- gauge
- - PercentProcessorTime
- proc.cpu_pct
- gauge
- - IOReadBytesPerSec
- proc.io.bytes_read
- gauge
- - IOWriteBytesPerSec
- proc.io.bytes_written
- gauge
filters:
- Name: Calastone.Core.MessageAdapter.Console%
tag_by: Name
tag_queries:
- [IDProcess, Win32_Process, Handle, CommandLine]
```
There are 17 instances of the process running.
**Describe the results you received:**
- After a period of time (can be 40+ minutes) the following error starts to be logged
```
2020-02-04 16:31:29 GMT | CORE | WARN | (pkg/collector/python/datadog_agent.go:118 in LogMessage) | wmi_check:a7174f61bd7a5360 | (sampler.py:469) | Failed to execute WMI query (Select CommandLine from Win32_Process WHERE ( Handle = '8408' ))
Traceback (most recent call last):
File "C:\Program Files\Datadog\Datadog Agent\embedded3\lib\site-packages\datadog_checks\base\checks\win\wmi\sampler.py", line 464, in _query
raw_results = self.get_connection().ExecQuery(wql, "WQL", query_flags)
File "C:\Program Files\Datadog\Datadog Agent\embedded3\lib\site-packages\datadog_checks\base\checks\win\wmi\sampler.py", line 351, in get_connection
connection = locator.ConnectServer(self.host, self.namespace, self.username, self.password, *additional_args)
File "<COMObject WbemScripting.SWbemLocator>", line 5, in ConnectServer
File "C:\Program Files\Datadog\Datadog Agent\embedded3\lib\site-packages\win32com\client\dynamic.py", line 287, in _ApplyTypes_
result = self._oleobj_.InvokeTypes(*(dispid, LCID, wFlags, retType, argTypes) + args)
pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, 'SWbemLocator', 'Not enough storage is available to process this command. ', None, 0, -2147024888), None)
2020-02-04 16:31:29 GMT | CORE | WARN | (pkg/collector/python/datadog_agent.go:118 in LogMessage) | wmi_check:a7174f61bd7a5360 | (__init__.py:88) | Failed to extract a tag from `tag_queries` parameter: no result was returned. wmi_object={'threadcount': 27.0, 'virtualbytes': 823386112.0, 'privatebytes': 304635904.0, 'workingset': 367628288.0, 'pagefaultspersec': 0.0, 'percentprocessortime': 0.0, 'ioreadbytespersec': 0.0, 'iowritebytespersec': 0.0, 'idprocess': 8408.0, 'name': 'Calastone.Core.MessageAdapter.Console#3'} - query=['IDProcess', 'Win32_Process', 'Handle', 'CommandLine']
2020-02-04 16:31:29 GMT | CORE | WARN | (pkg/collector/python/datadog_agent.go:118 in LogMessage) | wmi_check:a7174f61bd7a5360 | (sampler.py:469) | Failed to execute WMI query (Select CommandLine from Win32_Process WHERE ( Handle = '14836' ))
```
- The number of threads used by the agent process is observed to be rocketing (> 1700)
- The server becomes unresponsive
**Diagnosis:**
This issue didn't occur on the previous version of the agent we were using (6.7.0).
Looking at the source code suggests the problem was introduced as part of #3987
https://github.com/DataDog/integrations-core/blob/010ed622d62c9dd7de28d76f1191a4be5960a965/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py#L117 creates a `WMISampler` for every tag query that needs to be run. With the new logic, that spawns a thread for each query that is never released!
**Solution:**
The following hack fixes the problem. I'll put it into a PR.
Change `sampler.py`
```python
def _query_sample_loop(self):
...
while True:
self._runSampleEvent.wait()
if self._stopping:
return
def dispose(self):
"""
Dispose of the internal thread
"""
self._stopping = True
self._runSampleEvent.set()
```
Change `__init__.py`
```python
def _get_tag_query_tag(self, sampler, wmi_obj, tag_query):
...
tag = "{tag_name}:{tag_value}".format(tag_name=target_property.lower(), tag_value="_".join(link_value.split()))
tag_query_sampler.dispose()
```
There also looks to be scope to cache these `WMISampler` instances like the main metric samplers. The connection created in `get_connection` could also be created in the sampler thread method, since it is now bound to that thread.
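As a generic illustration of the dispose pattern sketched above (standard library only, not the sampler code itself): the worker loop waits on an event and exits once a stop flag is set, so the dedicated thread can be released instead of leaking.
```python
import threading

class Worker:
    def __init__(self):
        self._run = threading.Event()
        self._stopping = False
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while True:
            self._run.wait()
            if self._stopping:
                return
            self._run.clear()
            # ... take one sample here ...

    def dispose(self):
        self._stopping = True
        self._run.set()
        self._thread.join()

w = Worker()
w.dispose()  # the thread exits instead of lingering for the process lifetime
```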
| diff --git a/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py b/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py
--- a/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py
+++ b/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py
@@ -114,14 +114,15 @@ def _get_tag_query_tag(self, sampler, wmi_obj, tag_query):
target_class, target_property, filters = self._format_tag_query(sampler, wmi_obj, tag_query)
# Create a specific sampler
- tag_query_sampler = WMISampler(self.log, target_class, [target_property], filters=filters, **sampler.connection)
+ with WMISampler(
+ self.log, target_class, [target_property], filters=filters, **sampler.connection
+ ) as tag_query_sampler:
+ tag_query_sampler.sample()
- tag_query_sampler.sample()
+ # Extract tag
+ self._raise_on_invalid_tag_query_result(tag_query_sampler, wmi_obj, tag_query)
- # Extract tag
- self._raise_on_invalid_tag_query_result(tag_query_sampler, wmi_obj, tag_query)
-
- link_value = str(tag_query_sampler[0][target_property]).lower()
+ link_value = str(tag_query_sampler[0][target_property]).lower()
tag = "{tag_name}:{tag_value}".format(tag_name=target_property.lower(), tag_value="_".join(link_value.split()))
@@ -235,14 +236,17 @@ def _get_instance_key(self, host, namespace, wmi_class, other=None):
return "{host}:{namespace}:{wmi_class}".format(host=host, namespace=namespace, wmi_class=wmi_class)
- def _get_wmi_sampler(self, instance_key, wmi_class, properties, tag_by="", **kwargs):
+ def _get_running_wmi_sampler(self, instance_key, wmi_class, properties, tag_by="", **kwargs):
"""
- Create and cache a WMISampler for the given (class, properties)
+ Return a running WMISampler for the given (class, properties).
+
+ If no matching WMISampler is running yet, start one and cache it.
"""
properties = list(properties) + [tag_by] if tag_by else list(properties)
if instance_key not in self.wmi_samplers:
wmi_sampler = WMISampler(self.log, wmi_class, properties, **kwargs)
+ wmi_sampler.start()
self.wmi_samplers[instance_key] = wmi_sampler
return self.wmi_samplers[instance_key]
diff --git a/datadog_checks_base/datadog_checks/base/checks/win/wmi/sampler.py b/datadog_checks_base/datadog_checks/base/checks/win/wmi/sampler.py
--- a/datadog_checks_base/datadog_checks/base/checks/win/wmi/sampler.py
+++ b/datadog_checks_base/datadog_checks/base/checks/win/wmi/sampler.py
@@ -105,6 +105,7 @@ def __init__(
# Sampling state
self._sampling = False
+ self._stopping = False
self.logger = logger
@@ -146,12 +147,35 @@ def __init__(
self._runSampleEvent = Event()
self._sampleCompleteEvent = Event()
- thread = Thread(target=self._query_sample_loop, name=class_name)
- thread.daemon = True
+ def start(self):
+ """
+ Start internal thread for sampling
+ """
+ thread = Thread(target=self._query_sample_loop, name=self.class_name)
+ thread.daemon = True # Python 2 does not support daemon as Thread constructor parameter
thread.start()
+ def stop(self):
+ """
+ Dispose of the internal thread
+ """
+ self._stopping = True
+ self._runSampleEvent.set()
+ self._sampleCompleteEvent.wait()
+
+ def __enter__(self):
+ self.start()
+ return self
+
+ def __exit__(self, type, value, traceback):
+ self.stop()
+
def _query_sample_loop(self):
try:
+ # Initialize COM for the current (dedicated) thread
+ # WARNING: any python COM object (locator, connection, etc) created in a thread
+ # shouldn't be used in other threads (can lead to memory/handle leaks if done
+ # without a deep knowledge of COM's threading model).
pythoncom.CoInitialize()
except Exception as e:
self.logger.info("exception in CoInitialize: %s", e)
@@ -159,6 +183,11 @@ def _query_sample_loop(self):
while True:
self._runSampleEvent.wait()
+ if self._stopping:
+ self.logger.debug("_query_sample_loop stopping")
+ self._sampleCompleteEvent.set()
+ return
+
self._runSampleEvent.clear()
if self.is_raw_perf_class and not self._previous_sample:
self._current_sample = self._query()
@@ -335,11 +364,6 @@ def get_connection(self):
self.username,
)
- # Initialize COM for the current thread
- # WARNING: any python COM object (locator, connection, etc) created in a thread
- # shouldn't be used in other threads (can lead to memory/handle leaks if done
- # without a deep knowledge of COM's threading model). Because of this and given
- # that we run each query in its own thread, we don't cache connections
additional_args = []
if self.provider != ProviderArchitecture.DEFAULT:
diff --git a/win32_event_log/datadog_checks/win32_event_log/win32_event_log.py b/win32_event_log/datadog_checks/win32_event_log/win32_event_log.py
--- a/win32_event_log/datadog_checks/win32_event_log/win32_event_log.py
+++ b/win32_event_log/datadog_checks/win32_event_log/win32_event_log.py
@@ -115,7 +115,7 @@ def check(self, instance):
filters.append(query)
- wmi_sampler = self._get_wmi_sampler(
+ wmi_sampler = self._get_running_wmi_sampler(
instance_key,
self.EVENT_CLASS,
event_properties,
diff --git a/wmi_check/datadog_checks/wmi_check/wmi_check.py b/wmi_check/datadog_checks/wmi_check/wmi_check.py
--- a/wmi_check/datadog_checks/wmi_check/wmi_check.py
+++ b/wmi_check/datadog_checks/wmi_check/wmi_check.py
@@ -52,7 +52,7 @@ def check(self, instance):
metric_name_and_type_by_property, properties = self._get_wmi_properties(instance_key, metrics, tag_queries)
- wmi_sampler = self._get_wmi_sampler(
+ wmi_sampler = self._get_running_wmi_sampler(
instance_key,
wmi_class,
properties,
|