`ClusterGenerator` is a generator wrapper that makes a generator available cluster-wide. Its purpose is to spare users from redefining the same generator in every namespace: define it once in the cluster, then reference it from any consuming `ExternalSecret`.

## Limitations

- The generator will continue to create objects in the same namespace as the referencing `ExternalSecret` (ES) object. This behavior is subject to change in future updates.
- The objects referenced within the `ClusterGenerator` must also reside in the same namespace as the ES object that references them. This is due to the inherently namespace-scoped nature of the embedded generator types.

## Example Manifest

```yaml
{% include 'generator-cluster.yaml' %}
```

Example `ExternalSecret` that references the cluster generator:

```yaml
{% include 'generator-cluster-example.yaml' %}
```

Source: https://github.com/external-secrets/external-secrets/blob/main//docs/api/generator/cluster.md
The `VaultDynamicSecret` generator provides an interface to HashiCorp Vault's [secrets engines](https://developer.hashicorp.com/vault/docs/secrets). Specifically, it enables obtaining dynamic secrets not covered by the [HashiCorp Vault provider](../../provider/hashicorp-vault.md).

Any Vault authentication method supported by the provider can be used here (the `provider` block of the spec). All secrets engines should be supported by providing matching `path`, `method`, and `parameters` values in the generator spec (see the example below). The exact output keys and values depend on the Vault secrets engine used; nested values are stored in the resulting Secret in JSON format.

By default, the generator exposes the `data` section of the response from the Vault API. To adjust this behavior, use the `resultType` key.

## Example manifest

```yaml
{% include 'generator-vault.yaml' %}
```

Example `ExternalSecret` that references the Vault generator:

```yaml
{% include 'generator-vault-example.yaml' %}
```

Source: https://github.com/external-secrets/external-secrets/blob/main//docs/api/generator/vault.md
The UUID generator provides random UUIDs that you can feed into your applications. A UUID (Universally Unique Identifier) is a 128-bit label used to identify information in computer systems. Please see below for the format in use.

## Output Keys and Values

| Key  | Description        |
| ---- | ------------------ |
| uuid | the generated UUID |

## Parameters

The UUID generator does not require any additional parameters.

## Example Manifest

```yaml
{% include 'generator-uuid.yaml' %}
```

Example `ExternalSecret` that references the UUID generator:

```yaml
{% include 'generator-uuid-example.yaml' %}
```

Which will generate a `Kind=Secret` with a key called `uuid` that may look like:

```
EA111697-E7D0-452C-A24C-8E396947E865
```

With default values you would get something like:

```
4BEE258F-64C9-4755-92DC-AFF76451471B
```

Source: https://github.com/external-secrets/external-secrets/blob/main//docs/api/generator/uuid.md
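The sample outputs above are version-4 UUIDs rendered in uppercase. A minimal stdlib sketch producing a value of the same shape (the uppercase rendering is an assumption made here to match the samples, not a documented property of the generator):

```python
import uuid

def generate_uuid_value():
    """Generate a random (version 4) UUID, uppercased to match the sample output."""
    return str(uuid.uuid4()).upper()

value = generate_uuid_value()
# e.g. 'EA111697-E7D0-452C-A24C-8E396947E865' (random on each run)
```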
## GitHub App Authentication Documentation

### 1. Register a GitHub App

To create a GitHub App, follow the instructions provided by GitHub:

- **Visit**: [Registering a GitHub App](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app#registering-a-github-app)
- **Procedure**:
    - Fill in the necessary details for your app.
    - Note the `App ID` provided after registration.
    - At the bottom of the registration page, click `Generate a private key`. Download and securely store this key.

### 2. Store the Private Key

After generating your private key, you need to store it securely. If you are using Kubernetes, you can store it as a secret:

```bash
kubectl create secret generic github-app-pem --from-file=key=path/to/your/private-key.pem
```

### 3. Set Permissions for the GitHub App

Configure the necessary permissions for your GitHub App depending on what actions it needs to perform:

- **Visit**: [Choosing Permissions for a GitHub App](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/choosing-permissions-for-a-github-app#choosing-permissions-for-rest-api-access)
- **Example**:
    - For managing OCI images, set read and write permissions for packages.

### 4. Install Your GitHub App

Install the GitHub App on your repository or organization to start using it:

- **Visit**: [Installing Your Own GitHub App](https://docs.github.com/en/apps/using-github-apps/installing-your-own-github-app)

### 5. Obtain an Installation ID

After installation, you need the installation ID to authenticate API requests:

- **Visit**: [Generating an Installation Access Token for a GitHub App](https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/generating-an-installation-access-token-for-a-github-app#generating-an-installation-access-token)
- **Procedure**:
    - Find the installation ID in the URL or API response.

### Example Kubernetes Manifest for GitHub Access Token Generator

```yaml
{% include 'generator-github.yaml' %}
```

```yaml
{% include 'generator-github-example.yaml' %}
```

```yaml
{% include 'generator-github-example-basicauth.yaml' %}
```

### Notes

- Ensure that all sensitive data, such as private keys and IDs, is securely handled and stored.
- Adjust the permissions and configurations according to your specific requirements and security policies.
- GitHub tokens expire after 60 minutes by default and this is non-configurable; make sure you choose a `refreshInterval` below this number.

Source: https://github.com/external-secrets/external-secrets/blob/main//docs/api/generator/github.md
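For background on what happens with these inputs: GitHub App authentication starts from a short-lived JWT whose claims carry the App ID, signed with the private key (RS256) and then exchanged for an installation access token. A sketch of just the claims construction follows — signing and the HTTP exchange are omitted, the 10-minute JWT cap is GitHub's documented limit, and the App ID below is a placeholder:

```python
import time

def github_app_jwt_claims(app_id, now=None):
    """Claims for a GitHub App JWT (to be signed with the app's RS256 private key).

    GitHub caps the app JWT lifetime at 10 minutes; it is the *installation*
    access token obtained by exchanging this JWT that expires after 60 minutes.
    """
    iat = int(now if now is not None else time.time())
    return {
        "iat": iat - 60,      # backdate slightly to allow for clock drift
        "exp": iat + 9 * 60,  # stay under GitHub's 10-minute maximum
        "iss": app_id,        # the App ID noted at registration
    }

claims = github_app_jwt_claims("12345", now=1_700_000_000)
```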
`ECRAuthorizationTokenSpec` uses the GetAuthorizationToken API to retrieve an authorization token. The authorization token is valid for 12 hours. For more information, see [registry authentication](https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html#registry_auth) in the Amazon Elastic Container Registry User Guide.

## Output Keys and Values

| Key            | Description |
| -------------- | ----------- |
| username       | username for the `docker login` command. |
| password       | password for the `docker login` command. |
| proxy_endpoint | the registry URL to use for this authorization token in a `docker login` command. |
| expires_at     | time when the token expires in UNIX time (seconds since January 1, 1970 UTC). |

## Authentication

You can choose from three authentication mechanisms:

- static credentials using `spec.auth.secretRef`
- point to an IRSA service account with `spec.auth.jwt`
- use credentials from the [SDK default credentials chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default) in the controller environment

## Example Manifest

```yaml
{% include 'generator-ecr.yaml' %}
```

Example `ExternalSecret` that references the ECR generator:

```yaml
{% include 'generator-ecr-example.yaml' %}
```

Source: https://github.com/external-secrets/external-secrets/blob/main//docs/api/generator/ecr.md
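For background, GetAuthorizationToken returns a base64-encoded `<username>:<password>` pair, and for ECR the username is always `AWS`; the generator surfaces the decoded parts as `username` and `password`. A sketch of that decoding step, using a fabricated token for illustration:

```python
import base64

def split_authorization_token(token_b64):
    """Decode an ECR authorization token into (username, password).

    The token is base64("<username>:<password>"); for ECR the username is 'AWS'.
    """
    decoded = base64.b64decode(token_b64).decode("utf-8")
    username, _, password = decoded.partition(":")
    return username, password

# Fabricated example token -- a real one comes from the GetAuthorizationToken API.
example = base64.b64encode(b"AWS:eyJwYXlsb2FkIjoi...").decode("ascii")
username, password = split_authorization_token(example)
# username == 'AWS'
```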
The Password generator provides random passwords that you can feed into your applications. It uses lower- and uppercase alphanumeric characters as well as symbols. Please see below for the symbols in use.

!!! warning "Passwords are completely randomized"
    It is possible that we may generate passwords that don't match the expected character set of your application.

## Output Keys and Values

| Key      | Description |
| -------- | ----------- |
| password | the generated password. If `spec.secretKeys` is set, each listed key is populated with its own unique password |

## Parameters

You can influence the behavior of the generator by providing the following args:

| Key              | Default                              | Description |
| ---------------- | ------------------------------------ | ----------- |
| length           | 24                                   | Length of the password to be generated. |
| digits           | 25% of the length                    | Number of digits in the generated password. |
| symbols          | 25% of the length                    | Number of symbol characters in the generated password. |
| symbolCharacters | `` ~!@#$%^&*()_+`-={}\|[]\:"<>?,./ `` | Character set that should be used when generating the password. |
| noUpper          | false                                | Disable uppercase characters. |
| allowRepeat      | false                                | Allow repeating characters. |
| encoding         | raw                                  | Encoding format for the generated password. Valid values: `raw`, `base64`, `base64url`, `base32`, `hex`. |

## Example Manifest

```yaml
{% include 'generator-password.yaml' %}
```

Example `ExternalSecret` that references the Password generator:

```yaml
{% include 'generator-password-example.yaml' %}
```

Which will generate a `Kind=Secret` with a key called `password` that may look like:

```
RMngCHKtZ@@h@3aja$WZDuDVhkCkN48JBa9OF8jH$R
VB$pX8SSUMIlk9K8g@XxJAhGz$0$ktbJ1ArMukg-bD
Hi$-aK_3Rrrw1Pj9-sIpPZuk5abvEDJlabUYUcS$9L
```

With default values you would get something like:

```
2Cp=O*&8x6sdwM!<74G_gUz5
-MS`e#n24K|h5A<&6q9Yv7Cj
ZRv-k!y6x/V"29:43aErSf$1
Vk9*mwXE30Q+>H?lY$5I64_q
```

## Encoding Examples

The password generator supports different encoding formats for the output:

```yaml
{% include 'generator-password-encoding-examples.yaml' %}
```

### Encoding Output Examples

For the same password `Test>>Pass??word`, the different encodings would produce:

- **raw** (default): `Test>>Pass??word` (original password string)
- **base64**: `VGVzdD4+UGFzcz8/d29yZA==` (standard base64)
- **base64url**: `VGVzdD4-UGFzcz8_d29yZA==` (URL-safe base64)
- **base32**: `KRSXG5B6HZIGC43TH47XO33SMQ======` (base32 encoding)
- **hex**: `546573743e3e506173733f3f776f7264` (hexadecimal encoding)

Key differences between `base64` and `base64url`:

- **base64**: `VGVzdD4+UGFzcz8/d29yZA==` uses `+` and `/` in its alphabet
- **base64url**: `VGVzdD4-UGFzcz8_d29yZA==` uses `-` and `_` instead, making it URL-safe

Source: https://github.com/external-secrets/external-secrets/blob/main//docs/api/generator/password.md
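The encoding variants correspond to standard encodings of the raw password bytes. A stdlib sketch that reproduces them for the password `Test>>Pass??word` (an illustration, not the generator's actual implementation; note that Python's `urlsafe_b64encode` keeps the trailing `==` padding):

```python
import base64

password = b"Test>>Pass??word"

# Each value mirrors one of the generator's `encoding` options.
encodings = {
    "raw":       password.decode(),
    "base64":    base64.b64encode(password).decode(),
    "base64url": base64.urlsafe_b64encode(password).decode(),
    "base32":    base64.b32encode(password).decode(),
    "hex":       password.hex(),
}
```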
The Webhook generator is very similar to the Webhook `SecretStore` provider and provides a way to use external systems to generate sensitive information.

## Output Keys and Values

Webhook calls are expected to produce valid JSON objects. All keys within that JSON object will be exported as keys of the Kubernetes Secret.

## Example Manifest

```yaml
{% include 'generator-webhook.yaml' %}
```

Example `ExternalSecret` that references the Webhook generator using an internal `Secret`:

```yaml
{% include 'generator-webhook-example.yaml' %}
```

This will generate a Kubernetes secret with the following values:

```yaml
parameter: test
```

Source: https://github.com/external-secrets/external-secrets/blob/main//docs/api/generator/webhook.md
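Since every key of the returned JSON object becomes a key of the Secret, the mapping can be sketched as follows (a simplified illustration assuming flat string values, not the controller's actual code):

```python
import json

def webhook_response_to_secret_data(body):
    """Map a webhook JSON response to Secret key/value pairs.

    Nested or non-string values would need serialization; this sketch
    assumes a flat object of strings.
    """
    payload = json.loads(body)
    return {key: str(value) for key, value in payload.items()}

data = webhook_response_to_secret_data('{"parameter": "test"}')
# {'parameter': 'test'}
```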
`QuayAccessToken` creates a short-lived Quay access token that can be used to authenticate against quay.io or a self-hosted Quay instance in order to push or pull images. This requires a [Quay robot account configured to federate](https://docs.projectquay.io/manage_quay.html#setting-robot-federation) with a Kubernetes service account.

## Output Keys and Values

| Key      | Description |
| -------- | ----------- |
| registry | Domain name of the registry you are authenticating to (defaults to `quay.io`). |
| auth     | Base64-encoded authentication string. |
| expiry   | Time when the token expires in UNIX time (seconds since January 1, 1970 UTC). |

## Authentication

To configure robot account federation, your cluster must have a publicly available [OIDC service account issuer](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery) endpoint for Quay to validate tokens against.

You can determine the issuer and subject fields by creating and decoding a service account token for the service account you wish to federate with (this is the service account you will use in `spec.serviceAccountRef`). For example, if federating with the `default` service account in the `default` namespace:

Obtain the issuer:

```bash
kubectl create token default -n default | cut -d '.' -f 2 | sed 's/[^=]$/&==/' | base64 -d | jq -r '.iss'
```

Obtain the subject:

```bash
kubectl create token default -n default | cut -d '.' -f 2 | sed 's/[^=]$/&==/' | base64 -d | jq -r '.sub'
```

Then use the instructions [here](https://docs.projectquay.io/manage_quay.html#setting-robot-federation) to set up a robot account and federation.
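The shell pipelines above decode the payload segment of the service-account JWT. The same decoding, including the padding fix-up that the `sed` expression approximates, can be sketched in Python (the token below is fabricated for illustration; a real one comes from `kubectl create token`):

```python
import base64
import json

def jwt_claims(token):
    """Decode the payload segment of a JWT without verifying its signature."""
    payload_b64 = token.split(".")[1]
    # Base64url payloads are unpadded; restore padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fabricated token with the claims we care about (iss and sub).
payload = {"iss": "https://issuer.example", "sub": "system:serviceaccount:default:default"}
payload_b64 = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=").decode()
token = "header." + payload_b64 + ".signature"
claims = jwt_claims(token)
```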
## Example Manifest

```yaml
{% include 'generator-quay.yaml' %}
```

Example `ExternalSecret` that references the Quay generator:

```yaml
{% include 'generator-quay-example.yaml' %}
```

Source: https://github.com/external-secrets/external-secrets/blob/main//docs/api/generator/quay.md
`GCRAccessToken` creates a GCP access token that can be used to authenticate with GCR in order to pull OCI images. You won't need any extra permissions to request a token, but the token will only work against GCR if the token requester (service account or workload identity) has the appropriate access.

You must specify the `spec.projectID` in which GCR is located.

## Output Keys and Values

| Key      | Description |
| -------- | ----------- |
| username | username for the `docker login` command. |
| password | password for the `docker login` command. |
| expiry   | time when the token expires in UNIX time (seconds since January 1, 1970 UTC). |

## Authentication

### Workload Identity

Use `spec.auth.workloadIdentity` to point to a service account that has Workload Identity enabled. For details see [GCP Secret Manager](../../provider/google-secrets-manager.md#authentication).

### GCP Service Account

Use `spec.auth.secretRef` to point to a Secret that contains a GCP service account. For details see [GCP Secret Manager](../../provider/google-secrets-manager.md#authentication).

## Example Manifest

```yaml
{% include 'generator-gcr.yaml' %}
```

Example `ExternalSecret` that references the GCR generator:

```yaml
{% include 'generator-gcr-example.yaml' %}
```

Source: https://github.com/external-secrets/external-secrets/blob/main//docs/api/generator/gcr.md
`STSSessionToken` uses the GetSessionToken API to retrieve a temporary session token.

## Output Keys and Values

| Key               | Description |
| ----------------- | ----------- |
| access_key_id     | The access key ID that identifies the temporary security credentials. |
| secret_access_key | The secret access key that can be used to sign requests. |
| session_token     | The token that users must pass to the service API to use the temporary credentials. |
| expiration        | The date on which the current credentials expire. |

## Authentication

Only one authentication mechanism is available:

- static credentials using `spec.auth.secretRef`

_Note_: `STSSessionToken` uses the GetSessionToken API, which can _only_ be used with long-term credentials such as an access key ID and secret access key. Therefore, it is only usable with a `secretRef` for authentication.

## Request Parameters

The following request parameters can be provided:

- duration seconds: the TTL of the generated token
- serial number: the serial number of the MFA device used by the user
- token code: a code generated by the above-referenced MFA device

## Example Manifest

```yaml
{% include 'generator-sts.yaml' %}
```

Example `ExternalSecret` that references the STS Session Token generator:

```yaml
{% include 'generator-sts-example.yaml' %}
```

Source: https://github.com/external-secrets/external-secrets/blob/main//docs/api/generator/sts.md
# MFA Generator

This generator can create TOTP tokens compliant with [RFC 6238](https://datatracker.ietf.org/doc/html/rfc6238) (which builds on the HOTP algorithm of [RFC 4226](https://datatracker.ietf.org/doc/html/rfc4226)) given a seed secret. The seed secret is usually provided through a QR code; however, the provider will always also offer a text-based form of that QR code. That is the secret this generator uses to create tokens.

## Output Keys and Values

| Key      | Description                                      |
|----------|--------------------------------------------------|
| token    | the generated N-digit token                      |
| timeLeft | the time left until the token expires in seconds |

## Parameters

The following configuration options are available when generating a token:

| Key        | Default  | Description |
|------------|----------|-------------|
| length     | 6        | Digit length of the generated code. Some providers allow larger tokens. |
| timePeriod | 30       | Number of seconds the code is valid for. This is provider-specific; usually it's 30 seconds. |
| secret     | empty    | A secret ref pointing to the seed secret. |
| algorithm  | sha1     | Algorithm used for encoding. The RFC defines SHA-1, though a provider will sometimes use SHA-256 or SHA-512. |
| when       | time.Now | Pins the creation time of the token, making tokens reproducible. Mostly used for testing. |

## Example Manifest

```yaml
{% include 'generator-mfa.yaml' %}
```

This will generate an output like this:

```
token: 123456
timeLeft: 25
```

!!! warning "Usage of the token might fail on first try if it JUST expired"
    From requesting the token to actually using it, the token might already be out of date if `timeLeft` was very low to begin with. Therefore, code that uses this token should allow for retries with fresh tokens.

Source: https://github.com/external-secrets/external-secrets/blob/main//docs/api/generator/mfa.md
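The token computation is HOTP (RFC 4226) driven by a time-based counter (RFC 6238). A stdlib-only sketch of that computation, checked against the SHA-1 test vector from RFC 6238 Appendix B — this mirrors the algorithm, not the generator's actual source:

```python
import hashlib
import hmac
import struct

def totp(secret, unix_time, length=6, time_period=30, algorithm="sha1"):
    """Compute a TOTP code: HOTP (RFC 4226) with counter = unix_time // time_period."""
    counter = struct.pack(">Q", unix_time // time_period)
    digest = hmac.new(secret, counter, getattr(hashlib, algorithm)).digest()
    offset = digest[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** length).zfill(length)

# RFC 6238 Appendix B test vector: SHA-1 secret "12345678901234567890", T=59.
code = totp(b"12345678901234567890", 59, length=8)
# '94287082'
```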
The Azure Container Registry (ACR) generator creates a short-lived refresh or access token for accessing ACR. The token is generated for a particular ACR registry defined in `spec.registry`.

## Output Keys and Values

| Key      | Description |
| -------- | ----------- |
| username | username for the `docker login` command |
| password | password for the `docker login` command |

## Authentication

You must choose one of three authentication mechanisms:

- service principal
- managed identity
- workload identity

The generated token will inherit the permissions from the assigned policy, i.e. when you assign a read-only policy, all generated tokens will be read-only. You **must** [assign an Azure RBAC role](https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-steps), such as `AcrPush` or `AcrPull`, to the service principal or managed identity in order to authenticate with the Azure Container Registry API. You can also use a kubelet managed identity with the default `AcrPull` role to authenticate to the integrated Azure Container Registry.

You can scope tokens to a particular repository using `spec.scope`.

## Scope

First, a Microsoft Entra ID access token is obtained with the desired authentication method. This Microsoft Entra ID access token is then used to authenticate against ACR to issue a refresh token or access token. If `spec.scope` is defined, an ACR access token is obtained; if `spec.scope` is missing, an ACR refresh token is obtained instead:

- access tokens are scoped to a specific repository or action (pull, push)
- refresh tokens are scoped to whatever policy is attached to the identity that creates the refresh token

The scope grammar is defined in the [Docker Registry spec](https://docs.docker.com/registry/spec/auth/scope/). Note: you **can not** use wildcards in the scope parameter; you can match exactly one repository and define multiple actions such as `pull` or `push`.

Example scopes:

```
repository:my-repository:pull,push
repository:my-repository:pull
```

## Example Manifest

```yaml
{% include 'generator-acr.yaml' %}
```

Example `ExternalSecret` that references the ACR generator:

```yaml
{% include 'generator-acr-example.yaml' %}
```

Example using an AKS kubelet managed identity to create an [Argo CD Helm chart repository](https://argo-cd.readthedocs.io/en/latest/operator-manual/declarative-setup/#helm-chart-repositories) secret:

```yaml
{% include 'generator-acr-argocd-helm-repo.yaml' %}
```

Source: https://github.com/external-secrets/external-secrets/blob/main//docs/api/generator/acr.md
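The scope grammar — `repository:<name>:<action>[,<action>]`, exactly one repository, no wildcards — can be made explicit with a small helper (the helper name is illustrative and not part of the generator API):

```python
def acr_scope(repository, *actions):
    """Build a Docker registry scope string: repository:<name>:<actions>.

    Exactly one repository; one or more comma-separated actions such as
    'pull' or 'push'. Wildcards are not supported by the generator.
    """
    if not actions:
        raise ValueError("at least one action (e.g. 'pull') is required")
    return f"repository:{repository}:{','.join(actions)}"

scope = acr_scope("my-repository", "pull", "push")
# 'repository:my-repository:pull,push'
```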
CAUTION: Starting with Prometheus 3.0, console templates and libraries are no longer bundled with Prometheus. If you wish to use console templates, you must provide your own templates and libraries by specifying the `--web.console.templates` and `--web.console.libraries` command-line flags. This documentation page is maintained for historical reference and to demonstrate the capabilities of console templates. Please be aware that any referenced console libraries from the Prometheus 2.x branch are no longer maintained and may contain known security vulnerabilities (CVEs).

Console templates allow for creation of arbitrary consoles using the [Go templating language](http://golang.org/pkg/text/template/). These are served from the Prometheus server.

Console templates are the most powerful way to create templates that can be easily managed in source control. There is a learning curve though, so users new to this style of monitoring should try out [Grafana](/docs/visualization/grafana/) first.

## Getting started

Prometheus comes with an example set of consoles to get you going. These can be found at `/consoles/index.html.example` on a running Prometheus and will display Node Exporter consoles if Prometheus is scraping Node Exporters with a `job="node"` label.

The example consoles have 5 parts:

1. A navigation bar on top
1. A menu on the left
1. Time controls on the bottom
1. The main content in the center, usually graphs
1. A table on the right

The navigation bar is for links to other systems, such as other Prometheis [1](/docs/introduction/faq/#what-is-the-plural-of-prometheus), documentation, and whatever else makes sense to you. The menu is for navigation inside the same Prometheus server, which is very useful to be able to quickly open a console in another tab to correlate information. Both are configured in `console_libraries/menu.lib`.

The time controls allow changing of the duration and range of the graphs. Console URLs can be shared and will show the same graphs for others.

The main content is usually graphs. There is a configurable JavaScript graphing library provided that will handle requesting data from Prometheus, and rendering it via [Rickshaw](https://shutterstock.github.io/rickshaw/).

Finally, the table on the right can be used to display statistics in a more compact form than graphs.

## Example Console

This is a basic console. It shows the number of tasks, how many of them are up, the average CPU usage, and the average memory usage in the right-hand-side table. The main content has a queries-per-second graph.

```
{{template "head" .}}

{{template "prom_right_table_head"}}
<tr>
  <th>MyJob</th>
  <th>{{ template "prom_query_drilldown" (args "sum(up{job='myjob'})") }}
      / {{ template "prom_query_drilldown" (args "count(up{job='myjob'})") }}
  </th>
</tr>
<tr>
  <td>CPU</td>
  <td>{{ template "prom_query_drilldown" (args
      "avg by(job)(rate(process_cpu_seconds_total{job='myjob'}[5m]))"
      "s/s" "humanizeNoSmallPrefix") }}
  </td>
</tr>
<tr>
  <td>Memory</td>
  <td>{{ template "prom_query_drilldown" (args
      "avg by(job)(process_resident_memory_bytes{job='myjob'})"
      "B" "humanize1024") }}
  </td>
</tr>
{{template "prom_right_table_tail"}}

{{template "prom_content_head" .}}
<h1>MyJob</h1>

<h3>Queries</h3>
<div id="queryGraph"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#queryGraph"),
  expr: "sum(rate(http_query_count{job='myjob'}[5m]))",
  name: "Queries",
  yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yUnits: "/s",
  yTitle: "Queries"
})
</script>

{{template "prom_content_tail" .}}

{{template "tail"}}
```

The `prom_right_table_head` and `prom_right_table_tail` templates contain the right-hand-side table. This is optional.

`prom_query_drilldown` is a template that will evaluate the expression passed to it, format it, and link to the expression in the [expression browser](/docs/visualization/browser/). The first argument is the expression. The second argument is the unit to use. The third argument is how to format the output. Only the first argument is required.

Valid output formats for the third argument to `prom_query_drilldown`:

* Not specified: Default Go display output.
* `humanize`: Display the result using [metric prefixes](http://en.wikipedia.org/wiki/Metric_prefix).
* `humanizeNoSmallPrefix`: For absolute values greater than 1, display the result using metric prefixes. For absolute values less than 1, display 3 significant digits. This is useful to avoid units such as milliqueries per second that can be produced by `humanize`.
* `humanize1024`: Display the humanized result using a base of 1024 rather than 1000.

Source: https://github.com/prometheus/docs/blob/main//docs/visualization/consoles.md
* `humanizeNoSmallPrefix`: For absolute values greater than 1, display the result using [metric prefixes](http://en.wikipedia.org/wiki/Metric_prefix). For absolute values less than 1, display 3 significant digits. This is useful to avoid units such as milliqueries per second that can be produced by `humanize`.
* `humanize1024`: Display the humanized result using a base of 1024 rather than 1000. This is usually used with `B` as the second argument to produce units such as `KiB` and `MiB`.
* `printf.3g`: Display 3 significant digits.

Custom formats can be defined. See [prom.lib](https://github.com/prometheus/prometheus/blob/release-2.55/console_libraries/prom.lib) for examples.

## Graph Library

The graph library is invoked as:

```
new PromConsole.Graph({
  node: document.querySelector("#queryGraph"),
  expr: "sum(rate(http_query_count{job='myjob'}[5m]))"
})
```

The `head` template loads the required JavaScript and CSS.

Parameters to the graph library:

| Name | Description |
| ------------- | ------------- |
| expr | Required. Expression to graph. Can be a list. |
| node | Required. DOM node to render into. |
| duration | Optional. Duration of the graph. Defaults to 1 hour. |
| endTime | Optional. Unixtime the graph ends at. Defaults to now. |
| width | Optional. Width of the graph, excluding titles. Defaults to auto-detection. |
| height | Optional. Height of the graph, excluding titles and legends. Defaults to 200 pixels. |
| min | Optional. Minimum y-axis value. Defaults to the lowest data value. |
| max | Optional. Maximum y-axis value. Defaults to the highest data value. |
| renderer | Optional. Type of graph. Options are `line` and `area` (stacked graph). Defaults to `line`. |
| name | Optional. Title of plots in legend and hover detail. If passed a string, `[[ label ]]` will be substituted with the label value. If passed a function, it will be passed a map of labels and should return the name as a string. Can be a list. |
| xTitle | Optional. Title of the x-axis. Defaults to `Time`. |
| yUnits | Optional. Units of the y-axis. Defaults to empty. |
| yTitle | Optional. Title of the y-axis. Defaults to empty. |
| yAxisFormatter | Optional. Number formatter for the y-axis. Defaults to `PromConsole.NumberFormatter.humanize`. |
| yHoverFormatter | Optional. Number formatter for the hover detail. Defaults to `PromConsole.NumberFormatter.humanizeExact`. |
| colorScheme | Optional. Color scheme to be used by the plots. Can be either a list of hex color codes or one of the [color scheme names](https://github.com/shutterstock/rickshaw/blob/master/src/js/Rickshaw.Fixtures.Color.js) supported by Rickshaw. Defaults to `'colorwheel'`. |

If both `expr` and `name` are lists, they must be of the same length. The name will be applied to the plots for the corresponding expression.

Valid options for `yAxisFormatter` and `yHoverFormatter`:

* `PromConsole.NumberFormatter.humanize`: Format using [metric prefixes](http://en.wikipedia.org/wiki/Metric_prefix).
* `PromConsole.NumberFormatter.humanizeNoSmallPrefix`: For absolute values greater than 1, format using metric prefixes. For absolute values less than 1, format with 3 significant digits. This is useful to avoid units such as milliqueries per second that can be produced by `PromConsole.NumberFormatter.humanize`.
* `PromConsole.NumberFormatter.humanize1024`: Format the humanized result using a base of 1024 rather than 1000.

Source: https://github.com/prometheus/docs/blob/main//docs/visualization/consoles.md
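For intuition, metric-prefix humanization can be approximated as below. This is a rough, hypothetical sketch in Python, not PromConsole's actual code; its exact rounding and prefix choices differ:

```python
def humanize(value):
    """Approximate metric-prefix formatting (base 1000), e.g. 1500 -> '1.5k'."""
    if value == 0:
        return "0"
    prefixes = ["n", "u", "m", "", "k", "M", "G", "T"]
    exponent = 3  # index of the empty (unit) prefix
    magnitude = abs(value)
    while magnitude >= 1000 and exponent < len(prefixes) - 1:
        value /= 1000.0
        magnitude /= 1000.0
        exponent += 1
    while magnitude < 1 and exponent > 0:
        value *= 1000.0
        magnitude *= 1000.0
        exponent -= 1
    return f"{value:g}{prefixes[exponent]}"

# 0.5 queries/s renders as '500m' -- the "milliqueries" case humanizeNoSmallPrefix avoids.
```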
…] | 0.156032 |
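To make the formatter semantics described above concrete, here is a small illustrative Python sketch (not the PromConsole source, which is JavaScript) of humanize-style metric-prefix formatting. Prefixes for values below 1 (milli, micro, …) are omitted for brevity:

```python
def humanize(value, base=1000):
    """Scale a value into metric prefixes (base 1000) or, in the spirit of
    humanize1024, binary prefixes (base 1024). Small prefixes (m, u, ...)
    are omitted from this sketch."""
    prefixes = ["", "k", "M", "G", "T"] if base == 1000 else ["", "Ki", "Mi", "Gi", "Ti"]
    i = 0
    while abs(value) >= base and i < len(prefixes) - 1:
        value /= base
        i += 1
    return f"{value:.3g}{prefixes[i]}"


def humanize_no_small_prefix(value):
    """Show 3 significant digits for |value| < 1 instead of a small metric
    prefix, avoiding units such as milliqueries per second."""
    if abs(value) < 1:
        return f"{value:.3g}"
    return humanize(value)
```

For example, `humanize(1500)` yields `1.5k`, `humanize(1536, base=1024)` yields `1.5Ki`, and `humanize_no_small_prefix(0.004)` yields `0.004`.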
[Perses](https://perses.dev) is an open-source dashboard and visualization platform designed for observability, with native support for Prometheus as a data source. It enables users to create, manage, and share dashboards for monitoring metrics and visualizing data. Perses aims to provide a simple, flexible, and extensible alternative to other dashboarding tools, focusing on ease of use, community-driven development, GitOps capabilities, and a dashboards-as-code approach. Here is an example of a Perses dashboard querying Prometheus for data: [](/assets/docs/perses\_prometheus.png) ## Installing To install Perses, see the official [Perses documentation](https://perses.dev/perses/docs/installation/in-a-container/). ## Using By default, Perses will be listening on port `8080`. You can access the web UI at `http://localhost:8080`. There is no login by default. ### Creating a Prometheus data source To learn how to set up a data source in Perses, refer to the [Perses documentation](https://perses.dev/perses/docs/concepts/datasources). Once this connection to your Prometheus instance is configured, you can query it from the Dashboard and Explore views. ### Importing pre-built dashboards Perses provides a set of pre-built dashboards that you can import into your instance. These dashboards are maintained by the community and can be found in the [Perses dashboard repository](https://github.com/perses/community-dashboards). | https://github.com/prometheus/docs/blob/main//docs/visualization/perses.md | main | prometheus | [
…] | 0.251079 |
[Grafana](http://grafana.com/) is an open-source analytics and visualization platform used to monitor and analyze metrics from various data sources. It allows users to create, explore, and share interactive dashboards, supporting integrations with databases like Prometheus, InfluxDB, Elasticsearch, and more. Grafana is widely used for observability, providing alerting, plugin extensibility, and a flexible query editor for real-time data visualization. Note: The Grafana data source for Prometheus is included since Grafana 2.5.0 (2015-10-28). The following shows an example Grafana dashboard which queries Prometheus for data: [](/assets/docs/grafana\_prometheus.png) ## Installing To install Grafana see the [official Grafana documentation](https://grafana.com/grafana/download/). ## Using By default, Grafana will be listening on [http://localhost:3000](http://localhost:3000). The default login is "admin" / "admin". ### Creating a Prometheus data source To create a Prometheus data source in Grafana: 1. Click on the "cogwheel" in the sidebar to open the Configuration menu. 2. Click on "Data Sources". 3. Click on "Add data source". 4. Select "Prometheus" as the type. 5. Set the appropriate Prometheus server URL (for example, `http://localhost:9090/`) 6. Adjust other data source settings as desired (for example, choosing the right Access method). 7. Click "Save & Test" to save the new data source. The following shows an example data source configuration: [](/assets/docs/grafana\_configuring\_datasource.png) ### Creating a Prometheus graph Follow the standard way of adding a new Grafana graph. Then: 1. Click the graph title, then click "Edit". 2. Under the "Metrics" tab, select your Prometheus data source (bottom right). 3. Enter any Prometheus expression into the "Query" field, while using the "Metric" field to lookup metrics via autocompletion. 4. To format the legend names of time series, use the "Legend format" input. 
For example, to show only the `method` and `status` labels of a returned query result, separated by a dash, you could use the legend format string `{{method}} - {{status}}`. 5. Tune other graph settings until you have a working graph. The following shows an example Prometheus graph configuration: [](/assets/docs/grafana\_qps\_graph.png) In Grafana 7.2 and later, the `$__rate_interval` variable is [recommended](https://grafana.com/docs/grafana/latest/datasources/prometheus/#using-\_\_rate\_interval) for use in the `rate` and `increase` functions. ### Importing pre-built dashboards from Grafana.com Grafana.com maintains [a collection of shared dashboards](https://grafana.com/dashboards) which can be downloaded and used with standalone instances of Grafana. Use the Grafana.com "Filter" option to browse dashboards for the "Prometheus" data source only. You must currently manually edit the downloaded JSON files and correct the `datasource:` entries to reflect the Grafana data source name which you chose for your Prometheus server. Use the "Dashboards" → "Home" → "Import" option to import the edited dashboard file into your Grafana install. | https://github.com/prometheus/docs/blob/main//docs/visualization/grafana.md | main | prometheus | [
…] | 0.256851 |
Occasionally you will need to monitor components which cannot be scraped. The [Prometheus Pushgateway](https://github.com/prometheus/pushgateway) allows you to push time series from [short-lived service-level batch jobs](/docs/practices/pushing/) to an intermediary job which Prometheus can scrape. Combined with Prometheus's simple text-based exposition format, this makes it easy to instrument even shell scripts without a client library. \* For more information on using the Pushgateway and use from a Unix shell, see the project's [README.md](https://github.com/prometheus/pushgateway/blob/master/README.md). \* For use from Java see the [Pushgateway documentation](https://prometheus.github.io/client\_java/exporters/pushgateway/). \* For use from Go see the [Push](https://godoc.org/github.com/prometheus/client\_golang/prometheus/push#Pusher.Push) and [Add](https://godoc.org/github.com/prometheus/client\_golang/prometheus/push#Pusher.Add) methods. \* For use from Python see [Exporting to a Pushgateway](https://prometheus.github.io/client\_python/exporting/pushgateway/). \* For use from Ruby see the [Pushgateway documentation](https://github.com/prometheus/client\_ruby#pushgateway). \* To find out about Pushgateway support of [client libraries maintained outside of the Prometheus project](/docs/instrumenting/clientlibs/), refer to their respective documentation. | https://github.com/prometheus/docs/blob/main//docs/instrumenting/pushing.md | main | prometheus | [
…] | 0.176851 |
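As a sketch of the exposition-format simplicity mentioned above, the following Python snippet (standard library only; the metric name, job name, and gateway URL are illustrative) builds a text-format payload and PUTs it to a Pushgateway's per-job endpoint, mirroring the shell-plus-curl workflow from the README:

```python
import urllib.request


def format_gauge(name, value, help_text):
    """Render one gauge in the Prometheus text exposition format."""
    return (
        f"# HELP {name} {help_text}\n"
        f"# TYPE {name} gauge\n"
        f"{name} {value}\n"
    )


def push(gateway, job, payload):
    """PUT the payload to the Pushgateway's per-job endpoint, replacing
    the metrics previously pushed for that job."""
    req = urllib.request.Request(
        f"{gateway}/metrics/job/{job}",
        data=payload.encode("utf-8"),
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


payload = format_gauge(
    "job_last_success_unixtime", 1700000000,
    "Last time the batch job succeeded, in unixtime.",
)
# Requires a running Pushgateway, e.g.:
# push("http://localhost:9091", "some_batch_job", payload)
```

For production use, prefer the client-library helpers linked above over hand-rolled payloads.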
There are a number of libraries and servers which help in exporting existing metrics from third-party systems as Prometheus metrics. This is useful for cases where it is not feasible to instrument a given system with Prometheus metrics directly (for example, HAProxy or Linux system stats). ## Third-party exporters Some of these exporters are maintained as part of the official [Prometheus GitHub organization](https://github.com/prometheus); those are marked as \*official\*, while others are externally contributed and maintained. We encourage the creation of more exporters but cannot vet all of them for [best practices](/docs/instrumenting/writing\_exporters/). Commonly, those exporters are hosted outside of the Prometheus GitHub organization. The [exporter default port](https://github.com/prometheus/prometheus/wiki/Default-port-allocations) wiki page has become another catalog of exporters, and may include exporters not listed here due to overlapping functionality or because they are still in development. The [JMX exporter](https://github.com/prometheus/jmx\_exporter) can export from a wide variety of JVM-based applications, for example [Kafka](http://kafka.apache.org/) and [Cassandra](http://cassandra.apache.org/). 
### Databases \* [Aerospike exporter](https://github.com/aerospike/aerospike-prometheus-exporter) \* [AWS RDS exporter](https://github.com/qonto/prometheus-rds-exporter) \* [ClickHouse exporter](https://github.com/f1yegor/clickhouse\_exporter) \* [Consul exporter](https://github.com/prometheus/consul\_exporter) (\*\*official\*\*) \* [Couchbase exporter](https://github.com/couchbase/couchbase-exporter) \* [CouchDB exporter](https://github.com/gesellix/couchdb-exporter) \* [Druid Exporter](https://github.com/opstree/druid-exporter) \* [Elasticsearch exporter](https://github.com/prometheus-community/elasticsearch\_exporter) \* [EventStore exporter](https://github.com/marcinbudny/eventstore\_exporter) \* [IoTDB exporter](https://github.com/fagnercarvalho/prometheus-iotdb-exporter) \* [KDB+ exporter](https://github.com/KxSystems/prometheus-kdb-exporter) \* [Memcached exporter](https://github.com/prometheus/memcached\_exporter) (\*\*official\*\*) \* [MongoDB exporter](https://github.com/percona/mongodb\_exporter) \* [MongoDB query exporter](https://github.com/raffis/mongodb-query-exporter) \* [MongoDB Node.js Driver exporter](https://github.com/christiangalsterer/mongodb-driver-prometheus-exporter) \* [MSSQL server exporter](https://github.com/awaragi/prometheus-mssql-exporter) \* [MySQL router exporter](https://github.com/rluisr/mysqlrouter\_exporter) \* [MySQL server exporter](https://github.com/prometheus/mysqld\_exporter) (\*\*official\*\*) \* [OpenTSDB Exporter](https://github.com/cloudflare/opentsdb\_exporter) \* [Oracle DB Exporter](https://github.com/iamseth/oracledb\_exporter) \* [PgBouncer exporter](https://github.com/prometheus-community/pgbouncer\_exporter) \* [PostgreSQL exporter](https://github.com/prometheus-community/postgres\_exporter) \* [Presto exporter](https://github.com/yahoojapan/presto\_exporter) \* [ProxySQL exporter](https://github.com/percona/proxysql\_exporter) \* [RavenDB exporter](https://github.com/marcinbudny/ravendb\_exporter) \* [Redis 
exporter](https://github.com/oliver006/redis\_exporter) \* [RethinkDB exporter](https://github.com/oliver006/rethinkdb\_exporter) \* [SQL exporter](https://github.com/burningalchemist/sql\_exporter) \* [Tarantool metric library](https://github.com/tarantool/metrics) \* [Twemproxy](https://github.com/stuartnelson3/twemproxy\_exporter) ### Hardware related \* [apcupsd exporter](https://github.com/mdlayher/apcupsd\_exporter) \* [BIG-IP exporter](https://github.com/ExpressenAB/bigip\_exporter) \* [Bosch Sensortec BMP/BME exporter](https://github.com/David-Igou/bsbmp-exporter) \* [Collins exporter](https://github.com/soundcloud/collins\_exporter) \* [Dell Hardware OMSA exporter](https://github.com/galexrt/dellhw\_exporter) \* [Disk usage exporter](https://github.com/dundee/disk\_usage\_exporter) \* [Fortigate exporter](https://github.com/bluecmd/fortigate\_exporter) \* [IBM Z HMC exporter](https://github.com/zhmcclient/zhmc-prometheus-exporter) \* [IoT Edison exporter](https://github.com/roman-vynar/edison\_exporter) \* [InfiniBand exporter](https://github.com/treydock/infiniband\_exporter) \* [IPMI exporter](https://github.com/soundcloud/ipmi\_exporter) \* [knxd exporter](https://github.com/RichiH/knxd\_exporter) \* [Modbus exporter](https://github.com/RichiH/modbus\_exporter) \* [Netgear Cable Modem Exporter](https://github.com/ickymettle/netgear\_cm\_exporter) \* [Netgear Router exporter](https://github.com/DRuggeri/netgear\_exporter) \* [Network UPS Tools (NUT) exporter](https://github.com/DRuggeri/nut\_exporter) \* [Node/system metrics exporter](https://github.com/prometheus/node\_exporter) (\*\*official\*\*) \* [NVIDIA DCGM (GPU) exporter](https://github.com/NVIDIA/dcgm-exporter) \* [ProSAFE exporter](https://github.com/dalance/prosafe\_exporter) \* [Redfish exporter](https://github.com/comcast/fishymetrics) \* [SmartRAID exporter](https://gitlab.com/calestyo/prometheus-smartraid-exporter) \* [Waveplus Radon Sensor 
Exporter](https://github.com/jeremybz/waveplus\_exporter) \* [Weathergoose Climate Monitor Exporter](https://github.com/branttaylor/watchdog-prometheus-exporter) \* [Windows exporter](https://github.com/prometheus-community/windows\_exporter) \* [Intel® Optane™ Persistent Memory Controller Exporter](https://github.com/intel/ipmctl-exporter) ### Issue trackers and continuous integration \* [Bamboo exporter](https://github.com/AndreyVMarkelov/bamboo-prometheus-exporter) \* [Bitbucket exporter](https://github.com/AndreyVMarkelov/prom-bitbucket-exporter) \* [Confluence exporter](https://github.com/AndreyVMarkelov/prom-confluence-exporter) \* [Jenkins exporter](https://github.com/lovoo/jenkins\_exporter) \* [JIRA exporter](https://github.com/AndreyVMarkelov/jira-prometheus-exporter) ### Messaging systems \* [Beanstalkd exporter](https://github.com/messagebird/beanstalkd\_exporter) \* [EMQ exporter](https://github.com/nuvo/emq\_exporter) \* [Gearman exporter](https://github.com/bakins/gearman-exporter) \* [IBM MQ exporter](https://github.com/ibm-messaging/mq-metric-samples/tree/master/cmd/mq\_prometheus) \* [Kafka exporter](https://github.com/danielqsj/kafka\_exporter) \* [NATS exporter](https://github.com/nats-io/prometheus-nats-exporter) \* [NSQ exporter](https://github.com/lovoo/nsq\_exporter) \* [Mirth Connect exporter](https://github.com/vynca/mirth\_exporter) \* [MQTT blackbox exporter](https://github.com/inovex/mqtt\_blackbox\_exporter) \* [MQTT2Prometheus](https://github.com/hikhvar/mqtt2prometheus) \* [RabbitMQ exporter](https://github.com/kbudde/rabbitmq\_exporter) \* [RabbitMQ Management Plugin exporter](https://github.com/deadtrickster/prometheus\_rabbitmq\_exporter) \* [RocketMQ exporter](https://github.com/apache/rocketmq-exporter) \* [Solace exporter](https://github.com/solacecommunity/solace-prometheus-exporter) ### Storage \* [Ceph exporter](https://github.com/digitalocean/ceph\_exporter) \* [Ceph RADOSGW 
exporter](https://github.com/blemmenes/radosgw\_usage\_exporter) \* [Gluster exporter](https://github.com/ofesseler/gluster\_exporter) \* [GPFS exporter](https://github.com/treydock/gpfs\_exporter) \* [Hadoop HDFS FSImage exporter](https://github.com/marcelmay/hadoop-hdfs-fsimage-exporter) \* [HPE CSI info metrics provider](https://scod.hpedev.io/csi\_driver/metrics.html) \* [HPE storage array exporter](https://hpe-storage.github.io/array-exporter/) \* [Lustre exporter](https://github.com/HewlettPackard/lustre\_exporter) \* [NetApp E-Series exporter](https://github.com/treydock/eseries\_exporter) \* [Pure Storage exporter](https://github.com/PureStorage-OpenConnect/pure-exporter) \* [ScaleIO exporter](https://github.com/syepes/sio2prom) \* [Tivoli Storage Manager/IBM Spectrum Protect exporter](https://github.com/treydock/tsm\_exporter) \* [IBM Storage Scale metrics exporter](https://github.com/IBM/ibm-spectrum-scale-bridge-for-grafana) ### HTTP \* [Apache exporter](https://github.com/Lusitaniae/apache\_exporter) \* [HAProxy exporter](https://github.com/prometheus/haproxy\_exporter) (\*\*official\*\*) \* [Nginx metric library](https://github.com/knyar/nginx-lua-prometheus) \* [Nginx VTS exporter](https://github.com/sysulq/nginx-vts-exporter) \* [Passenger exporter](https://github.com/stuartnelson3/passenger\_exporter) \* [Squid exporter](https://github.com/boynux/squid-exporter) \* [Tinyproxy exporter](https://github.com/gmm42/tinyproxy\_exporter) \* [Varnish exporter](https://github.com/jonnenauha/prometheus\_varnish\_exporter) \* [WebDriver exporter](https://github.com/mattbostock/webdriver\_exporter) ### APIs \* [AWS ECS exporter](https://github.com/slok/ecs-exporter) \* [AWS Health exporter](https://github.com/Jimdo/aws-health-exporter) \* [AWS SQS exporter](https://github.com/jmal98/sqs\_exporter) \* [AWS SQS Prometheus exporter](https://github.com/jmriebold/sqs-prometheus-exporter) \* [Azure Health 
exporter](https://github.com/matzefriedrich/az-health-exporter) \* [BigBlueButton](https://github.com/greenstatic/bigbluebutton-exporter) \* [Cloudflare exporter](https://gitlab.com/gitlab-org/cloudflare\_exporter) \* [Cryptowat exporter](https://github.com/nbarrientos/cryptowat\_exporter) \* [DigitalOcean exporter](https://github.com/metalmatze/digitalocean\_exporter) \* [Docker Cloud exporter](https://github.com/infinityworks/docker-cloud-exporter) \* [Docker Hub exporter](https://github.com/infinityworks/docker-hub-exporter) \* [Fastly exporter](https://github.com/peterbourgon/fastly-exporter) \* [GitHub exporter](https://github.com/githubexporter/github-exporter) \* [Gmail exporter](https://github.com/jamesread/prometheus-gmail-exporter/) \* [GraphQL exporter](https://github.com/ricardbejarano/graphql\_exporter) \* [InstaClustr exporter](https://github.com/fcgravalos/instaclustr\_exporter) \* [IO River exporter](https://github.com/ioriver/ioriver-exporter) \* [Mozilla Observatory exporter](https://github.com/Jimdo/observatory-exporter) \* [OpenWeatherMap exporter](https://github.com/RichiH/openweathermap\_exporter) \* [Pagespeed exporter](https://github.com/foomo/pagespeed\_exporter) \* [Rancher exporter](https://github.com/infinityworks/prometheus-rancher-exporter) \* [Speedtest exporter](https://github.com/nlamirault/speedtest\_exporter) \* [Tankerkönig API Exporter](https://github.com/lukasmalkmus/tankerkoenig\_exporter) ### Logging \* [Fluentd exporter](https://github.com/V3ckt0r/fluentd\_exporter) \* | https://github.com/prometheus/docs/blob/main//docs/instrumenting/exporters.md | main | prometheus | [
…] | 0.16729 |
exporter](https://github.com/infinityworks/docker-cloud-exporter) \* [Docker Hub exporter](https://github.com/infinityworks/docker-hub-exporter) \* [Fastly exporter](https://github.com/peterbourgon/fastly-exporter) \* [GitHub exporter](https://github.com/githubexporter/github-exporter) \* [Gmail exporter](https://github.com/jamesread/prometheus-gmail-exporter/) \* [GraphQL exporter](https://github.com/ricardbejarano/graphql\_exporter) \* [InstaClustr exporter](https://github.com/fcgravalos/instaclustr\_exporter) \* [IO River exporter](https://github.com/ioriver/ioriver-exporter) \* [Mozilla Observatory exporter](https://github.com/Jimdo/observatory-exporter) \* [OpenWeatherMap exporter](https://github.com/RichiH/openweathermap\_exporter) \* [Pagespeed exporter](https://github.com/foomo/pagespeed\_exporter) \* [Rancher exporter](https://github.com/infinityworks/prometheus-rancher-exporter) \* [Speedtest exporter](https://github.com/nlamirault/speedtest\_exporter) \* [Tankerkönig API Exporter](https://github.com/lukasmalkmus/tankerkoenig\_exporter) ### Logging \* [Fluentd exporter](https://github.com/V3ckt0r/fluentd\_exporter) \* [Google's mtail log data extractor](https://github.com/google/mtail) \* [Grok exporter](https://github.com/fstab/grok\_exporter) ### FinOps \* [AWS Cost Exporter](https://github.com/electrolux-oss/aws-cost-exporter) \* [Azure Cost Exporter](https://github.com/electrolux-oss/azure-cost-exporter) \* [Kubernetes Cost Exporter](https://github.com/electrolux-oss/kubernetes-cost-exporter) ### Other monitoring systems \* [Akamai Cloudmonitor exporter](https://github.com/ExpressenAB/cloudmonitor\_exporter) \* [Alibaba Cloudmonitor exporter](https://github.com/aylei/aliyun-exporter) \* [AWS CloudWatch exporter](https://github.com/prometheus/cloudwatch\_exporter) (\*\*official\*\*) \* [Azure Monitor exporter](https://github.com/RobustPerception/azure\_metrics\_exporter) \* [CCF HuaTuo exporter](https://github.com/ccfos/huatuo) \* [Cloud Foundry 
Firehose exporter](https://github.com/cloudfoundry-community/firehose\_exporter) \* [Collectd exporter](https://github.com/prometheus/collectd\_exporter) (\*\*official\*\*) \* [Google Stackdriver exporter](https://github.com/frodenas/stackdriver\_exporter) \* [Graphite exporter](https://github.com/prometheus/graphite\_exporter) (\*\*official\*\*) \* [Heka dashboard exporter](https://github.com/docker-infra/heka\_exporter) \* [Heka exporter](https://github.com/imgix/heka\_exporter) \* [Huawei Cloudeye exporter](https://github.com/huaweicloud/cloudeye-exporter) \* [InfluxDB exporter](https://github.com/prometheus/influxdb\_exporter) (\*\*official\*\*) \* [ITM exporter](https://github.com/rafal-szypulka/itm\_exporter) \* [Java GC exporter](https://github.com/loyispa/jgc\_exporter) \* [JavaMelody exporter](https://github.com/fschlag/javamelody-prometheus-exporter) \* [JMX exporter](https://github.com/prometheus/jmx\_exporter) (\*\*official\*\*) \* [Munin exporter](https://github.com/pvdh/munin\_exporter) \* [Nagios / Naemon exporter](https://github.com/Griesbacher/Iapetos) \* [Neptune Apex exporter](https://github.com/dl-romero/neptune\_exporter) \* [New Relic exporter](https://github.com/mrf/newrelic\_exporter) \* [NRPE exporter](https://github.com/robustperception/nrpe\_exporter) \* [Osquery exporter](https://github.com/zwopir/osquery\_exporter) \* [OTC CloudEye exporter](https://github.com/tiagoReichert/otc-cloudeye-prometheus-exporter) \* [Pingdom exporter](https://github.com/giantswarm/prometheus-pingdom-exporter) \* [Promitor (Azure Monitor)](https://promitor.io) \* [scollector exporter](https://github.com/tgulacsi/prometheus\_scollector) \* [Sensu exporter](https://github.com/reachlin/sensu\_exporter) \* [site24x7\_exporter](https://github.com/svenstaro/site24x7\_exporter) \* [SNMP exporter](https://github.com/prometheus/snmp\_exporter) (\*\*official\*\*) \* [StatsD exporter](https://github.com/prometheus/statsd\_exporter) (\*\*official\*\*) \* [TencentCloud 
monitor exporter](https://github.com/tencentyun/tencentcloud-exporter) \* [ThousandEyes exporter](https://github.com/sapcc/1000eyes\_exporter) \* [StatusPage exporter](https://github.com/sergeyshevch/statuspage-exporter) ### Miscellaneous \* [ACT Fibernet Exporter](https://git.captnemo.in/nemo/prometheus-act-exporter) \* [BIND exporter](https://github.com/prometheus-community/bind\_exporter) \* [BIND query exporter](https://github.com/DRuggeri/bind\_query\_exporter) \* [Bitcoind exporter](https://github.com/LePetitBloc/bitcoind-exporter) \* [Blackbox exporter](https://github.com/prometheus/blackbox\_exporter) (\*\*official\*\*) \* [Bungeecord exporter](https://github.com/weihao/bungeecord-prometheus-exporter) \* [BOSH exporter](https://github.com/cloudfoundry-community/bosh\_exporter) \* [cAdvisor](https://github.com/google/cadvisor) \* [Cachet exporter](https://github.com/ContaAzul/cachet\_exporter) \* [ccache exporter](https://github.com/virtualtam/ccache\_exporter) \* [c-lightning exporter](https://github.com/lightningd/plugins/tree/master/prometheus) \* [DHCPD leases exporter](https://github.com/DRuggeri/dhcpd\_leases\_exporter) \* [Dovecot exporter](https://github.com/kumina/dovecot\_exporter) \* [Dnsmasq exporter](https://github.com/google/dnsmasq\_exporter) \* [eBPF exporter](https://github.com/cloudflare/ebpf\_exporter) \* [eBPF network traffic exporter](https://github.com/kasd/texporter) \* [Ethereum Client exporter](https://github.com/31z4/ethereum-prometheus-exporter) \* [FFmpeg exporter](https://github.com/domcyrus/ffmpeg\_exporter) \* [File statistics exporter](https://github.com/michael-doubez/filestat\_exporter) \* [JFrog Artifactory Exporter](https://github.com/peimanja/artifactory\_exporter) \* [Hostapd Exporter](https://github.com/Fundacio-i2CAT/hostapd\_prometheus\_exporter) \* [IBM Security Verify Access / Security Access Manager Exporter](https://gitlab.com/zeblawson/isva-prometheus-exporter) \* [IPsec 
exporter](https://github.com/torilabs/ipsec-prometheus-exporter) \* [ipset exporter](https://github.com/hatamiarash7/ipset-exporter) \* [IRCd exporter](https://github.com/dgl/ircd\_exporter) \* [Linux HA ClusterLabs exporter](https://github.com/ClusterLabs/ha\_cluster\_exporter) \* [JMeter plugin](https://github.com/johrstrom/jmeter-prometheus-plugin) \* [JSON exporter](https://github.com/prometheus-community/json\_exporter) \* [Kannel exporter](https://github.com/apostvav/kannel\_exporter) \* [Kemp LoadBalancer exporter](https://github.com/giantswarm/prometheus-kemp-exporter) \* [Kibana Exporter](https://github.com/pjhampton/kibana-prometheus-exporter) \* [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) \* [Locust Exporter](https://github.com/ContainerSolutions/locust\_exporter) \* [Meteor JS web framework exporter](https://atmospherejs.com/sevki/prometheus-exporter) \* [Minecraft exporter module](https://github.com/Baughn/PrometheusIntegration) \* [Minecraft exporter](https://github.com/dirien/minecraft-prometheus-exporter) \* [NetBird exporter](https://github.com/matanbaruch/netbird-api-exporter) \* [Nomad exporter](https://gitlab.com/yakshaving.art/nomad-exporter) \* [nftables exporter](https://github.com/Intrinsec/nftables\_exporter) \* [OpenStack exporter](https://github.com/openstack-exporter/openstack-exporter) \* [OpenStack blackbox exporter](https://github.com/infraly/openstack\_client\_exporter) \* [OpenVPN exporter](https://github.com/Fadi-hamwi/OpenVPN-Metrics-Exporter) \* [oVirt exporter](https://github.com/czerwonk/ovirt\_exporter) \* [Pact Broker exporter](https://github.com/ContainerSolutions/pactbroker\_exporter) \* [PHP-FPM exporter](https://github.com/bakins/php-fpm-exporter) \* [Podman exporter](https://github.com/containers/prometheus-podman-exporter) \* [Prefect2 exporter](https://github.com/pathfinder177/prefect2-prometheus-exporter) \* [Process exporter](https://github.com/ncabatoff/process-exporter) \* [rTorrent 
exporter](https://github.com/mdlayher/rtorrent\_exporter) \* [Rundeck exporter](https://github.com/phsmith/rundeck\_exporter) \* [SABnzbd exporter](https://github.com/msroest/sabnzbd\_exporter) \* [SAML exporter](https://github.com/DoodleScheduling/saml-exporter) \* [Scraparr](https://github.com/thecfu/scraparr) \* [Script exporter](https://github.com/adhocteam/script\_exporter) \* [Shield exporter](https://github.com/cloudfoundry-community/shield\_exporter) \* [Smokeping prober](https://github.com/SuperQ/smokeping\_prober) \* [SMTP/Maildir MDA blackbox prober](https://github.com/cherti/mailexporter) \* [SoftEther exporter](https://github.com/dalance/softether\_exporter) \* [SSH exporter](https://github.com/treydock/ssh\_exporter) \* [Teamspeak3 exporter](https://github.com/hikhvar/ts3exporter) \* [Transmission exporter](https://github.com/metalmatze/transmission-exporter) \* [Unbound exporter](https://github.com/kumina/unbound\_exporter) \* [WireGuard exporter](https://github.com/MindFlavor/prometheus\_wireguard\_exporter) \* [Xen exporter](https://github.com/lovoo/xenstats\_exporter) \* [ZLMediaKit exporter](https://github.com/guohuachan/ZLMediaKit\_exporter) When implementing a new Prometheus exporter, please follow the [guidelines on writing exporters](/docs/instrumenting/writing\_exporters) Please also consider consulting the [development mailing list](https://groups.google.com/forum/#!forum/prometheus-developers). We are happy to give advice on how to make your exporter as useful and consistent as possible. 
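To illustrate what an exporter boils down to, here is a deliberately minimal Python sketch using only the standard library: it serves one hand-written gauge in the text exposition format on `/metrics`. The metric name and port are illustrative, and a real exporter should use an official client library and follow the guidelines linked above.

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

START = time.time()


def render_metrics():
    """Build the text exposition payload for this exporter's one gauge."""
    return (
        "# HELP example_uptime_seconds Seconds since this exporter started.\n"
        "# TYPE example_uptime_seconds gauge\n"
        f"example_uptime_seconds {time.time() - START:.0f}\n"
    )


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics().encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def serve(port=8000):
    """Block forever serving /metrics; scrape with curl http://localhost:8000/metrics."""
    HTTPServer(("", port), MetricsHandler).serve_forever()
```

Calling `serve()` starts the exporter; a Prometheus scrape config pointing at `localhost:8000` would then collect `example_uptime_seconds`.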
## Software exposing Prometheus metrics Some third-party software exposes metrics in the Prometheus format, so no separate exporters are needed: \* [Ansible Automation Platform Automation Controller (AWX)](https://docs.ansible.com/automation-controller/latest/html/administration/metrics.html) \* [App Connect Enterprise](https://github.com/ot4i/ace-docker) \* [Ballerina](https://ballerina.io/) \* [BFE](https://github.com/baidu/bfe) \* [Caddy](https://caddyserver.com/docs/metrics) (\*\*direct\*\*) \* [Ceph](https://docs.ceph.com/en/latest/mgr/prometheus/) \* [CockroachDB](https://www.cockroachlabs.com/docs/stable/monitoring-and-alerting.html#prometheus-endpoint) \* [Collectd](https://collectd.org/wiki/index.php/Plugin:Write\_Prometheus) \* [Concourse](https://concourse-ci.org/) \* [CRG Roller Derby Scoreboard](https://github.com/rollerderby/scoreboard) (\*\*direct\*\*) \* [Diffusion](https://docs.pushtechnology.com/docs/latest/manual/html/administratorguide/systemmanagement/r\_statistics.html) \* [Docker Daemon](https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-metrics) \* [Doorman](https://github.com/youtube/doorman) (\*\*direct\*\*) \* [Dovecot](https://doc.dovecot.org/main/core/config/statistics.html#openmetrics) \* [Envoy](https://www.envoyproxy.io/docs/envoy/latest/operations/admin.html#get--stats?format=prometheus) \* [Etcd](https://github.com/coreos/etcd) (\*\*direct\*\*) \* [Flink](https://github.com/apache/flink) \* [FreeBSD Kernel](https://www.freebsd.org/cgi/man.cgi?query=prometheus\_sysctl\_exporter&apropos=0&sektion=8&manpath=FreeBSD+12-current&arch=default&format=html) \* [GitLab](https://docs.gitlab.com/ee/administration/monitoring/prometheus/gitlab\_metrics.html) \* [Grafana](https://grafana.com/docs/grafana/latest/administration/view-server/internal-metrics/) \* [JavaMelody](https://github.com/javamelody/javamelody/wiki/UserGuideAdvanced#exposing-metrics-to-prometheus) \* [Kong](https://github.com/Kong/kong-plugin-prometheus) \* 
[Kubernetes](https://github.com/kubernetes/kubernetes) (\*\*direct\*\*) \* [LavinMQ](https://lavinmq.com/) \* [Linkerd](https://github.com/BuoyantIO/linkerd) \* [mgmt](https://github.com/purpleidea/mgmt/blob/master/docs/prometheus.md) \* [MidoNet](https://github.com/midonet/midonet) \* [midonet-kubernetes](https://github.com/midonet/midonet-kubernetes) (\*\*direct\*\*) \* [MinIO](https://docs.minio.io/docs/how-to-monitor-minio-using-prometheus.html) \* [PATROL with Monitoring Studio X](https://www.sentrysoftware.com/library/swsyx/prometheus/exposing-patrol-parameters-in-prometheus.html) \* [Netdata](https://github.com/firehol/netdata) \* [OpenZiti](https://openziti.github.io) \* [Pomerium](https://pomerium.com/reference/#metrics-address) \* [Pretix](https://pretix.eu/) \* [Quobyte](https://www.quobyte.com/) (\*\*direct\*\*) \* [RabbitMQ](https://rabbitmq.com/prometheus.html) \* [RobustIRC](http://robustirc.net/) \* [ScyllaDB](http://github.com/scylladb/scylla) \* [Skipper](https://github.com/zalando/skipper) \* [SkyDNS](https://github.com/skynetservices/skydns) (\*\*direct\*\*) \* [Telegraf](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/prometheus\_client) \* [Traefik](https://github.com/containous/traefik) \* [Vector](https://vector.dev) \* [VerneMQ](https://github.com/vernemq/vernemq) \* [Flux](https://github.com/fluxcd/flux2) \* [Xandikos](https://www.xandikos.org/) (\*\*direct\*\*) \* | https://github.com/prometheus/docs/blob/main//docs/instrumenting/exporters.md | main | prometheus | [
…] | 0.135639 |
* [mgmt](https://github.com/purpleidea/mgmt/blob/master/docs/prometheus.md)
* [MidoNet](https://github.com/midonet/midonet)
* [midonet-kubernetes](https://github.com/midonet/midonet-kubernetes) (**direct**)
* [MinIO](https://docs.minio.io/docs/how-to-monitor-minio-using-prometheus.html)
* [PATROL with Monitoring Studio X](https://www.sentrysoftware.com/library/swsyx/prometheus/exposing-patrol-parameters-in-prometheus.html)
* [Netdata](https://github.com/firehol/netdata)
* [OpenZiti](https://openziti.github.io)
* [Pomerium](https://pomerium.com/reference/#metrics-address)
* [Pretix](https://pretix.eu/)
* [Quobyte](https://www.quobyte.com/) (**direct**)
* [RabbitMQ](https://rabbitmq.com/prometheus.html)
* [RobustIRC](http://robustirc.net/)
* [ScyllaDB](http://github.com/scylladb/scylla)
* [Skipper](https://github.com/zalando/skipper)
* [SkyDNS](https://github.com/skynetservices/skydns) (**direct**)
* [Telegraf](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/prometheus_client)
* [Traefik](https://github.com/containous/traefik)
* [Vector](https://vector.dev)
* [VerneMQ](https://github.com/vernemq/vernemq)
* [Flux](https://github.com/fluxcd/flux2)
* [Xandikos](https://www.xandikos.org/) (**direct**)
* [Zipkin](https://github.com/openzipkin/zipkin/tree/master/zipkin-server#metrics)

The software marked *direct* is also directly instrumented with a Prometheus client library.

## Other third-party utilities

This section lists libraries and other utilities that help you instrument code in a certain language. They are not Prometheus client libraries themselves but make use of one of the normal Prometheus client libraries under the hood. As for all independently maintained software, we cannot vet all of them for best practices.

* Clojure: [iapetos](https://github.com/clj-commons/iapetos)
* Go: [go-metrics instrumentation library](https://github.com/armon/go-metrics)
* Go: [gokit](https://github.com/peterbourgon/gokit)
* Go: [prombolt](https://github.com/mdlayher/prombolt)
* Java/JVM: [EclipseLink metrics collector](https://github.com/VitaNuova/eclipselinkexporter)
* Java/JVM: [Hystrix metrics publisher](https://github.com/ahus1/prometheus-hystrix)
* Java/JVM: [Jersey metrics collector](https://github.com/VitaNuova/jerseyexporter)
* Java/JVM: [Micrometer Prometheus Registry](https://micrometer.io/docs/registry/prometheus)
* Python-Django: [django-prometheus](https://github.com/korfuri/django-prometheus)
* Node.js: [swagger-stats](https://github.com/slanatech/swagger-stats)

Source: https://github.com/prometheus/docs/blob/main//docs/instrumenting/exporters.md
## Abstract

This document specifies the protocol negotiation mechanism used by Prometheus when scraping metrics from targets. It defines the Accept header format, supported Content Types, and the negotiation process for determining the best available format for metric exposition.

## Introduction

Prometheus supports multiple formats for scraping metrics, including both text-based and binary protobuf formats. Based on the value of the Accept header, the target will pick the best available Content Type for its reply.

## Protocol Types

### Supported Protocols

The following protocols are supported by Prometheus:

1. `PrometheusProto` - Binary protobuf format
2. `PrometheusText0.0.4` - Prometheus text format version 0.0.4
3. `PrometheusText1.0.0` - Prometheus text format version 1.0.0
4. `OpenMetricsText0.0.1` - OpenMetrics text format version 0.0.1
5. `OpenMetricsText1.0.0` - OpenMetrics text format version 1.0.0

### Protocol Headers

Each protocol MUST be associated with a specific MIME type and version:

| Protocol             | MIME Type                       | Parameters                                                 |
| -------------------- | ------------------------------- | ---------------------------------------------------------- |
| PrometheusProto      | application/vnd.google.protobuf | proto=io.prometheus.client.MetricFamily;encoding=delimited |
| PrometheusText0.0.4  | text/plain                      | version=0.0.4                                              |
| PrometheusText1.0.0  | text/plain                      | version=1.0.0                                              |
| OpenMetricsText0.0.1 | application/openmetrics-text    | version=0.0.1                                              |
| OpenMetricsText1.0.0 | application/openmetrics-text    | version=1.0.0                                              |

## Accept Header Construction

The Accept header is constructed by Prometheus to indicate what formats it supports.

### Basic Format

The Accept header MUST be constructed as follows:

1. For each protocol supported by the target:
   - The protocol's MIME type and parameters MUST be specified.
   - For protobuf protocols, an encoding of "delimited" MUST be specified.
   - For PrometheusText1.0.0 and OpenMetricsText1.0.0, the escaping scheme parameter SHOULD be appended.
   - A quality value (q) parameter SHOULD be appended.
2. A catch-all `*/*` with the lowest quality value SHOULD be appended.

### Quality Values

Quality values SHOULD be assigned in descending order based on the protocol's position in the Accept header:

- First protocol: q=0.{n+1}
- Second protocol: q=0.{n}
- And so on, where n is the number of supported protocols

### Escaping Scheme

For PrometheusText1.0.0 and OpenMetricsText1.0.0 protocols, the Accept header SHOULD include an escaping scheme parameter:

`escaping=<scheme>`

Where `<scheme>` MUST be one of:

- `allow-utf-8`
- `underscores`
- `dots`
- `values`

See [Escaping Schemes](escaping_schemes.md) spec for details on how the escaping schemes function.

### Compression

The Accept-Encoding header SHOULD be set to:

- `gzip` if compression is enabled
- `identity` if compression is disabled

## Selection of Format

The scrape target SHOULD use the following process to select an appropriate Content-Type based on the list of protocols in the Accept header generated by Prometheus:

1. It MUST use the protocol in the Accept header with the highest weighting that is supported by Prometheus.
2. If no protocols are supported, the target MAY use a user-configured fallback scrape protocol.
3. If no fallback is specified, the target MUST use PrometheusText0.0.4 as a last resort.

## Content-Type Response

Targets SHOULD respond with a Content-Type header that matches one of the accepted formats. The Content-Type header MUST include:

1. The appropriate MIME type.
2. The version parameter.
3. For text formats version 1.0.0 and above, the escaping scheme parameter.

## Security Considerations

1. Targets MUST validate the Accept header to prevent potential injection attacks
2. The escaping scheme parameter MUST be validated to prevent protocol confusion
3. Content-Type headers MUST be properly sanitized to prevent MIME type confusion

## Examples

### Default Accept Header

```
Accept: application/openmetrics-text;version=1.0.0;escaping=allow-utf-8;q=0.5,application/openmetrics-text;version=0.0.1;q=0.4,text/plain;version=1.0.0;escaping=allow-utf-8;q=0.3,text/plain;version=0.0.4;q=0.2,*/*;q=0.1
```

### Protobuf-First Accept Header

```
Accept: application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=delimited;q=0.5,application/openmetrics-text;version=1.0.0;escaping=allow-utf-8;q=0.4,application/openmetrics-text;version=0.0.1;q=0.3,text/plain;version=1.0.0;escaping=allow-utf-8;q=0.2,text/plain;version=0.0.4;q=0.1,*/*;q=0.0
```

Source: https://github.com/prometheus/docs/blob/main//docs/instrumenting/content_negotiation.md
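The header construction rules above can be sketched in a few lines of code. This is a minimal illustration (the function name and structure are hypothetical, not the actual Prometheus implementation), assuming the stated rule that the first of n protocols gets q=0.{n+1} and that there are fewer than ten protocols:

```python
def build_accept_header(protocols):
    """Join protocol strings with descending q-values per the rules above.

    `protocols` is an ordered list of MIME type + parameter strings,
    highest preference first; a catch-all */* gets the lowest weight.
    Assumes fewer than 10 protocols so all q-values stay single-digit.
    """
    n = len(protocols)
    parts = []
    for i, proto in enumerate(protocols):
        parts.append(f"{proto};q=0.{n + 1 - i}")  # first gets q=0.{n+1}, descending
    parts.append("*/*;q=0.1")  # catch-all with the lowest quality value
    return "Accept: " + ",".join(parts)
```

With the four default text protocols this reproduces the "Default Accept Header" example shown above.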
Source: https://github.com/prometheus/docs/blob/main//docs/instrumenting/writing_clientlibs.md

This document covers what functionality and API Prometheus client libraries should offer, with the aim of consistency across libraries, making the easy use cases easy and avoiding offering functionality that may lead users down the wrong path. There are [10 languages already supported](/docs/instrumenting/clientlibs) at the time of writing, so we’ve gotten a good sense by now of how to write a client. These guidelines aim to help authors of new client libraries produce good libraries.

## Conventions

MUST/MUST NOT/SHOULD/SHOULD NOT/MAY have the meanings given in [https://www.ietf.org/rfc/rfc2119.txt](https://www.ietf.org/rfc/rfc2119.txt). In addition ENCOURAGED means that a feature is desirable for a library to have, but it’s okay if it’s not present. In other words, a nice to have.

Things to keep in mind:

* Take advantage of each language’s features.
* The common use cases should be easy.
* The correct way to do something should be the easy way.
* More complex use cases should be possible.

The common use cases are (in order):

* Counters without labels spread liberally around libraries/applications.
* Timing functions/blocks of code in Summaries/Histograms.
* Gauges to track current states of things (and their limits).
* Monitoring of batch jobs.

## Overall structure

Clients MUST be written to be callback based internally. Clients SHOULD generally follow the structure described here.

The key class is the Collector. This has a method (typically called ‘collect’) that returns zero or more metrics and their samples. Collectors get registered with a CollectorRegistry. Data is exposed by passing a CollectorRegistry to a class/method/function "bridge", which returns the metrics in a format Prometheus supports. Every time the CollectorRegistry is scraped it must callback to each of the Collectors’ collect method.

The interface most users interact with are the Counter, Gauge, Summary, and Histogram Collectors. These represent a single metric, and should cover the vast majority of use cases where a user is instrumenting their own code.

More advanced use cases (such as proxying from another monitoring/instrumentation system) require writing a custom Collector. Someone may also want to write a "bridge" that takes a CollectorRegistry and produces data in a format a different monitoring/instrumentation system understands, allowing users to only have to think about one instrumentation system.

CollectorRegistry SHOULD offer `register()`/`unregister()` functions, and a Collector SHOULD be allowed to be registered to multiple CollectorRegistrys. Client libraries MUST be thread safe. For non-OO languages such as C, client libraries should follow the spirit of this structure as much as is practical.

### Naming

Client libraries SHOULD follow function/method/class names mentioned in this document, keeping in mind the naming conventions of the language they’re working in. For example, `set_to_current_time()` is good for a method name in Python, but `SetToCurrentTime()` is better in Go and `setToCurrentTime()` is the convention in Java. Where names differ for technical reasons (e.g. not allowing function overloading), documentation/help strings SHOULD point users towards the other names. Libraries MUST NOT offer functions/methods/classes with the same or similar names to ones given here, but with different semantics.

## Metrics

The Counter, Gauge, Summary and Histogram [metric types](/docs/concepts/metric_types/) are the primary interface by users. Counter and Gauge MUST be part of the client library. At least one of Summary and Histogram MUST be offered.

These should be primarily used as file-static variables, that is, global variables defined in the same file as the code they’re instrumenting. The client library SHOULD enable this. The common use case is instrumenting a piece of code overall, not a piece of code in the context of one instance of an object. Users shouldn’t have to worry about plumbing their metrics throughout their code, the client library should do that for
them (and if it doesn’t, users will write a wrapper around the library to make it "easier" - which rarely tends to go well).

There MUST be a default CollectorRegistry, the standard metrics MUST by default implicitly register into it with no special work required by the user. There MUST be a way to have metrics not register to the default CollectorRegistry, for use in batch jobs and unittests. Custom collectors SHOULD also follow this.

Exactly how the metrics should be created varies by language. For some (Java, Go) a builder approach is best, whereas for others (Python) function arguments are rich enough to do it in one call.

For example in the Java Simpleclient we have:

```java
class YourClass {
  static final Counter requests = Counter.build()
      .name("requests_total")
      .help("Requests.")
      .register();
}
```

This will register requests with the default CollectorRegistry. By calling `build()` rather than `register()` the metric won’t be registered (handy for unittests), you can also pass in a CollectorRegistry to `register()` (handy for batch jobs).

### Counter

[Counter](/docs/concepts/metric_types/#counter) is a monotonically increasing counter. It MUST NOT allow the value to decrease, however it MAY be reset to 0 (such as by server restart).

A counter MUST have the following methods:

* `inc()`: Increment the counter by 1
* `inc(double v)`: Increment the counter by the given amount. MUST check that v >= 0.

A counter is ENCOURAGED to have:

A way to count exceptions thrown/raised in a given piece of code, and optionally only certain types of exceptions. This is count_exceptions in Python.

Counters MUST start at 0.
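As a rough illustration of these requirements (a sketch, not code from any real client library), a thread-safe Counter might look like:

```python
import threading

class Counter:
    """Minimal sketch of the Counter requirements above; the class shape is hypothetical."""

    def __init__(self, name, help):
        self._name = name
        self._help = help          # descriptions/help MUST be provided
        self._value = 0.0          # counters MUST start at 0
        self._lock = threading.Lock()  # client libraries MUST be thread safe

    def inc(self, v=1.0):
        if v < 0:
            raise ValueError("counters can only go up; MUST check that v >= 0")
        with self._lock:
            self._value += v

    def collect(self):
        """Collector callback: return this metric's samples."""
        return [(self._name, self._value)]
```

A real implementation would also register itself with the default CollectorRegistry and expose an exception-counting helper, as described above.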
### Gauge

[Gauge](/docs/concepts/metric_types/#gauge) represents a value that can go up and down.

A gauge MUST have the following methods:

* `inc()`: Increment the gauge by 1
* `inc(double v)`: Increment the gauge by the given amount
* `dec()`: Decrement the gauge by 1
* `dec(double v)`: Decrement the gauge by the given amount
* `set(double v)`: Set the gauge to the given value

Gauges MUST start at 0, you MAY offer a way for a given gauge to start at a different number.

A gauge SHOULD have the following methods:

* `set_to_current_time()`: Set the gauge to the current unixtime in seconds.

A gauge is ENCOURAGED to have:

A way to track in-progress requests in some piece of code/function. This is `track_inprogress` in Python.

A way to time a piece of code and set the gauge to its duration in seconds. This is useful for batch jobs. This is startTimer/setDuration in Java and the `time()` decorator/context manager in Python. This SHOULD match the pattern in Summary/Histogram (though `set()` rather than `observe()`).

### Summary

A [summary](/docs/concepts/metric_types/#summary) samples observations (usually things like request durations) over sliding windows of time and provides instantaneous insight into their distributions, frequencies, and sums.

A summary MUST NOT allow the user to set "quantile" as a label name, as this is used internally to designate summary quantiles. A summary is ENCOURAGED to offer quantiles as exports, though these can’t be aggregated and tend to be slow. A summary MUST allow not having quantiles, as just `_count`/`_sum` is quite useful and this MUST be the default.

A summary MUST have the following methods:

* `observe(double v)`: Observe the given amount

A summary SHOULD have the following methods:

Some way to time code for users in seconds. In Python this is
the `time()` decorator/context manager. In Java this is startTimer/observeDuration. Units other than seconds MUST NOT be offered (if a user wants something else, they can do it by hand). This should follow the same pattern as Gauge/Histogram.

Summary `_count`/`_sum` MUST start at 0.

### Histogram

[Histograms](/docs/concepts/metric_types/#histogram) allow aggregatable distributions of events, such as request latencies. This is at its core a counter per bucket.

A histogram MUST NOT allow `le` as a user-set label, as `le` is used internally to designate buckets.

A histogram MUST offer a way to manually choose the buckets. Ways to set buckets in a `linear(start, width, count)` and `exponential(start, factor, count)` fashion SHOULD be offered. Count MUST include the `+Inf` bucket.

A histogram SHOULD have the same default buckets as other client libraries. Buckets MUST NOT be changeable once the metric is created.

A histogram MUST have the following methods:

* `observe(double v)`: Observe the given amount

A histogram SHOULD have the following methods:

Some way to time code for users in seconds. In Python this is the `time()` decorator/context manager. In Java this is `startTimer`/`observeDuration`. Units other than seconds MUST NOT be offered (if a user wants something else, they can do it by hand). This should follow the same pattern as Gauge/Summary.

Histogram `_count`/`_sum` and the buckets MUST start at 0.

**Further metrics considerations**

Providing additional functionality in metrics beyond what’s documented above is ENCOURAGED where it makes sense for a given language. If there’s a common use case you can make simpler then go for it, as long as it won’t encourage undesirable behaviours (such as suboptimal metric/label layouts, or doing computation in the client).

### Labels

Labels are one of the [most powerful aspects](/docs/practices/instrumentation/#use-labels) of Prometheus, but [easily abused](/docs/practices/instrumentation/#do-not-overuse-labels). Accordingly client libraries must be very careful in how labels are offered to users.

Client libraries SHOULD NOT allow users to have different label names for the same metric for Gauge/Counter/Summary/Histogram or any other Collector offered by the library. Metrics from custom collectors should almost always have consistent label names. As there are still rare but valid use cases where this is not the case, client libraries should not verify this.

While labels are powerful, the majority of metrics will not have labels. Accordingly the API should allow for labels but not dominate it. A client library MUST allow for optionally specifying a list of label names at Gauge/Counter/Summary/Histogram creation time. A client library SHOULD support any number of label names. A client library MUST validate that label names meet the [documented requirements](/docs/concepts/data_model/#metric-names-and-labels).

The general way to provide access to a labeled dimension of a metric is via a `labels()` method that takes either a list of the label values or a map from label name to label value and returns a "Child". The usual `.inc()`/`.dec()`/`.observe()` etc. methods can then be called on the Child.

The Child returned by `labels()` SHOULD be cacheable by the user, to avoid having to look it up again - this matters in latency-critical code.

Metrics with labels SHOULD support a `remove()` method with the same signature as `labels()` that will remove a Child from the metric, no longer exporting it, and a `clear()` method that removes all Children from the metric. These invalidate caching of Children.
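The `labels()`/`remove()`/`clear()` pattern described above could be sketched as follows (class and method shapes hypothetical, not taken from a real client library):

```python
import threading

class _Child:
    """Per-label-values child; the usual inc()/observe() methods live here."""

    def __init__(self):
        self.value = 0.0

    def inc(self, v=1.0):
        self.value += v

class LabeledMetric:
    """Sketch of a metric with labels; Children are cached so that repeated
    labels() calls return the same object."""

    def __init__(self, name, help, labelnames):
        self._labelnames = tuple(labelnames)
        self._children = {}
        self._lock = threading.Lock()

    def labels(self, *labelvalues):
        if len(labelvalues) != len(self._labelnames):
            raise ValueError("expected one value per label name")
        with self._lock:
            return self._children.setdefault(tuple(labelvalues), _Child())

    def remove(self, *labelvalues):
        # Stop exporting this Child; invalidates any cached reference to it.
        with self._lock:
            self._children.pop(tuple(labelvalues), None)

    def clear(self):
        with self._lock:
            self._children.clear()
```

Because `labels()` returns the cached Child, callers in latency-critical code can hold on to the returned object instead of looking it up on every increment.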
There SHOULD be a way to initialize a given Child with the
default value, usually just calling `labels()`. Metrics without labels MUST always be initialized to avoid [problems with missing metrics](/docs/practices/instrumentation/#avoid-missing-metrics).

### Metric names

Metric names must follow the [specification](/docs/concepts/data_model/#metric-names-and-labels). As with label names, this MUST be met for uses of Gauge/Counter/Summary/Histogram and in any other Collector offered with the library. Many client libraries offer setting the name in three parts: `namespace_subsystem_name`, of which only the `name` is mandatory.

Dynamic/generated metric names or subparts of metric names MUST be discouraged, except when a custom Collector is proxying from other instrumentation/monitoring systems. Generated/dynamic metric names are a sign that you should be using labels instead.

### Metric description and help

Gauge/Counter/Summary/Histogram MUST require metric descriptions/help to be provided. Any custom Collectors provided with the client libraries MUST have descriptions/help on their metrics. It is suggested to make it a mandatory argument, but not to check that it’s of a certain length as if someone really doesn’t want to write docs we’re not going to convince them otherwise. Collectors offered with the library (and indeed everywhere we can within the ecosystem) SHOULD have good metric descriptions, to lead by example.

## Exposition

Clients MUST implement the text-based exposition format outlined in the [exposition formats](/docs/instrumenting/exposition_formats) documentation. Reproducible order of the exposed metrics is ENCOURAGED (especially for human readable formats) if it can be implemented without a significant resource cost.
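For illustration, a bare-bones text-format bridge might render metric families as follows. This is a hypothetical sketch; the real exposition format has additional rules around label value escaping, timestamps, and type-specific sample suffixes:

```python
def render_text(families):
    """Render (name, help, type, samples) tuples as Prometheus text lines.

    Each sample is a (labels-dict, value) pair. Sorting the labels gives
    the reproducible output ordering encouraged above.
    """
    lines = []
    for name, help_text, mtype, samples in families:
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} {mtype}")
        for labels, value in samples:
            if labels:
                pairs = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
                lines.append(f"{name}{{{pairs}}} {value}")
            else:
                lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"
```

A "bridge" like this would be handed a CollectorRegistry and serve the rendered string over HTTP.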
## Standard and runtime collectors

Client libraries SHOULD offer what they can of the Standard exports, documented below. These SHOULD be implemented as custom Collectors, and registered by default on the default CollectorRegistry. There SHOULD be a way to disable these, as there are some very niche use cases where they get in the way.

### Process metrics

These metrics have the prefix `process_`. If obtaining a necessary value is problematic or even impossible with the used language or runtime, client libraries SHOULD prefer leaving out the corresponding metric over exporting bogus, inaccurate, or special values (like `NaN`). All memory values in bytes, all times in unixtime/seconds.

| Metric name                         | Help string                                             | Unit             |
| ----------------------------------- | ------------------------------------------------------- | ---------------- |
| `process_cpu_seconds_total`         | Total user and system CPU time spent in seconds.        | seconds          |
| `process_open_fds`                  | Number of open file descriptors.                        | file descriptors |
| `process_max_fds`                   | Maximum number of open file descriptors.                | file descriptors |
| `process_virtual_memory_bytes`      | Virtual memory size in bytes.                           | bytes            |
| `process_virtual_memory_max_bytes`  | Maximum amount of virtual memory available in bytes.    | bytes            |
| `process_resident_memory_bytes`     | Resident memory size in bytes.                          | bytes            |
| `process_heap_bytes`                | Process heap size in bytes.                             | bytes            |
| `process_start_time_seconds`        | Start time of the process since unix epoch in seconds.  | seconds          |
| `process_threads`                   | Number of OS threads in the process.                    | threads          |

### Runtime metrics

In addition, client libraries are ENCOURAGED to also offer whatever makes sense in terms of metrics for their language’s runtime (e.g. garbage collection stats), with an appropriate prefix such as `go_`, `hotspot_` etc.

## Unit tests

Client libraries SHOULD have unit tests covering the core instrumentation library and exposition. Client libraries are ENCOURAGED to offer ways that make it easy for users to unit-test their use of the instrumentation code. For example, the `CollectorRegistry.get_sample_value` in Python.

## Packaging and dependencies

Ideally, a client library
can be included in any application to add some instrumentation without breaking the application.

Accordingly, caution is advised when adding dependencies to the client library. For example, if you add a library that uses a Prometheus client that requires version x.y of a library but the application uses x.z elsewhere, will that have an adverse impact on the application?

It is suggested that where this may arise, that the core instrumentation is separated from the bridges/exposition of metrics in a given format. For example, the Java simpleclient `simpleclient` module has no dependencies, and the `simpleclient_servlet` has the HTTP bits.

## Performance considerations

As client libraries must be thread-safe, some form of concurrency control is required and consideration must be given to performance on multi-core machines and applications.

In our experience the least performant is mutexes. Processor atomic instructions tend to be in the middle, and generally acceptable. Approaches that avoid different CPUs mutating the same bit of RAM work best, such as the DoubleAdder in Java’s simpleclient. There is a memory cost though.

As noted above, the result of `labels()` should be cacheable. The concurrent maps that tend to back metric with labels tend to be relatively slow. Special-casing metrics without labels to avoid `labels()`-like lookups can help a lot.

Metrics SHOULD avoid blocking when they are being incremented/decremented/set etc. as it’s undesirable for the whole application to be held up while a scrape is ongoing.

Having benchmarks of the main instrumentation operations, including labels, is ENCOURAGED.

Resource consumption, particularly RAM, should be kept in mind when performing exposition. Consider reducing the memory footprint by streaming results, and potentially having a limit on the number of concurrent scrapes.
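To make the unit-testing guidance above concrete, a `get_sample_value`-style helper could look like the following sketch. The registry and collector shapes here are hypothetical (they are not the real Python client's API), but they show how such a helper lets users assert directly on metric values:

```python
class CollectorRegistry:
    """Minimal registry sketch: collectors expose collect() -> [(name, labels, value)]."""

    def __init__(self):
        self._collectors = []

    def register(self, collector):
        self._collectors.append(collector)

    def unregister(self, collector):
        self._collectors.remove(collector)

    def get_sample_value(self, name, labels=None):
        """Return the current value of a sample, or None if absent - handy in unit tests."""
        labels = labels or {}
        for collector in self._collectors:
            for sample_name, sample_labels, value in collector.collect():
                if sample_name == name and sample_labels == labels:
                    return value
        return None
```

A user's test can then register the code under test against a fresh (non-default) registry and assert on `get_sample_value(...)` before and after exercising the instrumented path.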
Source: https://github.com/prometheus/docs/blob/main//docs/instrumenting/writing_exporters.md

If you are instrumenting your own code, the [general rules of how to instrument code with a Prometheus client library](/docs/practices/instrumentation/) should be followed. When taking metrics from another monitoring or instrumentation system, things tend not to be so black and white.

This document contains things you should consider when writing an exporter or custom collector. The theory covered will also be of interest to those doing direct instrumentation. If you are writing an exporter and are unclear on anything here, please contact us on IRC (#prometheus on libera) or the [mailing list](/community).

## Maintainability and purity

The main decision you need to make when writing an exporter is how much work you’re willing to put in to get perfect metrics out of it. If the system in question has only a handful of metrics that rarely change, then getting everything perfect is an easy choice, a good example of this is the [HAProxy exporter](https://github.com/prometheus/haproxy_exporter). On the other hand, if you try to get things perfect when the system has hundreds of metrics that change frequently with new versions, then you’ve signed yourself up for a lot of ongoing work. The [MySQL exporter](https://github.com/prometheus/mysqld_exporter) is on this end of the spectrum.

The [node exporter](https://github.com/prometheus/node_exporter) is a mix of these, with complexity varying by module. For example, the `mdadm` collector hand-parses a file and exposes metrics created specifically for that collector, so we may as well get the metrics right. For the `meminfo` collector the results vary across kernel versions so we end up doing just enough of a transform to create valid metrics.

## Configuration

When working with applications, you should aim for an exporter that requires no custom configuration by the user beyond telling it where the application is. You may also need to offer the ability to filter out certain metrics if they may be too granular and expensive on large setups, for example the [HAProxy exporter](https://github.com/prometheus/haproxy_exporter) allows filtering of per-server stats. Similarly, there may be expensive metrics that are disabled by default.

When working with other monitoring systems, frameworks and protocols you will often need to provide additional configuration or customization to generate metrics suitable for Prometheus. In the best case scenario, a monitoring system has a similar enough data model to Prometheus that you can automatically determine how to transform metrics. This is the case for [Cloudwatch](https://github.com/prometheus/cloudwatch_exporter), [SNMP](https://github.com/prometheus/snmp_exporter) and [collectd](https://github.com/prometheus/collectd_exporter). At most, we need the ability to let the user select which metrics they want to pull out.

In other cases, metrics from the system are completely non-standard, depending on the usage of the system and the underlying application. In that case the user has to tell us how to transform the metrics. The [JMX exporter](https://github.com/prometheus/jmx_exporter) is the worst offender here, with the [Graphite](https://github.com/prometheus/graphite_exporter) and [StatsD](https://github.com/prometheus/statsd_exporter) exporters also requiring configuration to extract labels.

Ensuring the exporter works out of the box without configuration, and providing a selection of example configurations for transformation if required, is advised. YAML is the standard Prometheus configuration format, all configuration should use YAML by default.

## Metrics

### Naming

Follow the [best practices on metric naming](/docs/practices/naming). Generally metric names should allow someone who is familiar with Prometheus but not a particular system to make a good guess as to what a metric means. A metric named `http_requests_total` is not extremely useful - are these being measured as they come in, in some filter or when they get to the user’s code? And `requests_total` is even worse, what type of requests?

With direct instrumentation, a given metric should exist within exactly one file. Accordingly, within exporters and collectors, a metric should apply to exactly
these being measured as they come in, in some filter or when they get to the user’s code? And `requests\_total` is even worse, what type of requests? With direct instrumentation, a given metric should exist within exactly one file. Accordingly, within exporters and collectors, a metric should apply to exactly one subsystem and be named accordingly. Metric names should never be procedurally generated, except when writing a custom collector or exporter. Metric names for applications should generally be prefixed by the exporter name, e.g. `haproxy\_up`. Metrics must use base units (e.g. seconds, bytes) and leave converting them to something more readable to graphing tools. No matter what units you end up using, the units in the metric name must match the units in use. Similarly, expose ratios, not percentages. Even better, specify a counter for each of the two components of the ratio. Metric names should not include the labels that they’re exported with, e.g. `by\_type`, as that won’t make sense if the label is aggregated away. The one exception is when you’re exporting the same data with different labels via multiple metrics, in which case that’s usually the sanest way to distinguish them. For direct instrumentation, this should only come up when exporting a single metric with all the labels would have too high a cardinality. Prometheus metrics and label names are written in `snake\_case`. Converting `camelCase` to `snake\_case` is desirable, though doing so automatically doesn’t always produce nice results for things like `myTCPExample` or `isNaN` so sometimes it’s best to leave them as-is. Exposed metrics should not contain colons, these are reserved for user defined recording rules to use when aggregating. Only `[a-zA-Z0-9:\_]` are valid in metric names. The `\_sum`, `\_count`, `\_bucket` and `\_total` suffixes are used by Summaries, Histograms and Counters. Unless you’re producing one of those, avoid these suffixes. 
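The `camelCase`-to-`snake_case` caveat above can be seen with a short regex sketch (hypothetical helpers, not part of any client library); note how a naive rule mangles `myTCPExample`:

```python
import re

def naive_snake(name: str) -> str:
    # Insert an underscore before every uppercase letter, then lowercase.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def smarter_snake(name: str) -> str:
    # Treat runs of capitals (acronyms) as a single word.
    name = re.sub(r"([A-Z]+)([A-Z][a-z])", r"\1_\2", name)
    name = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", name)
    return name.lower()

print(naive_snake("requestCount"))    # request_count
print(naive_snake("myTCPExample"))    # my_t_c_p_example  <- not nice
print(smarter_snake("myTCPExample"))  # my_tcp_example
```

Even the smarter variant cannot handle every case, which is why leaving awkward names as-is is sometimes the better option.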
`_total` is a convention for counters; you should use it if you’re using the COUNTER type.

The `process_` and `scrape_` prefixes are reserved. It’s okay to add your own prefix on to these if they follow matching semantics. For example, Prometheus has `scrape_duration_seconds` for how long a scrape took; it's good practice to also have an exporter-centric metric, e.g. `jmx_scrape_duration_seconds`, saying how long the specific exporter took to do its thing. For process stats where you have access to the PID, both Go and Python offer collectors that’ll handle this for you. A good example of this is the [HAProxy exporter](https://github.com/prometheus/haproxy_exporter).

When you have a successful request count and a failed request count, the best way to expose this is as one metric for total requests and another metric for failed requests. This makes it easy to calculate the failure ratio. Do not use one metric with a failed or success label. Similarly, with hit or miss for caches, it’s better to have one metric for total and another for hits.

Consider the likelihood that someone using monitoring will do a code or web search for the metric name. If the names are very well-established and unlikely to be used outside of the realm of people used to those names, for example SNMP and network engineers, then leaving them as-is may be a good idea. This logic doesn’t apply for all exporters, for example the MySQL exporter metrics may be used by a variety of people, not just DBAs. A `HELP` string with the original name can provide most of the same benefits as using the original names.

### Labels

Read the [general advice](/docs/practices/instrumentation/#things-to-watch-out-for) on labels.

Avoid `type` as a label name; it’s too generic and often meaningless. You should also try where possible to avoid names that are likely to clash with target labels, such as `region`, `zone`, `cluster`, `availability_zone`, `az`, `datacenter`, `dc`, `owner`, `customer`, `stage`, `service`, `environment` and `env`. If, however, that’s what the application calls some resource, it’s best not to cause confusion by renaming it.

Avoid the temptation to put things into one metric just because they share a prefix. Unless you’re sure something makes sense as one metric, multiple metrics is safer.

The label `le` has special meaning for Histograms, and `quantile` for Summaries. Avoid these labels generally.

Read/write and send/receive are best as separate metrics, rather than as a label. This is usually because you care about only one of them at a time, and it is easier to use them that way.

The rule of thumb is that one metric should make sense when summed or averaged. There is one other case that comes up with exporters, and that’s where the data is fundamentally tabular and doing otherwise would require users to do regexes on metric names to be usable. Consider the voltage sensors on your motherboard: while doing math across them is meaningless, it makes sense to have them in one metric rather than having one metric per sensor. All values within a metric should (almost) always have the same unit; for example, consider if fan speeds were mixed in with the voltages, and you had no way to automatically separate them.
Don’t do this:

```
my_metric{label="a"} 1
my_metric{label="b"} 6
my_metric{label="total"} 7
```

or this:

```
my_metric{label="a"} 1
my_metric{label="b"} 6
my_metric{} 7
```

The former breaks for people who do a `sum()` over your metric, and the latter breaks sum and is quite difficult to work with. Some client libraries, for example Go, will actively try to stop you doing the latter in a custom collector, and all client libraries should stop you from doing the latter with direct instrumentation. Never do either of these; rely on Prometheus aggregation instead.

If your monitoring exposes a total like this, drop the total. If you have to keep it around for some reason, for example the total includes things not counted individually, use different metric names.

Instrumentation labels should be minimal: every extra label is one more that users need to consider when writing their PromQL. Accordingly, avoid having instrumentation labels which could be removed without affecting the uniqueness of the time series. Additional information around a metric can be added via an info metric; for an example, see below how to handle version numbers.

However, there are cases where it is expected that virtually all users of a metric will want the additional information. If so, adding a non-unique label, rather than an info metric, is the right solution. For example the [mysqld_exporter](https://github.com/prometheus/mysqld_exporter)'s `mysqld_perf_schema_events_statements_total`'s `digest` label is a hash of the full query pattern and is sufficient for uniqueness. However, it is of little use without the human-readable `digest_text` label, which for long queries will contain only the start of the query pattern and is thus not unique. Thus we end up with both the `digest_text` label for humans and the `digest` label for uniqueness.

### Target labels, not static scraped labels

If you ever find yourself wanting to apply the same label to all of your metrics, stop.
There’s generally two cases where this comes up.

The first is for some label it would be useful to have on the metrics, such as the version number of the software. Instead, use the approach described at [https://www.robustperception.io/how-to-have-labels-for-machine-roles/](https://www.robustperception.io/how-to-have-labels-for-machine-roles/).

The second case is when a label is really a target label. These are things like region, cluster names, and so on, that come from your infrastructure setup rather than the application itself. It’s not for an application to say where it fits in your label taxonomy; that’s for the person running the Prometheus server to configure, and different people monitoring the same application may give it different names. Accordingly, these labels belong up in the scrape configs of Prometheus via whatever service discovery you’re using. It’s okay to apply the concept of machine roles here as well, as it’s likely useful information for at least some people scraping it.

### Types

You should try to match up the types of your metrics to Prometheus types. This usually means counters and gauges. The `_count` and `_sum` of summaries are also relatively common, and on occasion you’ll see quantiles. Histograms are rare; if you come across one, remember that the exposition format exposes cumulative values.

Often it won’t be obvious what the type of metric is, especially if you’re automatically processing a set of metrics. In general `UNTYPED` is a safe default.

Counters can’t go down, so if you have a counter type coming from another instrumentation system that can be decremented, for example Dropwizard metrics, then it's not a counter, it's a gauge.
`UNTYPED` is probably the best type to use there, as `GAUGE` would be misleading if it were being used as a counter.

### Help strings

When you’re transforming metrics it’s useful for users to be able to track back to what the original was, and what rules were in play that caused that transformation. Putting in the name of the collector or exporter, the ID of any rule that was applied, and the name and details of the original metric into the help string will greatly aid users.

Prometheus doesn’t like one metric having different help strings. If you’re making one metric from many others, choose one of them to put in the help string.

For examples of this, the SNMP exporter uses the OID and the JMX exporter puts in a sample mBean name. The [HAProxy exporter](https://github.com/prometheus/haproxy_exporter) has hand-written strings. The [node exporter](https://github.com/prometheus/node_exporter) also has a wide variety of examples.

### Drop less useful statistics

Some instrumentation systems expose 1m, 5m, 15m rates, average rates since application start (these are called `mean` in Dropwizard metrics, for example) in addition to minimums, maximums and standard deviations.

These should all be dropped, as they’re not very useful and add clutter. Prometheus can calculate rates itself, and usually more accurately, as the averages exposed are usually exponentially decaying. You don’t know what time the min or max were calculated over, and the standard deviation is statistically useless; you can always expose sum of squares, `_sum` and `_count` if you ever need to calculate it.

Quantiles have related issues; you may choose to drop them or put them in a Summary.

### Dotted strings

Many monitoring systems don’t have labels, instead doing things like `my.class.path.mymetric.labelvalue1.labelvalue2.labelvalue3`.
The [Graphite](https://github.com/prometheus/graphite_exporter) and [StatsD](https://github.com/prometheus/statsd_exporter) exporters share a way of transforming these with a small configuration language. Other exporters should implement the same. The transformation is currently implemented only in Go, and would benefit from being factored out into a separate library.

## Collectors

When implementing the collector for your exporter, you should never use the usual direct instrumentation approach and then update the metrics on each scrape.

Rather, create new metrics each time. In Go this is done with [MustNewConstMetric](https://godoc.org/github.com/prometheus/client_golang/prometheus#MustNewConstMetric) in your `Collect()` method. For Python see the [custom collectors documentation](https://prometheus.github.io/client_python/collector/custom/), and for Java generate a `List` of metric family samples in your collect method; see [StandardExports.java](https://github.com/prometheus/client_java/blob/master/simpleclient_hotspot/src/main/java/io/prometheus/client/hotspot/StandardExports.java) for an example.

The reason for this is two-fold. Firstly, two scrapes could happen at the same time, and direct instrumentation uses what are effectively file-level global variables, so you’ll get race conditions. Secondly, if a label value disappears, it’ll still be exported.

Instrumenting your exporter itself via direct instrumentation is fine, e.g. total bytes transferred or calls performed by the exporter across all scrapes.
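The create-fresh-metrics-on-every-scrape rule above can be sketched in plain Python (a toy stand-in for a real client library's custom-collector API; `fetch_stats` is a hypothetical helper):

```python
def fetch_stats():
    # Hypothetical stand-in for querying the monitored application.
    return {"frontend": 12.0, "backend": 7.0}

class HaproxyStyleCollector:
    """Builds a fresh list of samples on every collect() call.

    No state is kept between scrapes, so concurrent scrapes cannot
    race on shared variables, and series whose label values have
    disappeared from the application are not exported again.
    """

    def collect(self):
        samples = []
        for proxy, sessions in fetch_stats().items():
            samples.append(("haproxy_current_sessions", {"proxy": proxy}, sessions))
        return samples

collector = HaproxyStyleCollector()
print(collector.collect())
```

A real exporter would return proper metric-family objects from its client library instead of tuples, but the shape of the pattern is the same.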
For exporters such as the [blackbox exporter](https://github.com/prometheus/blackbox_exporter) and [SNMP exporter](https://github.com/prometheus/snmp_exporter), which aren’t tied to a single target, such exporter-level metrics should only be exposed on a vanilla `/metrics` call, not on a scrape of a particular target.

### Metrics about the scrape itself

Sometimes you’d like to export metrics that are about the scrape, like how long it took or how many records you processed. These should be exposed as gauges, as they’re about an event (the scrape), and the metric name should be prefixed by the exporter name, for example `jmx_scrape_duration_seconds`. Usually the `_exporter` is excluded, and if the exporter also makes sense to use as just a collector, then definitely exclude it.

Other scrape "meta" metrics should be avoided, for example a counter for the number of scrapes, or a histogram of the scrape duration. Having the exporter track these metrics duplicates the [automatically generated metrics](/docs/concepts/jobs_instances/#automatically-generated-labels-and-time-series) of Prometheus itself, and adds to the storage cost of every exporter instance.

### Machine and process metrics

Many systems, for example Elasticsearch, expose machine metrics such as CPU, memory and filesystem information. As the [node exporter](https://github.com/prometheus/node_exporter) provides these in the Prometheus ecosystem, such metrics should be dropped.

In the Java world, many instrumentation frameworks expose process-level and JVM-level stats such as CPU and GC. The Java client and JMX exporter already include these in the preferred form via [DefaultExports.java](https://github.com/prometheus/client_java/blob/master/simpleclient_hotspot/src/main/java/io/prometheus/client/hotspot/DefaultExports.java), so these should also be dropped. Similarly with other languages and frameworks.
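The scrape-duration gauge described above can be sketched as follows (plain Python; `scrape_target` and the metric names are hypothetical, and a real exporter would use its client library's gauge type):

```python
import time

def scrape_target():
    # Hypothetical stand-in for the real work of talking to the application.
    time.sleep(0.01)
    return {"jmx_heap_used_bytes": 1234567.0}

def collect():
    start = time.monotonic()
    metrics = scrape_target()
    # Expose how long this exporter's own scrape took, as a gauge,
    # with the metric name prefixed by the exporter name.
    metrics["jmx_scrape_duration_seconds"] = time.monotonic() - start
    return metrics

result = collect()
print(result["jmx_scrape_duration_seconds"] > 0)
```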
## Deployment

Each exporter should monitor exactly one instance application, preferably sitting right beside it on the same machine. That means for every HAProxy you run, you run a `haproxy_exporter` process. For every machine with a Mesos worker, you run the [Mesos exporter](https://github.com/mesosphere/mesos_exporter) on it, and another one for the master, if a machine has both.

The theory behind this is that for direct instrumentation this is what you’d be doing, and we’re trying to get as close to that as we can in other layouts. This means that all service discovery is done in Prometheus, not in exporters. This also has the benefit that Prometheus has the target information it needs to allow users to probe your service with the [blackbox exporter](https://github.com/prometheus/blackbox_exporter).

There are two exceptions:

The first is where running beside the application you are monitoring is completely nonsensical. The SNMP, blackbox and IPMI exporters are the main examples of this. The IPMI and SNMP exporters as the devices are often black boxes that it’s impossible to run code on (though if you could run a node exporter on them instead, that’d be better), and the blackbox exporter where you’re monitoring something like a DNS name, where there’s also nothing to run on. In this case, Prometheus should still do service discovery, and pass on the target to be scraped. See the blackbox and SNMP exporters for examples. Note that it is only currently possible to write this type of exporter with the Go, Python and Java client libraries.

The second exception is where you’re pulling some stats out of a random instance of a system and don’t care which one you’re talking to. Consider a set of MySQL replicas you wanted to run some business queries against the data to then export. Having an exporter that uses your usual load balancing approach to talk to one replica is the sanest approach.

This doesn’t apply when you’re monitoring a system with master-election; in that case you should monitor each instance individually and deal with the "masterness" in Prometheus. This is as there isn’t always exactly one master, and changing what a target is underneath Prometheus’s feet will cause oddities.

### Scheduling

Metrics should only be pulled from the application when Prometheus scrapes them; exporters should not perform scrapes based on their own timers. That is, all scrapes should be synchronous.

Accordingly, you should not set timestamps on the metrics you expose; let Prometheus take care of that. If you think you need timestamps, then you probably need the [Pushgateway](https://prometheus.io/docs/instrumenting/pushing/) instead.
If a metric is particularly expensive to retrieve, i.e. takes more than a minute, it is acceptable to cache it. This should be noted in the `HELP` string.

The default scrape timeout for Prometheus is 10 seconds. If your exporter can be expected to exceed this, you should explicitly call this out in your user documentation.

### Pushes

Some applications and monitoring systems only push metrics, for example StatsD, Graphite and collectd.

There are two considerations here.

Firstly, when do you expire metrics? Collectd and things talking to Graphite both export regularly, and when they stop we want to stop exposing the metrics. Collectd includes an expiry time so we use that; Graphite doesn’t, so it is a flag on the exporter.

StatsD is a bit different, as it is dealing with events rather than metrics. The best model is to run one exporter beside each application and restart them when the application restarts so that the state is cleared.

Secondly, these sorts of systems tend to allow your users to send either deltas or raw counters. You should rely on the raw counters as far as possible, as that’s the general Prometheus model.

For service-level metrics, e.g. service-level batch jobs, you should have your exporter push into the Pushgateway and exit after the event rather than handling the state yourself. For instance-level batch metrics, there is no clear pattern yet. The options are either to abuse the node exporter’s textfile collector, rely on in-memory state (probably best if you don’t need to persist over a reboot) or implement similar functionality to the textfile collector.

### Failed scrapes

There are currently two patterns for failed scrapes, where the application you’re talking to doesn’t respond or has other problems.

The first is to return a 5xx error. The second is to have a `myexporter_up` variable, e.g. `haproxy_up`, that has a value of 0 or 1 depending on whether the scrape worked.

The latter is better where there’s still some useful metrics you can get even with a failed scrape, such as the HAProxy exporter providing process stats. The former is a tad easier for users to deal with, as [`up` works in the usual way](/docs/concepts/jobs_instances/#automatically-generated-labels-and-time-series), although you can’t distinguish between the exporter being down and the application being down.

### Landing page

It’s nicer for users if visiting `http://yourexporter/` has a simple HTML page with the name of the exporter, and a link to the `/metrics` page.

### Port numbers

A user may have many exporters and Prometheus components on the same machine, so to make that easier each has a unique port number. [https://github.com/prometheus/prometheus/wiki/Default-port-allocations](https://github.com/prometheus/prometheus/wiki/Default-port-allocations) is where we track them; this is publicly editable.

Feel free to grab the next free port number when developing your exporter, preferably before publicly announcing it. If you’re not ready to release yet, putting your username and WIP is fine.

This is a registry to make our users’ lives a little easier, not a commitment to develop particular exporters. For exporters for internal applications, we recommend using ports outside of the range of default port allocations.
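Returning to the failed-scrapes discussion above, the `haproxy_up` pattern can be sketched as a plain-Python exposition builder (hypothetical helper names; a real exporter would use a client library):

```python
def render_exposition(scrape_ok: bool, stats: dict) -> str:
    # Always expose whatever we could gather, plus a 0/1 "up" gauge
    # reporting whether the scrape of the application worked.
    lines = ["# TYPE haproxy_up gauge",
             f"haproxy_up {1 if scrape_ok else 0}"]
    for name, value in sorted(stats.items()):
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

# Failed scrape: process stats are still exposed, haproxy_up is 0.
print(render_exposition(False, {"haproxy_exporter_total_scrapes": 42}))
```

This is why the `myexporter_up` approach wins when partial data is still useful: the scrape succeeds from Prometheus's point of view, and the 0/1 gauge carries the failure signal.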
## Announcing

Once you’re ready to announce your exporter to the world, email the mailing list and send a PR to add it to [the list of available exporters](/docs/instrumenting/exporters/) by editing [this GitHub repository file](https://github.com/prometheus/docs/blob/main/docs/instrumenting/exporters.md).

---

*Source: https://github.com/prometheus/docs/blob/main/docs/instrumenting/escaping_schemes.md*

## Abstract

This document specifies the different escaping schemes used by Prometheus during generation of text exposition for metric and label names that contain characters outside the legacy character set. These schemes are negotiated during scraping via the `escaping` parameter in the Accept and Content-Type headers.

## Introduction

Prometheus supports multiple escaping schemes to handle metric and label names in text exposition that contain characters outside the legacy character set (`a-zA-Z0-9_:`). The escaping scheme is negotiated during scraping and affects how metric producers should format their metric names.

## Escaping Schemes

### No Escaping (allow-utf-8)

**Header Value**: `escaping=allow-utf-8`

**Behavior**:

- Metric and label names MUST be valid UTF-8 strings.
- When names appear inside double quotes in the exposition format, `\`, `\n`, and `"` MUST be escaped with a backslash.
- When names appear unquoted in the exposition format, `\` and `\n` MUST be escaped with a backslash.
- This scheme MUST only be used when both the producer and consumer support UTF-8 names.

### Underscore Escaping (underscores)

**Header Value**: `escaping=underscores`

**Behavior**:

- Any character that is not in the legacy character set (`a-zA-Z0-9_:`) MUST be replaced with an underscore.
- The first character MUST be either a letter, underscore, or colon.
- Subsequent characters MUST be either letters, numbers, underscores, or colons.
- Example: `metric.name/with/slashes` becomes `metric_name_with_slashes`.

### Dots Escaping (dots)

**Header Value**: `escaping=dots`

**Behavior**:

- Dots (.) MUST be replaced with `_dot_`.
- Existing underscores MUST be replaced with double underscores (`__`).
- Other non-legacy characters MUST be replaced with single underscores.
- The first character MUST be either a letter, underscore, or colon.
- Subsequent characters MUST be either letters, numbers, underscores, or colons.
- Example: `metric.name.with.dots` becomes `metric_dot_name_dot_with_dot_dots`.

### Value Encoding Escaping (values)

**Header Value**: `escaping=values`

**Behavior**:

- The name MUST be prefixed with `U__`.
- Each character that is not part of the legacy character set (`a-zA-Z0-9_:`) MUST be replaced with its Unicode code point in hexadecimal, surrounded by underscores.
- Single underscores MUST be replaced with double underscores.
- Example: `metric.name` becomes `U__metric_2E_name` (where 2E is the hex Unicode code point for '.').

## Default Behavior

If no escaping scheme is specified in the Accept header, `underscores` escaping SHOULD be used.

## Security Considerations

1. Targets MUST validate input names before applying escaping.
2. The escaping scheme MUST be validated to prevent injection attacks.
3. The `allow-utf-8` scheme MUST only be used when both producer and consumer support UTF-8 names.
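A minimal sketch of the `underscores` and `values` schemes specified above (hypothetical helper names, not the reference implementation; the first-character rule is ignored for brevity):

```python
import re

LEGACY = re.compile(r"[a-zA-Z0-9_:]")

def escape_underscores(name: str) -> str:
    # Replace every character outside the legacy set with an underscore.
    return "".join(c if LEGACY.match(c) else "_" for c in name)

def escape_values(name: str) -> str:
    # U__ prefix; literal underscores are doubled, and other non-legacy
    # characters become their hex Unicode code point between underscores.
    out = ["U__"]
    for c in name:
        if c == "_":
            out.append("__")
        elif LEGACY.match(c):
            out.append(c)
        else:
            out.append(f"_{ord(c):X}_")
    return "".join(out)

print(escape_underscores("metric.name/with/slashes"))  # metric_name_with_slashes
print(escape_values("metric.name"))                    # U__metric_2E_name
```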
---

*Source: https://github.com/prometheus/docs/blob/main/docs/instrumenting/exposition_formats.md*

Metrics can be exposed to Prometheus using a simple [text-based](#text-based-format) exposition format. There are various [client libraries](/docs/instrumenting/clientlibs/) that implement this format for you. If your preferred language doesn't have a client library you can [create your own](/docs/instrumenting/writing_clientlibs/).

## Text-based format

As of Prometheus version 2.0, all processes that expose metrics to Prometheus need to use a text-based format. In this section you can find some [basic information](#basic-info) about this format as well as a more [detailed breakdown](#text-format-details) of the format.

### Basic info

| Aspect | Description |
|--------|-------------|
| **Inception** | April 2014 |
| **Supported in** | Prometheus version `>=0.4.0` |
| **Transmission** | HTTP |
| **Encoding** | UTF-8, `\n` line endings |
| **HTTP `Content-Type`** | `text/plain; version=0.0.4` (A missing `version` value will lead to a fall-back to the most recent text format version.) |
| **Optional HTTP `Content-Encoding`** | `gzip` |
| **Advantages** | Human-readable; easy to assemble, especially for minimalistic cases (no nesting required); readable line by line (with the exception of type hints and docstrings) |
| **Limitations** | Verbose; types and docstrings are not an integral part of the syntax, meaning little-to-nonexistent metric contract validation; parsing cost |
| **Supported metric primitives** | Counter, Gauge, Histogram, Summary, Untyped |

### Text format details

Prometheus' text-based format is line oriented. Lines are separated by a line feed character (`\n`). The last line must end with a line feed character. Empty lines are ignored.

#### Line format

Within a line, tokens can be separated by any number of blanks and/or tabs (and must be separated by at least one if they would otherwise merge with the previous token). Leading and trailing whitespace is ignored.
#### Comments, help text, and type information Lines with a `#` as the first non-whitespace character are comments. They are ignored unless the first token after `#` is either `HELP` or `TYPE`. Those lines are treated as follows: If the token is `HELP`, at least one more token is expected, which is the metric name. All remaining tokens are considered the docstring for that metric name. `HELP` lines may contain any sequence of UTF-8 characters (after the metric name), but the backslash and the line feed characters have to be escaped as `\\` and `\n`, respectively. Only one `HELP` line may exist for any given metric name. If the token is `TYPE`, exactly two more tokens are expected. The first is the metric name, and the second is either `counter`, `gauge`, `histogram`, `summary`, or `untyped`, defining the type for the metric of that name. Only one `TYPE` line may exist for a given metric name. The `TYPE` line for a metric name must appear before the first sample is reported for that metric name. If there is no `TYPE` line for a metric name, the type is set to `untyped`. Metric names not corresponding to the legacy Prometheus metric name character set must be quoted and escaped. The remaining lines describe samples (one per line) using the following syntax ([EBNF](https://en.wikipedia.org/wiki/Extended\_Backus%E2%80%93Naur\_form)): ``` metric\_name\_or\_labels value [ timestamp ] metric\_name\_or\_labels = metric\_name [ "{" labels "}" ] | "{" quoted\_metric\_name [ "," labels ] "}" metric\_name = identifier quoted\_metric\_name = `"` escaped\_string `"` labels = [ label\_pairs ] label\_pairs = label\_pair { "," label\_pair } [ "," ] label\_pair = label\_name "=" `"` escaped\_string `"` label\_name = identifier | `"` escaped\_string `"` ``` In the sample syntax: \* `identifier` carries the usual Prometheus expression language restrictions. \* `escaped\_string` consists of any UTF-8 characters, but backslash, double-quote, and line feed must be escaped. 
\* When | https://github.com/prometheus/docs/blob/main//docs/instrumenting/exposition_formats.md | main | prometheus | [
* When `metric_name` is quoted with double quotes, it appears inside the braces instead of outside.
* `label_name` may be optionally enclosed in double quotes.
* Metric and label names not corresponding to the usual Prometheus expression language restrictions must use the quoted syntaxes.
* `label_value` can be any sequence of UTF-8 characters, but the backslash (`\`), double-quote (`"`), and line feed (`\n`) characters have to be escaped as `\\`, `\"`, and `\n`, respectively.
* `value` is a float represented as required by Go's [`ParseFloat()`](https://golang.org/pkg/strconv/#ParseFloat) function. In addition to standard numerical values, `NaN`, `+Inf`, and `-Inf` are valid values representing not a number, positive infinity, and negative infinity, respectively.
* The `timestamp` is an `int64` (milliseconds since epoch, i.e. 1970-01-01 00:00:00 UTC, excluding leap seconds), represented as required by Go's [`ParseInt()`](https://golang.org/pkg/strconv/#ParseInt) function.

#### Grouping and sorting

All lines for a given metric must be provided as one single group, with the optional `HELP` and `TYPE` lines first (in no particular order). Beyond that, reproducible sorting in repeated expositions is preferred but not required, i.e. do not sort if the computational cost is prohibitive. Each line must have a unique combination of a metric name and labels. Otherwise, the ingestion behavior is undefined.

#### Histograms and summaries

The `histogram` and `summary` types are difficult to represent in the text format. The following conventions apply:

* The sample sum for a summary or histogram named `x` is given as a separate sample named `x_sum`.
* The sample count for a summary or histogram named `x` is given as a separate sample named `x_count`.
* Each quantile of a summary named `x` is given as a separate sample line with the same name `x` and a label `{quantile="y"}`.
* Each bucket count of a histogram named `x` is given as a separate sample line with the name `x_bucket` and a label `{le="y"}` (where `y` is the upper bound of the bucket).
* A histogram _must_ have a bucket with `{le="+Inf"}`. Its value _must_ be identical to the value of `x_count`.
* The buckets of a histogram and the quantiles of a summary must appear in increasing numerical order of their label values (for the `le` or the `quantile` label, respectively).

### Text format example

Below is an example of a full-fledged Prometheus metric exposition, including comments, `HELP` and `TYPE` expressions, a histogram, a summary, character escaping examples, and more.

```
# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027 1395066363000
http_requests_total{method="post",code="400"} 3 1395066363000

# Escaping in label values:
msdos_file_access_time_seconds{path="C:\\DIR\\FILE.TXT",error="Cannot find file:\n\"FILE.TXT\""} 1.458255915e9

# UTF-8 metric and label names:
{"my.dotted.metric", "error.message"="Not Found"}

# Minimalistic line:
metric_without_timestamp_and_labels 12.47

# A weird metric from before the epoch:
something_weird{problem="division by zero"} +Inf -3982045

# A histogram, which has a pretty complex representation in the text format:
# HELP http_request_duration_seconds A histogram of the request duration.
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{le="0.05"} 24054
http_request_duration_seconds_bucket{le="0.1"} 33444
http_request_duration_seconds_bucket{le="0.2"} 100392
http_request_duration_seconds_bucket{le="0.5"} 129389
http_request_duration_seconds_bucket{le="1"} 133988
http_request_duration_seconds_bucket{le="+Inf"} 144320
http_request_duration_seconds_sum 53423
http_request_duration_seconds_count 144320

# Finally a summary, which has a complex representation, too:
# HELP rpc_duration_seconds A summary of the RPC duration in seconds.
# TYPE rpc_duration_seconds summary
rpc_duration_seconds{quantile="0.01"} 3102
rpc_duration_seconds{quantile="0.05"} 3272
rpc_duration_seconds{quantile="0.5"} 4773
rpc_duration_seconds{quantile="0.9"} 9001
rpc_duration_seconds{quantile="0.99"} 76656
rpc_duration_seconds_sum 1.7560473e+07
rpc_duration_seconds_count 2693
```
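Sample lines like those in the example above can be parsed with a few lines of code. The following is a minimal, illustrative Python sketch that handles only legacy-charset metric names (no quoted UTF-8 names, no `HELP`/`TYPE` handling); the regexes and function names are my own, not part of any Prometheus library.

```python
import re

# Sample line: name, optional {labels}, float value, optional int64 timestamp.
SAMPLE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'  # metric name (legacy charset only)
    r'(?:\{(?P<labels>.*)\})?'              # optional label set
    r'\s+(?P<value>\S+)'                    # float value (may be NaN/+Inf/-Inf)
    r'(?:\s+(?P<ts>-?\d+))?\s*$'            # optional timestamp in milliseconds
)
LABEL_RE = re.compile(r'([a-zA-Z_][a-zA-Z0-9_]*)="((?:[^"\\]|\\.)*)"')

def unescape(s: str) -> str:
    # Label values escape backslash, double-quote, and line feed.
    # Replace the escaped backslash with a sentinel first to avoid re-processing.
    return (s.replace(r'\\', '\x00').replace(r'\n', '\n')
             .replace(r'\"', '"').replace('\x00', '\\'))

def parse_sample(line: str):
    """Parse one sample line into (name, labels, value, timestamp_or_None)."""
    m = SAMPLE_RE.match(line)
    if m is None:
        raise ValueError(f'unparsable sample line: {line!r}')
    labels = {k: unescape(v) for k, v in LABEL_RE.findall(m.group('labels') or '')}
    ts = int(m.group('ts')) if m.group('ts') else None
    return m.group('name'), labels, float(m.group('value')), ts
```

Note how Go-style float parsing lets `float()` handle `NaN`, `+Inf`, and `-Inf` directly in Python as well.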
## OpenMetrics Text Format

[OpenMetrics](https://github.com/OpenObservability/OpenMetrics) is an effort to standardize metric wire formats, built off of the Prometheus text format. Prometheus can scrape targets in this format and can also serve it for federating metrics, since at least v2.23.0.

### Exemplars (Experimental)

Utilizing the OpenMetrics format allows for the exposition and querying of [Exemplars](https://github.com/prometheus/OpenMetrics/blob/v1.0.0/specification/OpenMetrics.md#exemplars). Exemplars provide a point-in-time snapshot related to a metric set for an otherwise summarized MetricFamily. Additionally, they may have a Trace ID attached to them, which, when used together with a tracing system, can provide more detailed information related to the specific service.

To enable this experimental feature you must have at least version v2.26.0 and add `--enable-feature=exemplar-storage` to your arguments.

## Protobuf format

Earlier versions of Prometheus supported an exposition format based on [Protocol Buffers](https://developers.google.com/protocol-buffers/) (aka Protobuf) in addition to the current text-based format. With Prometheus 2.0, the Protobuf format was marked as deprecated and Prometheus stopped ingesting samples from said exposition format. However, new (experimental) features were later added to Prometheus for which the Protobuf format was considered the most viable option, making Prometheus accept Protocol Buffers once again.

When such features are enabled, either by feature flag (`--enable-feature=created-timestamp-zero-ingestion`) or by setting the appropriate configuration option (`scrape_native_histograms: true`), Protobuf is favored over other exposition formats.

## HTTP Content-Type requirements

Starting with Prometheus 3.0, scrape targets **must** return a valid `Content-Type` header for the metrics endpoint. If the `Content-Type` is missing, unparsable, or not a supported media type, **the scrape will fail**. See the changes to [scrape protocols](https://prometheus.io/docs/prometheus/latest/migration/#scrape-protocols) in the migration guide for details. See each of the exposition format sections for the accurate HTTP content types.

### ScrapeProtocols vs Content-Type

The Prometheus scrape config offers scrape protocol negotiation based on the content type via the [`scrape_protocols`](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config) setting. For the user's convenience, the scrape protocols are referenced by a unique name that maps to a concrete content type. See [Protocol Headers](./content_negotiation.md#protocol-headers) for details. However, targets should expose metrics with the absolute response content type of the chosen exposition format (e.g. `application/openmetrics-text;version=1.0.0`), and only one.

## Historical versions

For details on historical format versions, see the legacy [Client Data Exposition Format](https://docs.google.com/document/d/1ZjyKiKxZV83VI9ZKAXRGKaUKK2BIWCT7oiGBKDBpjEY/edit?usp=sharing) document. The current version of the original Protobuf format (with the recent extensions for native histograms) is maintained in the [prometheus/client_model repository](https://github.com/prometheus/client_model).
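The Content-Type failure rule described above can be illustrated with a small sketch. This is not Prometheus code: the supported-type set and function name are illustrative assumptions, and `email.message.Message` from the Python standard library does the media-type parsing.

```python
from email.message import Message

# Illustrative set; see the exposition format sections for the real content types.
SUPPORTED_MEDIA_TYPES = {
    'text/plain',
    'application/openmetrics-text',
    'application/vnd.google.protobuf',
}

def check_scrape_content_type(header):
    """Mimic, in spirit, the Prometheus 3.0 rule: fail the scrape when the
    Content-Type header is missing, unparsable, or unsupported."""
    if not header or '/' not in header.split(';', 1)[0]:
        raise ValueError('scrape failed: missing or unparsable Content-Type')
    msg = Message()
    msg['Content-Type'] = header
    media_type = msg.get_content_type()   # normalized "type/subtype"
    if media_type not in SUPPORTED_MEDIA_TYPES:
        raise ValueError(f'scrape failed: unsupported media type {media_type!r}')
    params = dict(msg.get_params()[1:])   # e.g. {'version': '1.0.0'}
    return media_type, params
```

A real scraper would additionally use the `version` parameter to pick the right parser.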
*Source: https://github.com/prometheus/docs/blob/main//docs/instrumenting/clientlibs.md*

Before you can monitor your services, you need to add instrumentation to their code via one of the Prometheus client libraries. These implement the Prometheus [metric types](/docs/concepts/metric_types/).

Choose a Prometheus client library that matches the language in which your application is written. This lets you define and expose internal metrics via an HTTP endpoint on your application’s instance:

* [Go](https://github.com/prometheus/client_golang)
* [Java or Scala](https://github.com/prometheus/client_java)
* [Python](https://github.com/prometheus/client_python)
* [Ruby](https://github.com/prometheus/client_ruby)
* [Rust](https://github.com/prometheus/client_rust)

Unofficial third-party client libraries:

* [Bash](https://github.com/aecolley/client_bash)
* [C](https://github.com/digitalocean/prometheus-client-c)
* [C++](https://github.com/jupp0r/prometheus-cpp)
* [Common Lisp](https://github.com/deadtrickster/prometheus.cl)
* [Dart](https://github.com/tentaclelabs/prometheus_client)
* [Delphi](https://github.com/marcobreveglieri/prometheus-client-delphi)
* [Elixir](https://github.com/deadtrickster/prometheus.ex)
* [Erlang](https://github.com/deadtrickster/prometheus.erl)
* [Haskell](https://github.com/fimad/prometheus-haskell)
* [Julia](https://github.com/fredrikekre/Prometheus.jl)
* [Lua](https://github.com/knyar/nginx-lua-prometheus) for Nginx
* [Lua](https://github.com/tarantool/metrics) for Tarantool
* [.NET / C#](https://github.com/prometheus-net/prometheus-net)
* [Node.js](https://github.com/siimon/prom-client)
* [OCaml](https://github.com/mirage/prometheus)
* [Perl](https://metacpan.org/pod/Net::Prometheus)
* [PHP](https://github.com/promphp/prometheus_client_php)
* [R](https://github.com/cfmack/pRometheus)
* [Swift](https://github.com/swift-server/swift-prometheus)

When Prometheus scrapes your instance's HTTP endpoint, the client library sends the current state of all tracked metrics to the server.

If no client library is available for your language, or you want to avoid dependencies, you may also implement one of the supported [exposition formats](/docs/instrumenting/exposition_formats/) yourself to expose metrics.

When implementing a new Prometheus client library, please follow the [guidelines on writing client libraries](/docs/instrumenting/writing_clientlibs). Note that this document is still a work in progress. Please also consider consulting the [development mailing list](https://groups.google.com/forum/#!forum/prometheus-developers). We are happy to give advice on how to make your library as useful and consistent as possible.
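As a taste of what implementing the exposition format yourself can look like, here is a dependency-free Python sketch that renders a classic histogram in the text format, following the conventions from the exposition formats documentation (`_bucket` samples with an `le` label, `_sum`, `_count`, and a mandatory `le="+Inf"` bucket). All names here are illustrative; real applications should prefer a client library.

```python
def render_classic_histogram(name, buckets, total_sum, help_text=None):
    """Render a classic histogram in the Prometheus text exposition format.

    `buckets` is a list of (upper_bound, cumulative_count) pairs in increasing
    order of bound. The last bound must be "+Inf"; its cumulative count doubles
    as the overall observation count, as the format requires.
    """
    assert buckets and buckets[-1][0] == '+Inf', 'a "+Inf" bucket is mandatory'
    lines = []
    if help_text is not None:
        lines.append(f'# HELP {name} {help_text}')
    lines.append(f'# TYPE {name} histogram')
    for le, cumulative in buckets:
        lines.append(f'{name}_bucket{{le="{le}"}} {cumulative}')
    lines.append(f'{name}_sum {total_sum}')
    lines.append(f'{name}_count {buckets[-1][1]}')
    return '\n'.join(lines) + '\n'
```

Serving the resulting string over HTTP with a suitable `Content-Type` header (for example from Python's built-in `http.server`) is enough for Prometheus to scrape it.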
*Source: https://github.com/prometheus/docs/blob/main//docs/specs/native_histograms.md*

Native histograms were introduced as an experimental feature in November 2022. They are a concept that touches almost every part of the Prometheus stack.

The first version of the Prometheus server supporting native histograms was v2.40.0. The support had to be enabled via a feature flag, `--enable-feature=native-histograms`. Starting with v3.8.0, native histograms are supported as a stable feature. However, scraping native histograms still has to be activated explicitly via the `scrape_native_histograms` configuration setting. To ease the transition from the feature flag to the configuration setting, setting the feature flag in v3.8 has the only remaining effect of setting `scrape_native_histograms` to `true` by default. Starting with v3.9, the feature flag is a true no-op, and explicitly setting `scrape_native_histograms` is required. Sending over Remote-Write needs to be enabled via the `send_native_histograms` remote-write config. (From v4 on, both `scrape_native_histograms` and `send_native_histograms` will default to `true`.)

Due to the pervasive nature of the changes related to native histograms, the documentation of those changes and the explanation of the underlying concepts are widely distributed over various channels (like the documentation of affected Prometheus components, doc comments in source code, sometimes the source code itself, design docs, conference talks, …). This document intends to gather all these pieces of information and present them concisely in a unified context. This document prefers to link existing detailed documentation rather than restating it, but it contains enough information to be comprehensible without referring to other sources.

With all that said, it should be noted that this document is neither suitable as an introduction for beginners nor does it focus on the needs of developers. For the former, the plan is to provide an updated version of the [Best Practices article on histograms and summaries](../practices/histograms.md). (TODO: And a blog post or maybe even a series of them.) For the latter, there is Carrie Edward's [Developer’s Guide to Prometheus Native Histograms](https://docs.google.com/document/d/1VhtB_cGnuO2q_zqEMgtoaLDvJ_kFSXRXoE0Wo74JlSY/edit).

While formal specifications are supposed to happen in their respective context (e.g. OpenMetrics changes will be specified in the general OpenMetrics specification), some parts of this document take the shape of a specification. In those parts, the key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” are used as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).

This document still contains a lot of TODOs even though the feature is considered stable and we don't expect breaking changes before v4.0.0. These TODOs are reminders for completing the documentation, fixing minor issues, and adding features.

## Introduction

The core idea of native histograms is to treat histograms as first-class citizens in the Prometheus data model. Elevating histograms to a “native” sample type is the fundamental prerequisite for the key properties listed below, which explains the choice of the name _native histograms_.

Prior to the introduction of native histograms, all Prometheus sample values have been 64-bit floating point values (short _float64_ or just _float_). These floats can directly represent _gauges_ or _counters_. The Prometheus metric types _summary_ and (the classic version of) _histogram_, as they exist in exposition formats, are broken down into float components upon ingestion: a _sum_ and a _count_ component for both types, a number of _quantile_ samples for a summary, and a number of _bucket_ samples for a (classic) histogram.

With native histograms, a new structured sample type is introduced. A single sample represents the previously known _sum_ and _count_ plus a dynamic set of buckets. This is not limited to ingestion, but PromQL expressions may also return the new sample type where previously it was only possible to return float samples.
Native histograms have the following key properties:

1. A sparse bucket representation, allowing (near) zero cost for empty buckets.
2. Coverage of the full float64 range of values.
3. No configuration of bucket boundaries during instrumentation.
4. Dynamic resolution picked according to simple configuration parameters.
5. Sophisticated exponential bucketing schemas, ensuring mergeability between all histograms using those schemas.
6. An efficient data representation for both exposition and storage.

These key properties are fully realized with standard bucketing schemas. There are other schemas with different trade-offs that might only feature a subset of these properties. See the [Schema section](#schema) below for details.

Compared to the previously existing “classic” histograms, native histograms (with standard bucketing schemas) allow a higher bucket resolution across arbitrary ranges of observed values at a lower storage and query cost, with very little to no configuration required. Even partitioning histograms by labels is now much more affordable.

Because the sparse representation (property 1 in the list above) is so crucial for many of the other benefits of native histograms, _sparse histograms_ was a common name for _native histograms_ early during the design process. However, other key properties like the exponential bucketing schema or the dynamic nature of the buckets are also very important, but not caught at all in the term _sparse histograms_.

### Design docs

These are the design docs that guided the development of native histograms. Some details are obsolete now, but they describe rather well the underlying concepts and how they evolved.

- [Sparse high-resolution histograms for Prometheus](https://docs.google.com/document/d/1cLNv3aufPZb3fNfaJgdaRBZsInZKKIHo9E6HinJVbpM/edit), the original design doc.
- [Prometheus Sparse Histograms and PromQL](https://docs.google.com/document/d/1ch6ru8GKg03N02jRjYriurt-CZqUVY09evPg6yKTA1s/edit), more an exploratory document than a proper design doc, about the handling of native histograms in PromQL.

### Conference talks

A more approachable way of learning about native histograms is to watch conference talks, of which a selection is presented below. As an introduction, it might make sense to watch these talks and then return to this document to learn about all the details and technicalities.

- [Secret History of Prometheus Histograms](https://fosdem.org/2020/schedule/event/histograms/) about the classic histograms and why Prometheus kept them for so long.
- [Prometheus Histograms – Past, Present, and Future](https://promcon.io/2019-munich/talks/prometheus-histograms-past-present-and-future/) is the inaugural talk about the new approach that led to native histograms.
- [Better Histograms for Prometheus](https://www.youtube.com/watch?v=HG7uzON-IDM) explains why the concepts work out in practice.
- [Native Histograms in Prometheus](https://promcon.io/2022-munich/talks/native-histograms-in-prometheus/) presents and explains native histograms after the actual implementation.
- [PromQL for Native Histograms](https://promcon.io/2022-munich/talks/promql-for-native-histograms/) explains the usage of native histograms in PromQL.
- [Prometheus Native Histograms in Production](https://www.youtube.com/watch?v=TgINvIK9SYc) provides an analysis of performance and resource consumption.
- [Using OpenTelemetry’s Exponential Histograms in Prometheus](https://www.youtube.com/watch?v=W2_TpDcess8) covers the interoperability with OpenTelemetry.

## Glossary

- A __native histogram__ is an instance of the new complex sample type representing a full histogram that this document is about. Where the context is sufficiently clear, it is often just called a _histogram_ below.
- A __classic histogram__ is an instance of the older sample type representing a histogram with fixed buckets, formerly just called a _histogram_. It exists as such in the exposition formats, but is broken into a number of float samples upon ingestion into Prometheus.
- __Sparse histogram__ is an older, now deprecated name for _native histogram_. This name might still be found occasionally in older documentation. __Sparse buckets__ remains a meaningful term for the buckets of a native histogram.
## Data model

This section describes the data model of native histograms in general. It avoids implementation specifics as far as possible. This includes terminology. For example, a _list_ described in this section will become a _repeated message_ in a protobuf implementation and (most likely) a _slice_ in a Go implementation.

### General structure

Similar to a classic histogram, a native histogram has a field for the _count_ of observations and a field for the _sum_ of observations. While the count of observations is generally non-negative (with the only exception being [intermediate results in PromQL](#unary-minus-and-negative-histograms)), the sum of observations might have any float64 value.

In addition, a native histogram contains the following components, which are described in detail in dedicated sections below:

- A _schema_ to identify the method of determining the boundaries of any given bucket with an index _i_.
- A sparse representation of indexed buckets, mirrored for positive and negative observations.
- A _zero bucket_ to count observations close to zero.
- A (possibly empty) list of _custom values_.
- _Exemplars_.

### Flavors

Any native histogram has a specific flavor along each of two independent dimensions:

1. Counter vs. gauge: Usually, a histogram is “counter like”, i.e. each of its buckets acts as a counter of observations. However, there are also “gauge like” histograms where each bucket is a gauge, representing arbitrary distributions at a point in time. The concept of a gauge histogram was previously introduced for classic histograms by [OpenMetrics](https://github.com/prometheus/OpenMetrics/blob/v1.0.0/specification/OpenMetrics.md#gaugehistogram).
2. Integer vs. floating point (short: float): The obvious use case of histograms is to count observations, resulting in integer numbers of observations ≥ 0 within each bucket, including the _zero bucket_, and for the total _count_ of observations, represented as unsigned 64-bit integers (short: uint64). However, there are specific use cases leading to a “weighted” or “scaled” histogram, where all of these values are represented as 64-bit floating point numbers (short: float64). Note that the _sum_ of observations is a float64 in either case.

Float histograms are occasionally used in direct instrumentation for “weighted” observations, for example to count the number of seconds an observed value was falling into different buckets of a histogram. The far more common use case for float histograms is within PromQL, though. PromQL generally only acts on float values, so the PromQL engine converts every histogram retrieved from the TSDB to a float histogram first, and any histogram stored back into the TSDB via recording rules is a float histogram. If such a histogram is effectively an integer histogram (because the value of all non-_sum_ fields can be represented precisely as uint64), a TSDB implementation MAY convert it back to an integer histogram to increase storage efficiency. (As of Prometheus v3.00, the TSDB implementation within Prometheus is not utilizing this option.) Note, however, that the most common PromQL function applied to a counter histogram is `rate`, which generally produces non-integer numbers, so results of recording rules will commonly be float histograms with non-integer values anyway.

PromQL expressions may even create “negative” histograms (e.g. by multiplying a histogram by -1). Those negative histograms are only allowed as intermediate results and are otherwise considered invalid. They cannot be represented in any of the exchange formats (exposition formats, remote-write, OTLP), and they cannot be stored in the TSDB. Also see the [detailed section about negative histograms](#unary-minus-and-negative-histograms).
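The "effectively an integer histogram" condition from the float-histogram discussion above boils down to a simple per-field test. A hypothetical sketch (the field names are illustrative, not the actual Prometheus data structures):

```python
def is_effectively_integer(count, zero_count, bucket_counts):
    """True if all non-sum fields of a float histogram can be represented
    precisely as uint64, so a TSDB MAY store it as an integer histogram.
    (Illustrative; real implementations work on their own histogram types.)"""
    def fits_uint64(x: float) -> bool:
        return 0 <= x < 2**64 and float(x).is_integer()
    return (fits_uint64(count) and fits_uint64(zero_count)
            and all(fits_uint64(c) for c in bucket_counts))
```

A `rate()` result such as a bucket value of `0.5` would fail this test, matching the observation that recording-rule outputs commonly stay float histograms.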
Treating native histograms explicitly as integer histograms vs. float histograms is a notable deviation from the treatment of conventional simple numeric samples, which are always treated as floats throughout the whole stack for the sake of simplicity. The main reason for the more involved treatment of histograms is the easy efficiency gains in protobuf-based exposition formats. Protobuf uses varint encoding for integers, which reduces the data size for small integer values without requiring an additional compression layer. This benefit is amplified by the [delta encoding of integer buckets](#buckets), which generally results in smaller integer values. Floats, in contrast, always require 8 bytes in protobuf. In practice, many integers in an integer histogram will fit in 1 byte, and most will fit in 2 bytes, so the explicit presence of integer histograms in a protobuf exposition format results directly in a data size reduction approaching 8x for histograms with many buckets. This is particularly relevant as the overwhelming majority of histograms exposed by instrumented targets are integer histograms.

For similar reasons, the representation of integer histograms in RAM and on disk is generally more efficient than that of float histograms. This is less relevant than the benefits in the exposition format, though. For one, Prometheus uses Gorilla-style XOR encoding for floats, which reduces their size, albeit not as much as the double-delta encoding used for integers. More importantly, an implementation could always decide to internally use an integer representation for histogram fields that are effectively integer values (see above). (Historical note: Prometheus v1 used exactly this approach to improve the compression of float samples, and Prometheus v3 might very well adopt this approach again in the future.)

In a counter histogram, the total _count_ of observations and the counts in the buckets individually behave like Prometheus counters, i.e. they only go down upon a counter reset. However, the _sum_ of observations may decrease as a consequence of the observation of negative values. PromQL implementations MUST detect counter resets based on the whole histogram (see the [counter reset considerations section](#counter-reset-considerations) below for details). (Note that this has always been a problem for the _sum_ component of classic histograms and summaries, too. The approach so far was to accept that counter reset detection silently breaks for _sum_ in those cases. Fortunately, negative observations are a very rare use case for Prometheus histograms and summaries.)

### Schema

The _schema_ is a signed integer value with a size of 8 bits (short: int8). It defines the way bucket boundaries are calculated. The currently valid values are -53 and the range between and including -4 and +8 (with a larger range between and including -9 and +52 being reserved, see below for details). More schemas may be added in the future. -53 is a schema for so-called _custom bucket boundaries_, or short _custom buckets_, while the other schema numbers represent the different standard exponential schemas (short: _standard schemas_).

The standard schemas are mergeable with each other and are RECOMMENDED for general use cases. Larger schema numbers correspond to higher resolutions. Schema _n_ has half the resolution of schema _n_+1, which implies that a histogram with schema _n_+1 can be converted into a histogram with schema _n_ by merging neighboring buckets.
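The "approaching 8x" claim from the varint discussion above can be made concrete with a back-of-the-envelope sketch. The helpers below follow the general protobuf varint and ZigZag rules; the delta encoding relative to the previous bucket follows the linked buckets section, and the concrete bucket counts are made up for illustration.

```python
def zigzag64(n: int) -> int:
    """Map a signed int to an unsigned one with small magnitudes first
    (protobuf's sint64 encoding): 0, -1, 1, -2, ... -> 0, 1, 2, 3, ..."""
    return (n << 1) ^ (n >> 63)

def uvarint_len(u: int) -> int:
    """Bytes a protobuf varint needs: 7 payload bits per byte."""
    return max(1, (u.bit_length() + 6) // 7)

def varint_payload_bytes(bucket_counts):
    """Bytes for bucket counts after delta + ZigZag + varint encoding
    (field tag/wire overhead ignored; illustrative only)."""
    total, prev = 0, 0
    for c in bucket_counts:
        total += uvarint_len(zigzag64(c - prev))
        prev = c
    return total

counts = [3, 12, 14, 9, 4, 0, 1]  # made-up sparse-bucket counts
# Compare 8 bytes per float64 against the varint-encoded integer deltas:
print(8 * len(counts), varint_payload_bytes(counts))
```

With these small deltas, every bucket fits in a single byte, illustrating why integer histograms approach the 8x size reduction over float buckets.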
half the resolution of schema _n_+1, which implies that a histogram with schema _n_+1 can be converted into a histogram with schema _n_ by merging neighboring buckets.

For any standard schema _n_, the boundaries of a bucket with index _i_ are calculated as follows (using Python syntax):

- The upper inclusive limit of a positive bucket: `(2**2**-n)**i`
- The lower exclusive limit of a positive bucket: `(2**2**-n)**(i-1)`
- The lower inclusive limit of a negative bucket: `-((2**2**-n)**i)`
- The upper exclusive limit of a negative bucket: `-((2**2**-n)**(i-1))`

_i_ is an integer number that may be negative.

There are exceptions to the rules above concerning the largest and smallest finite values representable as a float64 (called `MaxFloat64` and `MinFloat64` in the following) and the positive and negative infinity values (`+Inf` and `-Inf`):

- The positive bucket that contains `MaxFloat64` (according to the boundary formulas above) has an upper inclusive limit of `MaxFloat64` (rather than the limit calculated by the formulas above, which would overflow float64).
- The next positive bucket (index _i_+1 relative to the bucket from the previous item) has a lower exclusive limit of `MaxFloat64` and an upper inclusive limit of `+Inf`. (It could be called a _positive overflow bucket_.)
- The negative bucket that contains `MinFloat64` (according to the boundary formulas above) has a lower inclusive limit of `MinFloat64` (rather than the limit calculated by the formulas above, which would underflow float64).
- The next negative bucket (index _i_+1 relative to the bucket from the previous item) has an upper exclusive limit of `MinFloat64` and a lower inclusive limit of `-Inf`. (It could be called a _negative overflow bucket_.)
- Buckets beyond the `+Inf` and `-Inf` buckets described above MUST NOT be used.

There are more exceptions for values close to zero, see the [zero bucket section](#zero-bucket) below.

The current limits of -4 for the lowest resolution and 8 for the highest resolution have been chosen based on practical usefulness. Should a practical need arise for even lower or higher resolution, an extension of the range will be considered. However, a schema greater than 52 does not make sense as the growth factor from one bucket to the next would then be smaller than the difference between representable float64 numbers. Likewise, a schema smaller than -9 does not make sense either, as the growth factor would then exceed the largest float representable as float64. Therefore, the schema numbers between (and including) -9 and +52 are reserved for future standard schemas (following the formulas for bucket boundaries above) and MUST NOT be used for any other schemas.

Receivers of native histograms MAY, upon ingestion, reduce the schema and thereby the resolution of ingested histograms by merging buckets appropriately. Receivers MAY accept schemas between 9 and 52 if they reduce the schema upon ingestion to a valid number (i.e. between -4 and 8), following the formulas for bucket boundaries above. If, after this optional schema conversion, the schema is still unknown to the receiver, there are the following options:

- If a scrape (including federation) contains one or more histograms with an unknown schema, the entire scrape MUST fail, following the Prometheus practice of avoiding incomplete scrapes.
- For any other ingestion paths (including replaying the WAL/WBL), the receiver MAY ignore histograms with unknown schemas and SHOULD notify the user about this omission in a suitable way.

When a TSDB implementation reads histograms from its permanent storage (excluding replaying the WAL/WBL), similar guidelines apply: Schemas between 9 and 52 MAY be converted to valid schemas. Otherwise, unknown schemas MUST return an error on retrieval, and the PromQL query that triggered the retrieval MUST fail.

| https://github.com/prometheus/docs/blob/main//docs/specs/native_histograms.md | main | prometheus |
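The boundary formulas above can be cross-checked with a small sketch. The helper names are hypothetical, and the `MaxFloat64`/`±Inf` overflow-bucket special cases are deliberately ignored:

```python
import math

def bucket_bounds(schema: int, i: int):
    """(lower exclusive, upper inclusive) limits of positive bucket i for a
    standard schema, per the formulas above (overflow buckets ignored)."""
    base = 2 ** 2 ** -schema  # growth factor from one bucket to the next
    return base ** (i - 1), base ** i

def bucket_index(schema: int, v: float) -> int:
    """Index of the positive bucket containing v > 0 (subject to the usual
    float64 rounding caveats for values on exact bucket boundaries)."""
    return math.ceil(math.log2(v) * 2 ** schema)

# Schema 0: bucket boundaries are plain powers of two.
assert bucket_bounds(0, 1) == (1.0, 2.0)
assert bucket_bounds(0, 3) == (4.0, 8.0)
assert bucket_index(0, 3.0) == 2  # 3.0 lies in (2, 4]

# Schema 1 doubles the resolution: the growth factor becomes sqrt(2), so
# merging neighboring schema-1 buckets yields the schema-0 buckets.
lo, hi = bucket_bounds(1, 2)
assert math.isclose(lo, math.sqrt(2)) and math.isclose(hi, 2.0)
```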
For schema -53, the bucket boundaries are set explicitly via _custom values_, described in detail in the [custom values section](#custom-values) below. This results in a native histogram with custom bucket boundaries (or short _custom buckets_, often further abbreviated to NHCB). Such a histogram can be used to represent a classic histogram as a native histogram. It can also be used if the exponential bucketing featured by the standard schemas is a bad match for the distribution to be represented by the histogram. Histograms with different custom bucket boundaries are generally not mergeable with each other. Therefore, schema -53 SHOULD only be used as an informed decision in specific use cases.

### Buckets

For standard schemas, buckets are represented as two lists, one for positive buckets and one for negative buckets. For custom buckets (schema -53), only the positive bucket list is used, but repurposed for all buckets. Any unpopulated buckets MAY be excluded from the lists. (Which is the reason why the buckets are often called _sparse buckets_.)

For float histograms, the elements of the lists are float64 and represent the bucket population directly. Bucket populations are generally non-negative, with the only exception being [intermediate results in PromQL](#unary-minus-and-negative-histograms).

For integer histograms, the elements of the lists are signed 64-bit integers (short: int64), and each element represents the bucket population as a delta to the previous bucket in the list. The first bucket in each list contains an absolute population (which can also be seen as a delta relative to zero). The deltas MUST NOT evaluate to a negative absolute bucket population.

To map buckets in the lists to the indices as defined in the previous section, there are two lists of so-called _spans_, one for the positive buckets and one for the negative buckets. Each span consists of a pair of numbers, a signed 32-bit integer (short: int32) called _offset_ and an unsigned 32-bit integer (short: uint32) called _length_. Only the first span in each list can have a negative offset. It defines the index of the first bucket in its corresponding bucket list. (Note that for NHCBs, the index is always positive, see the [custom values section](#custom-values) below for details.) The length defines the number of consecutive buckets the bucket list starts with. The offsets of the following spans define the number of excluded (and thus unpopulated) buckets. The lengths define the number of consecutive buckets in the list following the excluded buckets. The sum of all length values in each span list MUST be equal to the length of the corresponding bucket list.

Empty spans (with a length of zero) are valid and MAY be used, although they are generally not useful and they SHOULD be eliminated by adding their offset to the offset of the following span. Similarly, spans that are not the first span in a list MAY have an offset of zero, although those offsets SHOULD be eliminated by adding their length to the previous span. Both cases are allowed so that producers of native histograms MAY pick whatever representation has the best resource trade-offs at that moment. For example, if a histogram is processed through various stages, it might be most efficient to only eliminate redundant spans after the last processing stage. In a similar spirit, there are situations where excluding every unpopulated bucket from the bucket list is most efficient, but in other situations, it might be better to reduce the number of spans by representing small numbers of unpopulated buckets explicitly.
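The span and delta encoding described above can be illustrated with a small decoding sketch (the helper name is hypothetical). It expands the spans into absolute bucket indices while accumulating the integer deltas into absolute populations:

```python
def decode_buckets(spans, deltas):
    """Expand (offset, length) spans plus int64 deltas into an absolute
    {bucket_index: population} mapping for an integer histogram."""
    result = {}
    idx = 0    # current bucket index
    count = 0  # running absolute population (first delta is relative to zero)
    pos = 0    # position in the delta list
    for offset, length in spans:
        idx += offset  # first span: start index; later spans: skipped buckets
        for _ in range(length):
            count += deltas[pos]
            pos += 1
            result[idx] = count
            idx += 1
    return result

# Populated buckets -2→3, -1→5, 2→1, 4→3, 5→2, with the unpopulated
# buckets 0, 1 and 3 excluded via span offsets:
spans = [(-2, 2), (2, 1), (1, 2)]
deltas = [3, 2, -4, 2, -1]
assert decode_buckets(spans, deltas) == {-2: 3, -1: 5, 2: 1, 4: 3, 5: 2}
```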
Note that future high resolution schemas might require offsets that are too large to be represented with an int32. An extension of the data model will be required in that case. (The current standard schema with the highest resolution is schema 8, for which the bucket that contains `MaxFloat64` has index 262144, and thus the `+Inf` overflow bucket has index 262145, while the largest number representable with int32 is 2147483647. The highest standard schema that would still work with int32 offsets would be schema 20, corresponding to a growth factor from bucket to bucket of only ~1.000000661.)

#### Examples

An integer histogram has the following positive buckets (index→population):

`-2→3, -1→5, 0→0, 1→0, 2→1, 3→0, 4→3, 5→2`

They could be represented in this way:

- Positive bucket list: `[3, 2, -4, 2, -1]`
- Positive span list: `[[-2, 2], [2,1], [1,2]]`

The second and third span could be merged into one if the single unpopulated bucket with index 3 is represented explicitly, leading to the following result:

- Positive bucket list: `[3, 2, -4, -1, 3, -1]`
- Positive span list: `[[-2, 2], [2,4]]`

Or merge all the spans into one by representing all unpopulated buckets above explicitly:

- Positive bucket list: `[3, 2, -5, 0, 1, -1, 3, -1]`
- Positive span list: `[[-2, 8]]`

### Zero bucket

Observations of exactly zero do not fit into any bucket as defined by the standard schemas above. They are counted in a dedicated bucket called the _zero bucket_. The number of observations in the zero bucket is tracked by a single uint64 (for integer histograms) or float64 (for float histograms). As for regular buckets, this number is generally non-negative.

The zero bucket has an additional parameter called the _zero threshold_, which is a float64 ≥ 0. If the threshold is set to zero, only observations of exactly zero go into the zero bucket, which is the case described above. If the threshold has a positive value, all observations within the closed interval [-threshold, +threshold] go to the zero bucket rather than a regular bucket. This has two use cases:

- Noisy observations close to zero tend to populate a high number of buckets. Those observations might happen due to numerical inaccuracies or if the source of the observations are actual physical measurements. A zero bucket with a relatively small threshold redirects those observations into a single bucket.
- If the user is more interested in the long tail of a distribution, far away from zero, a relatively large threshold of the zero bucket helps to avoid many high resolution buckets for a range that is not of interest.

The threshold of the zero bucket SHOULD coincide with a boundary of a regular bucket, which avoids the complication of the zero bucket overlapping with parts of a regular bucket. However, if such an overlap is happening, the observations that are counted in the regular bucket overlapping with the zero bucket MUST be outside of the [-threshold, +threshold] interval.

To merge histograms with the same zero threshold, the two zero buckets are simply added. If the zero thresholds in the source histograms are different, however, the largest threshold in any of the source histograms is chosen.
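The zero-bucket routing described above can be sketched as follows (hypothetical helper; `NaN` and `±Inf` handling is a separate concern covered later in the spec):

```python
def route(v: float, zero_threshold: float) -> str:
    """Return 'zero' if an ordinary observation lands in the zero bucket,
    else 'regular'. The interval [-threshold, +threshold] is closed."""
    return "zero" if -zero_threshold <= v <= zero_threshold else "regular"

assert route(0.0, 0.0) == "zero"      # threshold 0: only exact zeros
assert route(1e-10, 0.0) == "regular"
assert route(-0.02, 0.03) == "zero"   # closed interval includes both signs
assert route(0.5, 0.03) == "regular"
```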
If that threshold happens to be within any populated bucket in the other source histograms, the threshold is increased until one of the following is true for each source histogram:

- The new threshold coincides with the boundary of a populated bucket.
- The new threshold is not within any populated bucket.

Then the source zero buckets and any source buckets now inside the new threshold are added up to yield the population of the new zero bucket.

The zero bucket is not used if the schema is -53 (custom buckets).

### Custom values

The list of custom values is unused for standard schemas. It is used by non-standard schemas in a custom way in case there is need to store additional data. The only currently defined schema for which custom values are used is -53 (custom buckets). The remaining part of this section describes the usage of the custom values in more detail for this specific case.

The custom values represent the upper inclusive boundaries of the custom buckets. They are sorted in ascending fashion. The custom buckets themselves are stored using the positive bucket list and the positive span list, although their boundaries, as determined via the custom values, can be negative. The index of each of those “positive” buckets defines the zero-based position of their upper boundary within the custom values list. The lower exclusive boundary is defined by the custom value preceding the upper boundary. For the first custom value (at position zero in the list), there is no preceding value, in which case the lower boundary is considered to be `-Inf` inclusively. Therefore, the custom bucket with index zero counts all observations between (and including) `-Inf` and the first custom value.

In the common case that only positive observations are expected, the custom bucket with index zero SHOULD have an upper boundary of zero to clearly mark if there have been any observations at zero or below. (If there are indeed only positive observations, the custom bucket with index zero will stay unpopulated and therefore will never be represented explicitly. The only cost is the additional zero element at the beginning of the custom values list.)

Custom values MUST NOT be `+Inf`. Observations greater than the last custom value go into an overflow bucket with an upper boundary of `+Inf`. This overflow bucket is added with an index equal to the length of the custom values list. As a consequence, the upper boundary of the `+Inf` bucket often included in classic histograms is not represented explicitly in the custom values.

Custom values MUST NOT be `NaN`. This is explicitly excluded in OpenMetrics, but other exposition formats could, in principle, feature upper boundaries of `NaN` in classic histograms (presumably as a result of some error – such a boundary would not make any sense). Such a classic histogram MUST be rejected and cannot be converted into an NHCB.

### Exemplars

A native histogram sample can have zero, one, or more exemplars. They work in the same way as conventional exemplars, but they are organized in a list (as there can be more than one), and they MUST have a timestamp. Exemplars exposed as part of a classic histogram MAY be used by native histograms, if they have a timestamp.
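The custom-bucket boundary rules above (upper inclusive boundary from the custom values list, `-Inf` below the first value, `+Inf` overflow bucket at index `len(custom_values)`) can be sketched with a hypothetical lookup helper:

```python
def nhcb_bounds(custom_values, i):
    """Boundaries of custom bucket i for schema -53. The bucket with index
    len(custom_values) is the +Inf overflow bucket. Returns
    (lower, upper): lower is exclusive (inclusive for -Inf), upper inclusive."""
    upper = custom_values[i] if i < len(custom_values) else float("inf")
    lower = custom_values[i - 1] if i > 0 else float("-inf")
    return lower, upper

bounds = [0.0, 0.1, 0.5, 1.0]  # upper boundaries, sorted ascending
assert nhcb_bounds(bounds, 0) == (float("-inf"), 0.0)  # catches v <= 0
assert nhcb_bounds(bounds, 2) == (0.1, 0.5)
assert nhcb_bounds(bounds, 4) == (1.0, float("inf"))   # overflow bucket
```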
### Special cases of observed values

Instrumented code SHOULD avoid observing values of `NaN` and `±Inf` because they make limited sense in the context of a histogram. However, those values MUST still be handled properly, as described in the following.

The sum of observations is calculated as usual by adding the observation to the sum of observations, following normal floating point arithmetic. (For example, an observation of `NaN` will set the sum to `NaN`. An observation of `+Inf` will set the sum to `+Inf`, unless it is already `NaN` or `-Inf`, in which case the sum is set to `NaN`.)

An observation of `NaN` goes into no bucket, but increments the count of observations. This implies that the count of observations can be greater than the sum of all buckets (negative, positive, and zero buckets), and the difference is the number of `NaN` observations. (For an integer histogram without any `NaN` observations, the sum of all buckets is equal to the count of observations. Within the usual floating point precision limits, the same is true for a float histogram without any `NaN` observations.)

An observation of `+Inf` or `-Inf` increments the count of observations and increments a bucket chosen in the following way:

- With a standard schema, a `+Inf` observation increments the _positive overflow bucket_ as described above.
- With a standard schema, a `-Inf` observation increments the _negative overflow bucket_ as described above.
- With schema -53 (custom buckets), a `+Inf` observation increments the bucket with an index equal to the length of the custom values list.
- With schema -53 (custom buckets), a `-Inf` observation increments the bucket with index zero.

### OpenTelemetry interoperability

Prometheus (Prom) native histograms with a standard schema can be easily mapped into an OpenTelemetry (OTel) exponential histogram and vice versa, as detailed in the following.

The Prom _schema_ is equal to the _scale_ in OTel, with the restriction that OTel allows lower values than -4 and higher values than +8. As described above, Prom has reserved more schema numbers to extend its range, should it ever be required in practice. The index is offset by one, i.e. a Prom bucket with index _n_ has index _n-1_ for OTel. OTel has a dense rather than a sparse representation of buckets. One might see OTel as “Prom with only one span”.

The Prom _zero bucket_ is called _zero count_ in OTel. (Prom also uses _zero count_ to name the field storing the count of observations in the zero bucket.) Both work the same, including the existence of a _zero threshold_. Note that OTel implies a threshold of zero if none is given. (TODO: The OTel spec reads: “When zero_threshold is unset or 0, this bucket stores values that cannot be expressed using the standard exponential formula as well as values that have been rounded to zero.” Double-check if this really creates the same behavior. If there are problems close to zero, we could make Prom's spec more precise. If OTel counts NaN in the zero bucket, we have to add a note here.)

OTel exponential histograms only support standard exponential bucketing schemas (as the name suggests). Therefore, NHCBs (or native histograms with other future bucketing schemas) cannot be cleanly converted to OTel exponential histograms. However, conversion to a conventional OTel histogram with fixed buckets is still possible.
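The index mapping between the two models stated above (Prom index _n_ ↔ OTel index _n_-1, with schema equal to scale) is a simple shift, sketched here with hypothetical helper names:

```python
def prom_to_otel_index(i: int) -> int:
    # Prom bucket i covers (base**(i-1), base**i]; the same bucket carries
    # index i-1 in an OTel exponential histogram, whose buckets are
    # (base**index, base**(index+1)].
    return i - 1

def otel_to_prom_index(i: int) -> int:
    return i + 1

# Prom schema equals OTel scale; only bucket indices shift by one.
assert prom_to_otel_index(1) == 0  # bucket (1, 2] at schema/scale 0
assert otel_to_prom_index(prom_to_otel_index(42)) == 42
```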
OTel histograms of any kind have optional fields for the minimum and maximum value observed in the histogram. These fields have no equivalent concept in Prometheus because counter histograms accumulate data over a long and unpredictable timespan and can be scraped at any time, so that tracking a minimum and maximum value is either infeasible or of limited use. Note, though, that native histograms enable a fairly accurate estimation of the maximum and minimum observation during arbitrary timespans, see the [PromQL section](#promql).

## Exposition formats

Metrics exposition in the classic Prometheus use case is dominated by strings because all the metric names, label names, and label values take much more space than the float64 sample values, even if the latter are represented in a potentially more verbose text form. This was one of the reasons why abandoning protobuf-based exposition seemed advantageous in the past. In contrast, a native histogram, following the data model described above, consists of a lot more numerical data. This amplifies the advantages of a protobuf-based format. Therefore, the previously abandoned protobuf-based exposition was revived to efficiently expose and scrape native histograms.

### Classic Prometheus formats

At the time native histograms were conceived, OpenMetrics adoption was still lacking, and in particular, the protobuf version of OpenMetrics had no known applications at all. Therefore, the initial approach was to extend the classic Prometheus protobuf format to support native histograms. (An additional practical consideration was that the [Go instrumentation library](https://github.com/prometheus/client_golang) was still using the classic protobuf spec as its internal data model, simplifying the initial development.)

The classic Prometheus text form was not extended for native histograms, and such an extension is not planned. (See also the [OpenMetrics](#openmetrics) section below.)

There is a proto2 and a proto3 version of the protobuf specification, which both create the same wire format:

- [proto2](https://github.com/prometheus/client_model/blob/master/io/prometheus/client/metrics.proto)
- [proto3](https://github.com/prometheus/prometheus/blob/main/prompb/io/prometheus/client/metrics.proto)

These files have comprehensive comments, which should enable an easy mapping from the proto spec to the data model described above. Here are relevant parts from the proto3 file:

```protobuf
// [...]

message Histogram {
  uint64 sample_count = 1;
  double sample_count_float = 4; // Overrides sample_count if > 0.
  double sample_sum = 2;
  // Buckets for the classic histogram.
  // Ordered in increasing order of upper_bound, +Inf bucket is optional.
  repeated Bucket bucket = 3 [(gogoproto.nullable) = false];

  google.protobuf.Timestamp created_timestamp = 15;

  // Everything below here is for native histograms (also known as sparse histograms).

  // schema defines the bucket schema. Currently, valid numbers are -4 <= n <= 8.
  // They are all for base-2 bucket schemas, where 1 is a bucket boundary in each case, and
  // then each power of two is divided into 2^n logarithmic buckets.
  // Or in other words, each bucket boundary is the previous boundary times 2^(2^-n).
  // In the future, more bucket schemas may be added using numbers < -4 or > 8.
  sint32 schema = 5;
  double zero_threshold = 6; // Breadth of the zero bucket.
  uint64 zero_count = 7; // Count in zero bucket.
  double zero_count_float = 8; // Overrides sb_zero_count if > 0.

  // Negative buckets for the native histogram.
  repeated BucketSpan negative_span = 9 [(gogoproto.nullable) = false];
  // Use either "negative_delta" or "negative_count", the former for
  // regular histograms with integer counts, the latter for float
  // histograms.
  // [...]
```
```protobuf
  // [...]
  repeated sint64 negative_delta = 10; // Count delta of each bucket compared to previous one (or to zero for 1st bucket).
  repeated double negative_count = 11; // Absolute count of each bucket.

  // Positive buckets for the native histogram.
  // Use a no-op span (offset 0, length 0) for a native histogram without any
  // observations yet and with a zero_threshold of 0. Otherwise, it would be
  // indistinguishable from a classic histogram.
  repeated BucketSpan positive_span = 12 [(gogoproto.nullable) = false];
  // Use either "positive_delta" or "positive_count", the former for
  // regular histograms with integer counts, the latter for float
  // histograms.
  repeated sint64 positive_delta = 13; // Count delta of each bucket compared to previous one (or to zero for 1st bucket).
  repeated double positive_count = 14; // Absolute count of each bucket.

  // Only used for native histograms. These exemplars MUST have a timestamp.
  repeated Exemplar exemplars = 16;
}

message Bucket {
  uint64 cumulative_count = 1; // Cumulative in increasing order.
  double cumulative_count_float = 4; // Overrides cumulative_count if > 0.
  double upper_bound = 2; // Inclusive.
  Exemplar exemplar = 3;
}

// A BucketSpan defines a number of consecutive buckets in a native
// histogram with their offset. Logically, it would be more
// straightforward to include the bucket counts in the Span. However,
// the protobuf representation is more compact in the way the data is
// structured here (with all the buckets in a single array separate
// from the Spans).
message BucketSpan {
  sint32 offset = 1; // Gap to previous span, or starting point for 1st span (which can be negative).
  uint32 length = 2; // Length of consecutive buckets.
}

// [...]
```

Note the following:

- Both native histograms and classic histograms are encoded by the same `Histogram` proto message, i.e. the existing `Histogram` message got extended with fields for native histograms.
- The fields for the sum and the count of observations and the `created_timestamp` are shared between classic and native histograms and keep working in the same way for both.
- The format originally did not support classic float histograms. While extending the format for native histograms, support for classic float histograms was added as a byproduct (see fields `sample_count_float`, `cumulative_count_float`).
- The `bucket` field and the `Bucket` message are used for the buckets of a classic histogram. It is perfectly possible to create a `Histogram` message that represents both a classic and a native version of the same histogram. Parsers have the freedom to pick either or both versions (see also the [scrape configuration section](#scrape-configuration)).
- The bucket population is encoded as absolute numbers in case of float histograms, and as deltas to the previous bucket (or to zero for the first bucket) in case of integer histograms.
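The delta encoding for integer bucket populations can be sketched as a simple round trip (hypothetical helper names). Monotonically similar neighboring counts yield small deltas, which protobuf's varint-based `sint64` encoding stores in fewer bytes:

```python
def to_deltas(counts):
    """Encode absolute bucket populations as deltas (first is relative to 0)."""
    prev, out = 0, []
    for c in counts:
        out.append(c - prev)
        prev = c
    return out

def from_deltas(deltas):
    """Decode deltas back into absolute bucket populations."""
    cur, counts = 0, []
    for d in deltas:
        cur += d
        counts.append(cur)
    return counts

counts = [1000, 1005, 1003, 1003]
assert to_deltas(counts) == [1000, 5, -2, 0]  # small numbers after the first
assert from_deltas(to_deltas(counts)) == counts
```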
  The latter leads to smaller numbers, which encode to a smaller message size because protobuf uses varint encoding for the `sint64` type.
- A native histogram that has received no observations yet and a classic histogram that has no buckets configured would look exactly the same as a protobuf message. Therefore, a `Histogram` message that is meant to be parsed as a native histogram MUST contain a “no-op span”, i.e. a `BucketSpan` with `offset` and `length` set to 0, in the repeated `positive_span` field.
- Any number of exemplars for native histograms MAY be added in the repeated `Exemplar` field of the `Histogram` message, but each one MUST have a timestamp. If there are no exemplars provided in this way, a parser MAY use timestamped exemplars provided for classic buckets (as at most one exemplar per bucket in the `Exemplar` field of the `Bucket` message).
- The number and distribution of native histogram exemplars SHOULD fit the use case at hand. Generally, the exemplar payload SHOULD NOT be much larger than the remaining part of the `Histogram` message, and the exemplars SHOULD fall into different buckets and cover the whole spread of buckets approximately evenly. (This is generally preferred over an exemplar distribution that proportionally represents the distribution of observations, as the latter will rarely yield exemplars from the long tail of a distribution, which are often the most interesting exemplars to look at.)
- There is no representation for the custom values needed for NHCBs. NHCBs are never directly exposed, but presented as classic histograms, to be converted (back) to NHCB upon ingestion. This is also true for [federation](#federation). We might still add fields for the custom values in the future, should the need arise, e.g. for future schemas that also utilize custom values.

### OpenMetrics

Currently (2024-11-03), OpenMetrics does not support native histograms. Adding support to the protobuf version of OpenMetrics is relatively straightforward due to its similarity to the classic Prometheus protobuf format. A [proposal in the form of a PR](https://github.com/OpenObservability/OpenMetrics/pull/256) is under review.

Adding support to the text version of OpenMetrics is harder, but also highly desirable because there are many situations where the generation of protobuf is infeasible. A text format has to make a trade-off between readability for humans and efficient handling by machines (encoding, transport, decoding). Work on it is in progress. See the [design doc](https://github.com/prometheus/proposals/blob/main/proposals/2024-01-29_native_histograms_text_format.md) for more details. (TODO: Update section as progress is made.)

## Instrumentation libraries

The [protobuf specification](#classic-prometheus-formats) enables low-level creation of metrics exposition including native histograms using the language-specific bindings created by the protobuf compiler. However, for direct code instrumentation, an instrumentation library is needed.

Currently (2024-11-03), there are two official Prometheus instrumentation libraries supporting native histograms:

- Go: [source](https://github.com/prometheus/client_golang) – [documentation](https://pkg.go.dev/github.com/prometheus/client_golang/prometheus)
- Java: [source](https://github.com/prometheus/client_java) – [documentation](https://prometheus.github.io/client_java/)

Adding native histogram support to other instrumentation libraries is relatively easy if the library already supports protobuf exposition. For purely text-based libraries, the completion of a [text-based exposition format](#openmetrics) is a prerequisite. (TODO: Update this as needed.)

This section does not cover details of how to use individual instrumentation libraries (see the documentation linked above for that) but focuses on the common usage patterns and also provides general guidelines on how to implement native histogram support as part of an instrumentation library.
an instrumentation library. The already existing [Go implementation](https://github.com/prometheus/client_golang) is used for examples.

The sections about the [data model](#data-model) and the [exposition formats](#exposition-formats) are highly relevant for the implementation of instrumentation libraries (but not restated in this section!).

The actual instrumentation API for histograms does not change for native histograms. Both classic histograms and native histograms receive observations in the same way (with subtle differences concerning exemplars, see next paragraph). Instrumentation libraries can even maintain a classic and a native version of the same histogram and expose them in parallel so that the scraper can choose which version to ingest (see the section about [exposition formats](#exposition-formats) for details). The user chooses whether to expose classic and/or native histograms via configuration settings.

Exemplars for classic histograms are usually tracked by storing and exposing the most recent exemplar for each bucket. As long as classic buckets are defined, an instrumentation library MAY expose the same exemplars for the native version of the same histogram, as long as each exemplar has a timestamp. (In fact, a scraper MAY use the exemplars provided with the classic version of the histogram even if it is otherwise only ingesting the native version, see details in the [exposition formats](#exposition-formats) section.) However, a native histogram MAY be assigned any number of exemplars, and an instrumentation library SHOULD use this liberty to meet the best practices for exemplars as described in the [exposition formats](#exposition-formats) section.

An instrumentation library SHOULD offer the following configuration parameters for native histograms following standard schemas. Names are examples from the Go library – they have to be adjusted to the idiomatic style in other languages. The value in parentheses is the default value that the library SHOULD offer.

- `NativeHistogramBucketFactor` (1.1): A float greater than one to determine the initial resolution. The library picks a starting schema that results in a growth of the bucket width from one bucket to the next by a factor not larger than the provided value. See the table below for example values.
- `NativeHistogramZeroThreshold` (2^-128): A float of value zero or greater to set the initial threshold for the zero bucket.

The resolution is set via a growth factor rather than providing the schema directly because most users will not know the mathematics behind the schema numbers. The notion of an upper limit for the growth factor from bucket to bucket is understandable without knowing about the internal workings of native histograms. The following table lists an example factor for each valid schema.

| `NativeHistogramBucketFactor` | resulting schema |
|-------------------------------|------------------|
| 65536 | -4 |
| 256 | -3 |
| 16 | -2 |
| 4 | -1 |
| 2 | 0 |
| 1.5 | 1 |
| 1.2 | 2 |
| 1.1 | 3 |
| 1.05 | 4 |
| 1.03 | 5 |
| 1.02 | 6 |
| 1.01 | 7 |
| 1.005 | 8 |

### Limiting the bucket count

Buckets of native histograms are created dynamically when they are populated for the first time. An unexpectedly broad distribution of observed values can lead to an unexpectedly high number of buckets, requiring more memory than anticipated.
If the distribution of observed values can be manipulated from the outside, this could even be used as a DoS attack vector via exhausting
all the memory available to the program. Therefore, an instrumentation library SHOULD offer a bucket limitation strategy. It MAY set one by default, depending on the typical use cases the library is used for. (TODO: Maybe we should say that a strategy SHOULD be set by default. The Go library is currently not limiting the buckets by default, and no issues have been reported with that so far.)

The following describes the bucket limitation strategy implemented by the Go instrumentation library. Other libraries MAY follow this example, but other strategies might be feasible as well, depending on the typical usage pattern of the library.

The strategy is defined by three parameters: an unsigned integer `NativeHistogramMaxBucketNumber`, a duration `NativeHistogramMinResetDuration`, and a float `NativeHistogramMaxZeroThreshold`. If `NativeHistogramMaxBucketNumber` is zero (which is the default), buckets are not limited at all, and the other two parameters are ignored.

If `NativeHistogramMaxBucketNumber` is set to a positive value, the library attempts to keep the bucket count of each histogram within the provided value. A typical value for the limit is 160, which is also the default value used by OTel exponential histograms in a similar strategy. (Note that partitioning by labels will create a number of histograms. The limit applies to each of them individually, not to all of them in aggregate.) If the limit would be exceeded, a number of remedies are applied in order until the number of buckets is within the limit again:

1. If at least `NativeHistogramMinResetDuration` has passed since the last reset of the histogram (which includes the creation of the histogram), the whole histogram is reset, i.e. all buckets are deleted and the sum and count of observations as well as the zero bucket are set to zero. Prometheus handles this as a normal counter reset, which means that some observations will be lost between scrapes, so resetting should happen rarely compared to the scraping interval. Additionally, frequent counter resets might lead to less efficient storage in the TSDB (see the [TSDB section](#tsdb) for details). A `NativeHistogramMinResetDuration` of one hour is a value that should work well in most situations.
2. If not enough time has passed since the last reset (or if `NativeHistogramMinResetDuration` is set to zero, which is the default value), no reset is performed. Instead, the zero threshold is increased to merge buckets close to zero into the zero bucket, reducing the number of buckets in that way. The increase of the threshold is limited by `NativeHistogramMaxZeroThreshold`. If this value is already reached (or it is set to zero, which is the default), nothing happens in this step.
3. If the number of buckets still exceeds the limit, the resolution of the histogram is reduced by converting it to the next lower schema, i.e. by merging neighboring buckets, thereby doubling the width of the buckets. This is repeated until the bucket count is within the configured limit or schema -4 is reached.

If step 2 or 3 has changed the histogram, a reset will be performed once `NativeHistogramMinResetDuration` has passed since the last reset, not only to remove the buckets but also to return to the initial values for the zero threshold and the bucket resolution. Note that this is treated like a reset for other reasons in all aspects, including updating the so-called [created timestamp](#created-timestamp-handling).
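
The three remedies above can be sketched as follows. This is an illustrative, heavily simplified model, not the client_golang implementation: the `LimitedHistogram` class and its fields are hypothetical, only positive buckets are modeled, and the zero-threshold widening step (simple doubling) is a stand-in for the real strategy.

```python
import time

# Hypothetical, simplified model of a bucket-limited native histogram.
class LimitedHistogram:
    def __init__(self, max_buckets, min_reset_duration, max_zero_threshold):
        self.max_buckets = max_buckets        # NativeHistogramMaxBucketNumber
        self.min_reset = min_reset_duration   # NativeHistogramMinResetDuration (seconds)
        self.max_zero = max_zero_threshold    # NativeHistogramMaxZeroThreshold
        self.buckets = {}                     # positive bucket index -> count
        self.zero_threshold = 2.0 ** -128
        self.zero_count = 0
        self.schema = 3
        self.last_reset = time.monotonic()

    def enforce_limit(self, now):
        if self.max_buckets == 0 or len(self.buckets) <= self.max_buckets:
            return "ok"
        # Remedy 1: full reset, if enough time has passed since the last one.
        if self.min_reset > 0 and now - self.last_reset >= self.min_reset:
            self.buckets.clear()
            self.zero_count = 0
            self.last_reset = now
            return "reset"
        # Remedy 2: widen the zero bucket (up to the configured maximum) so
        # that buckets close to zero merge into it. (Actually moving the
        # affected bucket counts into zero_count is omitted in this sketch.)
        if self.zero_threshold < self.max_zero:
            self.zero_threshold = min(self.zero_threshold * 2, self.max_zero)
            return "widened-zero-bucket"
        # Remedy 3: halve the resolution; decreasing the schema by one merges
        # neighboring bucket pairs (old indices 2j-1 and 2j become index j).
        # Repeat until within the limit or schema -4 is reached.
        while len(self.buckets) > self.max_buckets and self.schema > -4:
            merged = {}
            for i, c in self.buckets.items():
                merged[(i + 1) // 2] = merged.get((i + 1) // 2, 0) + c
            self.buckets = merged
            self.schema -= 1
        return "reduced-resolution"
```
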
It is
tempting to set a very low `NativeHistogramBucketFactor` (e.g. 1.005) together with a reasonable `NativeHistogramMaxBucketNumber` (e.g. 160). In this way, each histogram always has the highest possible resolution that is affordable within the given bucket count “budget”. (This is the default strategy used by the OTel exponential histogram. It starts with an even higher schema (20), which is currently not even available in Prometheus native histograms.) However, this strategy is generally _not_ recommended for the Prometheus use case. The resolution will be reduced quite often after creation and after each reset as observations come in. This creates churn both in the instrumented program as well as in the TSDB, which is particularly problematic for the latter. All of this effort is mostly in vain because the typical queries involving histograms require many histograms to be merged, during which the lowest common resolution is used, so that the user ends up with a lower resolution anyway. The TSDB can be protected against the churn by limiting the resolution upon ingestion (see [below](#limit-bucket-count-and-resolution)), but if a reasonably low resolution will be enforced upon ingestion anyway, it is more straightforward to set this resolution during instrumentation already. However, this strategy might be worth the resource overhead within the instrumented program in specific cases where a reasonable resolution cannot be assumed at instrumentation time, and the scraper should have the flexibility to pick the desired resolution at scrape time.
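
The mapping between a configured bucket factor and the resulting schema (see the table further above) follows from schema *s* producing a bucket growth factor of 2^(2^-s): the library picks the highest-resolution schema whose factor does not exceed the configured bound. A sketch of that computation, clamped to the currently valid range [-4, 8] (illustrative, not the client_golang code):

```python
import math

def schema_for_bucket_factor(factor: float) -> int:
    """Smallest bucket growth not larger than `factor`, i.e. the
    highest schema s with 2**(2**-s) <= factor, clamped to [-4, 8]."""
    if factor <= 1.0:
        raise ValueError("bucket factor must be greater than 1")
    s = math.ceil(-math.log2(math.log2(factor)))
    return max(-4, min(8, s))
```
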
### Partitioning by labels

While partitioning of a classic histogram with many buckets by labels has to be done judiciously, the situation is more relaxed with native histograms. Partitioning a native histogram still creates a multiplicity of individual histograms. However, the resulting partitioned histograms will often populate fewer buckets each than the original unpartitioned histogram. (For example, if a histogram tracking the duration of HTTP requests is partitioned by HTTP status code, the individual histogram tracking requests responded to with status code 404 might have a very sharp bucket distribution around the typical duration it takes to identify an unknown path, populating only a few buckets.) The total number of populated buckets for all partitioned histograms will still go up, but by a smaller factor than the number of partitioned histograms. (For example, if adding labels to an already quite heavy classic histogram results in 100 labeled histograms, the total cost will go up by a factor of 100. In the case of a native histogram, the cost for the single histogram might already be lower if the classic histogram featured a high resolution. After partitioning, the total number of populated buckets in the labeled native histograms will be significantly smaller than 100 times the number of buckets in the original native histogram.)

### NHCB

Currently (2024-11-03), instrumentation libraries offer no way to directly configure native histograms with custom bucket boundaries (NHCBs). The use case for NHCBs is to allow native-histogram enabled scrapers to convert classic histograms to NHCBs upon ingestion (see [next section](#scrape-configuration)). However, there are valid use cases where custom buckets are desirable directly during instrumentation. In those cases, the current approach is to instrument with a classic histogram and configure the scraper to convert it to an NHCB upon ingestion. However, a more direct treatment of NHCBs in instrumentation libraries might happen in the future.

## Scrape configuration

To enable the Prometheus server to scrape
native histograms, set `scrape_native_histograms: true` in individual scrape configs, or in the global settings. Enabling `scrape_native_histograms` also changes the content negotiation to prefer the classic protobuf-based exposition format over the OpenMetrics 1.x text format.

### Fine-tuning content negotiation

It is possible to fine-tune the scrape protocol negotiation globally or per scrape config via the `scrape_protocols` config setting. It is a list defining the content negotiation priorities. Its value depends on what feature flags are enabled (for example `--enable-feature=created-timestamp-zero-ingestion`), what value the user sets in it directly, and lastly whether `scrape_native_histograms` is enabled.

If `scrape_native_histograms` is enabled and `scrape_protocols` is not set by a feature flag or by the user, globally or per scrape config, then its effective value for a scrape config is changed to `[ PrometheusProto, OpenMetricsText1.0.0, OpenMetricsText0.0.1, PrometheusText0.0.4 ]` to enable scraping native histograms.

The `scrape_protocols` setting can be used to configure protobuf scrapes without ingesting native histograms, or to enforce a non-protobuf format for certain targets even with `scrape_native_histograms` enabled. As long as the classic Prometheus protobuf format (`PrometheusProto` in the configured list) is the only format supporting native histograms, both `scrape_native_histograms` and negotiation of protobuf are required to actually ingest native histograms.

NOTE: Switching the used exposition format between text-based and protobuf-based has some non-obvious implications. Most importantly, certain implementation details result in the counter-intuitive effect that scraping with a text-based format is generally much less resource demanding than scraping with a protobuf-based format (see the [tracking issue](https://github.com/prometheus/prometheus/issues/14668) for details). Even more subtle is the effect on the formatting of label values for `quantile` labels (used in summaries) and `le` labels (used in classic histograms). This problem only affects v2 of the Prometheus server (v3 has consistent formatting under all circumstances) and is not directly related to native histograms, but might show up in the same context because enabling native histograms requires the protobuf exposition format. See details in the [documentation for the `native-histograms` feature flag](https://prometheus.io/docs/prometheus/2.55/feature_flags/#native-histograms) for v2.55.

### Limiting bucket count and resolution

While [instrumentation libraries](#instrumentation-libraries) SHOULD offer configuration options to limit the resolution and bucket count of a native histogram, there is still a need to enforce those limits upon ingestion. Users might be unable to change the instrumentation of a given program, or a program might be deliberately instrumented with high-resolution histograms to give different scrapers the option to reduce the resolution as they see fit. The Prometheus scrape config offers two settings to address this need:

1. The `native_histogram_bucket_limit` sets an upper inclusive limit for the number of buckets in an individual histogram. If the limit is exceeded, the resolution of a histogram with a standard schema is repeatedly reduced (by doubling the width of the buckets, i.e. decreasing the schema) until the limit is reached. In case an NHCB exceeds the limit, or in the rare case that the limit cannot be satisfied even with schema -4, the scrape fails.
2. The `native_histogram_min_bucket_factor` sets a lower inclusive limit for the growth factor from bucket to bucket. This setting is only relevant for standard schemas and has no effect on NHCBs. Again, if the limit is exceeded, the resolution of the histogram is repeatedly reduced (by doubling the width of the buckets, i.e. decreasing the schema) until the limit is reached. However, once schema -4 is reached, the scrape will still succeed, even if a higher growth
factor has been specified.

Both settings accept zero as a valid value, which implies “no limit”. In the case of the bucket limit, this means that the number of buckets is indeed not checked at all. In the case of the bucket factor, Prometheus will still ensure that a standard schema will not exceed the capabilities of the used storage backend. Prometheus currently stores histograms with standard exponential schemas of at most 8. However, it accepts exponential schemas greater than 8 up to the [reserved limit of 52](#schema) but reduces their resolution upon ingestion so that schema 8 is reached (or a lower one if required by the `native_histogram_bucket_limit` or `native_histogram_min_bucket_factor` settings). If both settings have non-zero values, the schema is decreased sufficiently to satisfy both limits.

Note that the bucket factor set during [instrumentation](#instrumentation-libraries) is an upper limit (exposed bucket growth factor ≤ configured value), while the bucket factor set in the scrape config is a lower limit (ingested bucket growth factor ≥ configured value). The schemas resulting from certain limits are therefore slightly different. Some examples:

| `native_histogram_min_bucket_factor` | resulting max schema |
|--------------------------------------|----------------------|
| 65536 | -4 |
| 256 | -3 |
| 16 | -2 |
| 4 | -1 |
| 2 | 0 |
| 1.4 | 1 |
| 1.1 | 2 |
| 1.09 | 3 |
| 1.04 | 4 |
| 1.02 | 5 |
| 1.01 | 6 |
| 1.005 | 7 |
| 1.002 | 8 |

General considerations about setting the limits: `native_histogram_bucket_limit` is suitable to set a hard limit for the cost of an individual histogram. The same cannot be accomplished by `native_histogram_min_bucket_factor` because histograms can have many buckets even with a low resolution if the distribution of observations is sufficiently broad. `native_histogram_min_bucket_factor` is well suited to avoid needless overall resource costs. For example, if the use case at hand only requires a certain resolution, setting a corresponding `native_histogram_min_bucket_factor` for all histograms might free up enough resources to accept a very high bucket count on a few histograms with broad distributions of observed values. Another example is the case where some histograms have low resolution for some reason (maybe already on the instrumentation side). If aggregations regularly include those low-resolution histograms, the outcome will have that same low resolution (see the [PromQL details below](#compatibility-between-histograms)). Storing other histograms that are regularly aggregated with the low-resolution histograms at higher resolution might not be of much use.

### Scraping both classic and native histograms

As described [above](#exposition-formats), a histogram exposed by an instrumented program might contain both a classic and a native histogram, and some parts are even shared (like the count and sum of observations). This section explains which parts will be scraped by Prometheus, and how to control the behavior.

If `scrape_native_histograms` is `false` (default in v3) in the scrape config, Prometheus will completely ignore the native histogram parts during scraping. If `scrape_native_histograms` is `true` (default in v4+), Prometheus will prefer the native histogram parts over the classic histogram parts, even if both are exposed for the same histogram. Prometheus will still scrape the classic histogram parts for histograms with no native histogram data.
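
Putting the settings discussed so far together, a scrape config might look as follows. This is a hedged sketch: the job name, target, and concrete limit values are illustrative placeholders, not recommendations.

```yaml
scrape_configs:
  - job_name: "example-app"          # hypothetical job
    # Negotiate protobuf and ingest native histograms.
    scrape_native_histograms: true
    # Hard cap on the per-histogram bucket count; resolution is reduced
    # (or the scrape fails, e.g. for NHCBs) if a histogram exceeds it.
    native_histogram_bucket_limit: 160
    # Lower bound on the bucket growth factor; 1.09 corresponds to a
    # maximum schema of 3 (see the table above).
    native_histogram_min_bucket_factor: 1.09
    static_configs:
      - targets: ["localhost:8080"]  # hypothetical target
```
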
In situations like [migration scenarios](#migration-considerations), it might be desired to scrape both versions, classic and native, for the same histogram, provided
both versions are exposed by the instrumented program. To enable this behavior, there is a boolean setting `always_scrape_classic_histograms` in the scrape config. It defaults to false, but if set to true, both versions of each histogram will be scraped and ingested, provided there is at least one classic bucket and at least one native bucket span (which might be a no-op span). This will not cause any conflicts in the TSDB because classic histograms are ingested as a number of suffixed series, while native histograms are ingested as just one series with their unmodified name. (Example: A histogram called `rpc_latency_seconds` results in a native histogram series named `rpc_latency_seconds` and in a number of series for the classic part, namely `rpc_latency_seconds_sum`, `rpc_latency_seconds_count`, and a number of `rpc_latency_seconds_bucket` series with different `le` labels.)

### Scraping classic histograms as NHCBs

The aforementioned NHCB is capable of modeling a classic histogram as a native histogram. Via the boolean scrape config option `convert_classic_histograms_to_nhcb`, Prometheus can be configured to ingest classic histograms as NHCBs. While NHCBs support [automatic reconciliation between different bucket layouts](#compatibility-between-histograms), their mergeability is still fundamentally limited. The reconciliation only retains exact matches of bucket boundaries between the involved NHCBs. This yields useful results if most bucket boundaries match. However, arbitrary changes in the bucket layout can easily create a situation where none of the boundaries match, resulting in a histogram with only one bucket (the overflow bucket).

A key advantage of NHCBs is that they are generally much less expensive to store. In particular, the incremental cost of adding additional buckets is relatively low, which allows affordable ingestion of classic histograms with many buckets.

## TSDB

NOTE: This section provides a high-level overview of storing native histograms in the TSDB and also explains some important individual aspects that might be easy to miss. It is not meant to explain implementation details, define on-disk formats, or guide through the code base. There is a [detailed documentation of the various storage formats](https://github.com/prometheus/prometheus/tree/main/tsdb/docs/format) and of course the usual generated GoDoc, with the [tsdb package](https://pkg.go.dev/github.com/prometheus/prometheus/tsdb) and the [storage package](https://pkg.go.dev/github.com/prometheus/prometheus/storage) as suitable starting points. A helpful resource is also the aforementioned [Developer’s Guide to Prometheus Native Histograms](https://docs.google.com/document/d/1VhtB_cGnuO2q_zqEMgtoaLDvJ_kFSXRXoE0Wo74JlSY/edit).

### Integer histograms vs. float histograms

The TSDB stores integer histograms and float histograms differently. Generally, integer histograms are expected to compress better, so a TSDB implementation MAY store a float histogram as an integer histogram if all bucket counts and the count of observations have an integer value within the int64 range, so that the conversion to an integer histogram creates a numerically precise representation of the original float histogram. (Note that the Prometheus TSDB is not utilizing this option yet.)
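
The lossless-conversion condition described above can be sketched as follows (an illustrative helper, not part of any Prometheus codebase):

```python
# A float histogram may be stored as an integer histogram only if every
# bucket count, the zero bucket count, and the total observation count are
# integral values that fit into int64.
INT64_MIN, INT64_MAX = -(2 ** 63), 2 ** 63 - 1

def storable_as_integer(count: float, zero_count: float, buckets: list) -> bool:
    for v in [count, zero_count, *buckets]:
        if not (float(v).is_integer() and INT64_MIN <= v <= INT64_MAX):
            return False  # fractional value or out of int64 range
    return True
```
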
### Encoding

Native histograms require two new chunk encodings (Go type `chunkenc.Encoding`) in the TSDB: `chunkenc.EncHistogram` (string representation `histogram`, numerical value 2) for integer histograms, and `chunkenc.EncFloatHistogram` (string representation `floathistogram`, numerical value 3) for float histograms. Similarly, there are two new record types for the WAL and the in-memory snapshot (Go type `record.Type`): `record.HistogramSamples` (string representation `histogram_samples`, numerical value 9) for integer histograms, and `record.FloatHistogramSamples` (string representation `float_histogram_samples`, numerical value 10) for float histograms.

For backwards compatibility reasons, there are two more histogram record types: `record.HistogramSamplesLegacy` (`histogram_samples_legacy`, 7) and `record.FloatHistogramSamplesLegacy` (`float_histogram_samples_legacy`, 8). They were used prior to the introduction of custom values needed for NHCB. They are supported
so that reading old WALs is still possible.

Prometheus identifies time series just by their labels. Whether a sample in a series is a float (and as such a counter or a gauge) or a histogram (no matter what flavor) does not contribute to the series's identity. Therefore, a series MAY contain a mix of samples of different types and flavors. Changes of the sample type within a time series are expected to be very rare in practice. They usually happen after changes in the instrumentation of a target (in the rare case that the same metric name is used for e.g. a gauge float prior to the change and a counter histogram after the change) or after a change of a recording rule (e.g. where the old version of a rule created a gauge float and the new version of the rule now creates a gauge histogram while retaining its name). Frequent changes of the sample type are usually the consequence of a misconfiguration (e.g. two different recording rules creating different sample types feeding into the same series). Therefore, a TSDB implementation MUST handle a change in sample type, but it MAY do so in a relatively inefficient way. When the Prometheus TSDB encounters a sample type that cannot be written to the currently used chunk, it closes that chunk and starts a new one with the appropriate encoding. (A time series that switches sample types back and forth for each sample will lead to a new chunk for each sample, which is indeed very inefficient.)
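
The chunk-cutting behavior on a sample type change can be sketched as follows. Sample types are simplified to plain strings here; the real TSDB works with `chunkenc` encodings, so this is purely illustrative.

```python
def cut_chunks(samples):
    """samples: list of (timestamp, sample_type) tuples -> list of chunks.

    A new chunk is cut whenever the incoming sample's type does not match
    the open chunk's type, mirroring the behavior described above."""
    chunks = []
    for ts, typ in samples:
        if not chunks or chunks[-1]["type"] != typ:
            chunks.append({"type": typ, "timestamps": []})  # cut a new chunk
        chunks[-1]["timestamps"].append(ts)
    return chunks
```

A series alternating types for every sample produces one chunk per sample, which illustrates why frequent type changes are handled but inefficient.
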
Histogram chunks use a number of custom encodings for numerical values, in order to reduce the data size by encoding common values in fewer bits than less common values. The details of each custom encoding are described in the [low level chunk format documentation](https://github.com/prometheus/prometheus/blob/main/tsdb/docs/format/chunks.md) (and ultimately in the code linked from there). The following three encodings are used for a number of different fields and are therefore named here for later reference:

- _varbit-int_ is a variable bitwidth encoding for signed integers. It uses between 1 bit and 9 bytes. Numbers closer to zero need fewer bits. This is similar to the timestamp encoding in chunks for float samples, but with a different bucketing of the various bit lengths, optimized for the value distribution commonly encountered in native histograms.
- _varbit-uint_ is a similar encoding, but for unsigned integers.
- _varbit-xor_ is a variable bitwidth encoding for a sequence of floats. It is based on XOR'ing the current and the previous float value in the sequence. It uses between 1 bit and 77 bits per float. This is exactly the same encoding the TSDB already uses for float samples.

Histogram chunks start as usual with the number of samples in the chunk (as a uint16), followed by one byte describing whether the histogram is a gauge histogram or a counter histogram and providing counter reset information for the latter. See the [corresponding section](#counter-reset-considerations) below for details. This is followed by the so-called chunk layout, which contains the following information, _shared by all histograms in the chunk_:

- The threshold of the zero bucket, using a custom encoding that encodes common values (zero or certain powers of two) in just one byte, but requires 9 bytes for arbitrary values.
- The schema, encoded
as varbit-int.
- The positive spans, encoded as the number of spans (varbit-uint), followed by the length (varbit-uint) and the offset (varbit-int) of each span in a repeated sequence.
- The negative spans in the same way.
- Only for schema -53 (NHCB): the custom values, encoded as the number of custom values (varbit-uint), followed by the custom values in a repeated sequence, using a custom encoding.

The chunk layout is followed by a repeated sequence of sample data. The sample data is different for integer histograms and float histograms. For an integer histogram, the data of each sample contains the following:

- The timestamp, encoded as varbit-int, with an absolute value in the 1st sample, a delta between the 1st and 2nd sample for the 2nd sample, and a “delta of deltas” for any further samples (i.e. the same “double delta” encoding used for timestamps in conventional float chunks, just with a different bit bucketing for the varbit-int encoding).
- The count of observations, encoded as varbit-uint for the 1st sample and as varbit-int for any further samples, using the same “delta of deltas” approach as for timestamps.
- The zero bucket population, encoded as varbit-uint for the 1st sample and as varbit-int for any further samples, using the same “delta of deltas” approach as for timestamps.
- The sum of observations, encoded as a float64 for the 1st sample and as varbit-xor for any further samples (XOR'ing between the current and previous sample).
- The bucket populations of the positive buckets, each as a delta to the previous bucket (or as the absolute population in the 1st bucket), encoded as varbit-int, using the same “delta of deltas” approach as for timestamps. (In other words, the “double delta” encoding is applied to values that are already deltas on their own, which is the reason why this is sometimes called “triple delta“ encoding.)
- The bucket populations of the negative buckets in the same way.

The sample data of a float histogram has the following differences:

- The count of observations and the zero bucket population are floats now and therefore encoded in the same way as the sum of observations (float64 in the 1st sample, varbit-xor for any further samples).
- The bucket populations are not only floats now, but also absolute population counts rather than deltas between buckets. In the 1st sample, all bucket populations are represented as plain float64's, while they are encoded as varbit-xor for all further samples, XOR'ing corresponding buckets from the current and the previous sample.

The following events trigger cutting a new chunk (for the reasons described in parentheses):

- A change of sample type between integer histogram and float histogram (because both require different chunk encodings altogether).
- A change of sample type between gauge histogram and counter histogram (because the leading byte has to denote the different type).
- A counter reset for a counter histogram (to be stored in the leading byte as counter reset information, see details below).
- A schema change (which means we need a new chunk layout, and a chunk can only have one chunk layout).
- A change of the zero threshold (which changes the chunk layout, see above).
- A change of the custom values (which changes the chunk layout, see above).
- A staleness marker is followed by a regular sample (which does not strictly require a new chunk, but it can be assumed that most histograms will change so much when they go away and come back that cutting a new chunk is the best option).
- The chunk size limit is exceeded (see [details below](#chunk-size-limit)).

Differences in the spans would also change the chunk layout, but they are reconciled by adding (explicitly represented) unpopulated buckets as needed so that all histograms in a chunk share the same span structure. This is straightforward if a bucket disappears, because the missing bucket is simply added to the new histogram as an unpopulated bucket while the histogram is appended to the chunk. However, disappearance of a formerly populated bucket constitutes a counter reset (see [below](#counter-reset-considerations)), so this case can only happen for gauge histograms (which do not feature counter resets). The far more common case is that buckets exist in a newly appended histogram that did not exist in the previously appended histograms. In this case, these buckets have to be added as explicitly unpopulated buckets to all previously appended histograms. This requires a complete re-encoding of the entire chunk. (There is some optimization potential in only re-encoding the affected parts. Implementing this would be quite complicated. So far, the performance impact of the full re-encoding did not stick out as problematic.)

### Staleness markers

NOTE: To understand the following section, it is important to recall how staleness markers work in the TSDB.

Staleness markers in float series are represented by one specific bit pattern among the many that can be used to represent the `NaN` value. This very specific float value is called “special stale `NaN` value” in the following section. It is (almost certainly) never returned by the usual arithmetic float operations and as such different from a “naturally occurring” `NaN` value, including those discussed in [Special cases of observed values](#special-cases-of-observed-values). In fact, the special stale `NaN` value is never returned directly when querying the TSDB, but it is handled internally before it reaches the caller.

To mark staleness in histogram series, the usual special stale `NaN` value could be used. However, this would require cutting a new chunk, just for the purpose of marking the series as stale, because a float value following a histogram value has to be stored in a different chunk (see above). Therefore, there is also a histogram version of a stale marker where the field for the sum of observations is set to the special stale `NaN` value. In this case, all other fields are ignored, which enables setting them to values suitable for efficient storage (as the histogram version of a stale marker is essentially just a storage optimization). This works for both float and integer histograms (as the sum field is a float value even in an integer histogram), and the appropriate version can be used to avoid cutting a new chunk. All versions of a stale marker (float, integer histogram, float histogram) MUST be treated as equivalent by the TSDB.
### Chunk size limit

The size of float chunks is limited to 1024 bytes. The same size limitation is generally used for histogram chunks, too. However, individual histograms can become very large if they have many buckets, so blindly enforcing the size limit could lead to chunks with very few histograms. (In the most extreme case, a single histogram could even take more than 1024 bytes so that the size limit could not be enforced at all.) With very few histograms per chunk, the compression ratio becomes worse. Therefore, a minimum number of 10 histograms per chunk has to be reached before the size limit of 1024 bytes kicks in. This implies that histogram chunks can be much larger than 1024 bytes. Requiring a minimum of 10 histograms per chunk is an initial, very simplistic approach, which might be improved in the future to find a better trade-off between chunk size and compression ratio.

### Counter reset considerations

Generally, Prometheus considers a counter to have reset whenever its value drops from one sample to the next (but see also the [next section about the created timestamp](#created-timestamp-handling)). The situation is more complex when detecting a counter reset between two histogram samples.

First of all, gauge histograms and counter histograms are explicitly different (whereas Prometheus generally treats all float samples equally after ingestion, no matter if they were ingested as a gauge or a counter metric). Counter resets do not apply to gauge histograms. If a gauge histogram is followed by a counter histogram in a time series, a counter reset is assumed to have happened, because a change from gauge to counter is considered equivalent to the gauge being deleted and the counter being newly created from zero.

The most common case is a counter histogram being followed by another counter histogram. In this case, a possible counter reset is detected by the following procedure:

If the two histograms are both using a standard schema, but differ in schema or in the zero bucket width, these changes could be part of a compatible resolution reduction (which happens regularly to [reduce the bucket count of a histogram](#limit-the-bucket-count)). Both of the following are true for a compatible resolution reduction:

- If the schema has changed, its number has decreased from one standard exponential schema to another standard schema.
- If the zero bucket width has changed, any populated regular bucket in the first histogram is either completely included in the zero bucket of the second histogram or not at all (i.e. no partial overlap of old regular buckets with the new zero bucket).

If any of the conditions are not met, the change is not a compatible resolution reduction. Because such a change is only possible by resetting or newly creating a histogram, it is considered a counter reset and the detection procedure is concluded. If both conditions are met, the first histogram has to be converted so that its schema and zero bucket width matches those of the second histogram. This happens in the same way as [previously described](#limit-the-bucket-count): Neighboring buckets are merged to reduce the schema, and regular buckets are merged with the zero bucket to increase the width of the zero bucket. If both histograms are NHCBs (schema -53), any difference in their custom values is reconciled as described [below](#compatibility-between-histograms).
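The schema-reduction step of this conversion can be sketched as follows. Decreasing a standard exponential schema by one halves the resolution, so the bucket pair (2j-1, 2j) of the higher schema lands in bucket j of the lower schema. The sparse map representation and function names below are illustrative, not the actual Prometheus code:

```go
package main

import "fmt"

// ceilDiv2 computes ceil(i/2) for signed bucket indices. Go's integer
// division truncates toward zero, which already equals ceil for i <= 0.
func ceilDiv2(i int) int {
	if i > 0 {
		return (i + 1) / 2
	}
	return i / 2
}

// reduceSchemaByOne merges neighboring buckets of a sparse bucket map
// (index -> population) so that the result uses one schema step less,
// i.e. half the resolution: old buckets 2j-1 and 2j land in new bucket j.
func reduceSchemaByOne(buckets map[int]uint64) map[int]uint64 {
	out := make(map[int]uint64, len(buckets))
	for idx, count := range buckets {
		out[ceilDiv2(idx)] += count
	}
	return out
}

func main() {
	hi := map[int]uint64{1: 2, 2: 3, 3: 5, 4: 7}
	lo := reduceSchemaByOne(hi)
	fmt.Println(lo[1], lo[2]) // 5 12
}
```

Applying the function n times reduces the schema by n, which is how a higher-resolution histogram can be brought down to the schema of a lower-resolution one before comparing them.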
At this point in the procedure, both histograms have the same schema and zero bucket width, either because this was the case from the beginning, or because one of the histograms was converted accordingly. (Note that NHCBs do not use the zero bucket. Their zero bucket widths and population counts are considered equal for the sake of this procedure.) In this situation, any of the following constitutes a counter reset:

- A drop in the count of observations (but notably _not_ a drop in the sum of observations).
- A drop in the population count of any bucket, including the zero bucket. This includes the case where a populated bucket disappears, because a non-represented bucket is equivalent to a bucket with a population of zero.

If none of the above is the case, there is no counter reset.

As this whole procedure is relatively involved, the counter reset detection preferably happens once during ingestion, with the result being persisted for later use. Counter reset detection during ingestion has to happen anyway because a counter reset is one of the triggers to cut a new chunk. Cutting a new chunk after a counter reset aims to improve the compression ratio. A counter reset sets all bucket populations to zero, so there are fewer buckets to represent. A chunk, however, has to represent the superset of all buckets of all histograms in the chunk, so cutting a new chunk enables a simpler set of buckets for the new chunk. This in turn implies that there will never be a counter reset after the first sample in a chunk. Therefore, the only counter reset information that has to be persisted is that of the 1st histogram in a chunk. This happens in the so-called _histogram flags_, a single byte stored directly after the number of samples in the chunk. This byte is currently only used for the counter reset information, but it may be used for other flags in the future.
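The final drop checks of the detection procedure above can be sketched as follows, assuming both histograms have already been brought to a common schema and zero bucket width. The types and names are illustrative, not the actual Prometheus code:

```go
package main

import "fmt"

// histogram is a simplified stand-in for an integer counter histogram that
// has already been converted to match the schema and zero bucket width of
// the histogram it is compared against.
type histogram struct {
	count     uint64         // total count of observations
	zeroCount uint64         // population of the zero bucket
	buckets   map[int]uint64 // sparse regular buckets: index -> population
}

// counterResetBetween reports whether going from prev to cur constitutes a
// counter reset: a drop in the total count, the zero bucket, or any regular
// bucket population (a bucket missing from cur reads as zero). A drop in
// the sum of observations is deliberately NOT a reset criterion.
func counterResetBetween(prev, cur histogram) bool {
	if cur.count < prev.count || cur.zeroCount < prev.zeroCount {
		return true
	}
	for idx, prevPop := range prev.buckets {
		if cur.buckets[idx] < prevPop { // missing bucket reads as 0
			return true
		}
	}
	return false
}

func main() {
	a := histogram{count: 10, zeroCount: 2, buckets: map[int]uint64{1: 4, 2: 4}}
	b := histogram{count: 12, zeroCount: 2, buckets: map[int]uint64{1: 4, 2: 6}}
	c := histogram{count: 13, zeroCount: 2, buckets: map[int]uint64{2: 7}} // bucket 1 disappeared
	fmt.Println(counterResetBetween(a, b)) // no drop anywhere
	fmt.Println(counterResetBetween(b, c)) // bucket 1 dropped from 4 to 0
}
```

Note how the disappearance of bucket 1 in the last comparison counts as a drop, exactly as described above.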
The counter reset information uses the first two bits. The four possible bit patterns are represented as Go constants of type `CounterResetHeader` in the `chunkenc` package. Their names and meanings are the following:

- `GaugeType` (bit pattern `11`): The chunk contains gauge histograms. Counter resets are irrelevant for gauge histograms.
- `CounterReset` (bit pattern `10`): A counter reset happened between the last histogram of the previous chunk and the 1st histogram of this chunk. (It is likely that the counter reset was actually the reason why the new chunk was cut.)
- `NotCounterReset` (bit pattern `01`): No counter reset happened between the last histogram of the previous chunk and the 1st histogram of this chunk. (This commonly happens if a new chunk is cut because the previous chunk hit the size limit.)
- `UnknownCounterReset` (bit pattern `00`): It is unknown if there was a counter reset between the last histogram of the previous chunk and the 1st histogram of this chunk.

`UnknownCounterReset` is always a safe choice. It does not prevent counter reset detection, but merely requires that the counter reset detection procedure has to be performed (again) whenever counter reset information is needed.

The counter reset information is propagated to the caller when querying the TSDB (in the Go code as a field of type `CounterResetHint` in the Go types `Histogram` and `FloatHistogram`, using enumerated constants with the same names as the bit pattern constants above). For gauge histograms, the `CounterResetHint` is always `GaugeType`. Any other `CounterResetHint` value implies that the histogram in question is a counter histogram.
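A sketch of these constants and of extracting them from the flags byte. The names mirror the `chunkenc` package; placing the two bits in the most significant positions of the byte is an assumption of this sketch, as the exact layout within the byte is an implementation detail:

```go
package main

import "fmt"

// CounterResetHeader models the two counter-reset bits of the histogram
// flags byte, here assumed to be the two most significant bits.
type CounterResetHeader byte

const (
	UnknownCounterReset CounterResetHeader = 0b00000000 // always a safe choice
	NotCounterReset     CounterResetHeader = 0b01000000
	CounterReset        CounterResetHeader = 0b10000000
	GaugeType           CounterResetHeader = 0b11000000

	counterResetMask byte = 0b11000000
)

// headerFromFlags extracts the counter-reset information from the flags
// byte, ignoring any other (future) flag bits.
func headerFromFlags(flags byte) CounterResetHeader {
	return CounterResetHeader(flags & counterResetMask)
}

func main() {
	flags := byte(0b10000001) // counter-reset bits plus an unrelated flag bit
	fmt.Println(headerFromFlags(flags) == CounterReset)
}
```

Masking rather than comparing the whole byte keeps the remaining six bits free for future flags, as the text above anticipates.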
In this way, queriers (including the PromQL engine, see [below](#promql)) obtain the information if a histogram is a gauge or a counter (which is notably different from float samples).

As long as counter histograms are returned in order from a single chunk, the `CounterResetHint` for the 2nd and following histograms in a chunk is set to `NotCounterReset`. (Overlapping blocks and out-of-order ingestion may lead to histogram sequences coming from multiple chunks, which requires special treatment, see below.) When returning the 1st histogram from a counter histogram chunk, the `CounterResetHint` MUST be set to `UnknownCounterReset` _unless_ the TSDB implementation can ensure that the previously returned histogram was indeed the same histogram that was used as the preceding histogram to detect the counter reset at ingestion time. Only in the latter case, the counter reset information from the chunk MAY be used directly as the `CounterResetHint` of the returned histogram. This precaution is needed because there are various ways how chunks might get removed or inserted (e.g. deletion via tombstones or adding blocks for backfilling). A counter reset, while attributed to one sample, is in fact happening _between_ the marked sample and the preceding sample. Removing the preceding sample or inserting another sample in between the two samples invalidates the previously performed counter reset detection.

TODO: Currently, the Prometheus TSDB has no means of ensuring that the preceding chunk is still the same chunk as during ingestion. Therefore, Prometheus currently returns `UnknownCounterReset` for _all_ 1st histograms from a counter histogram chunk. See [tracking issue](https://github.com/prometheus/prometheus/issues/15346) for efforts to change that.

As already implied above, the querier MUST perform the counter reset detection procedure (again) if the `CounterResetHint` is set to `UnknownCounterReset`.

Special caution has to be applied when processing overlapping blocks or out-of-order samples (for querying or during compaction). Both overdetection and underdetection of counter resets may happen in these cases, as illustrated by the following examples:

- _Example for underdetection:_ One chunk contains samples ABC, without counter resets. Another chunk contains samples DEF, again without counter resets. The chunks are overlapping and refer to the same series. When querying them together, the temporal order of samples turns out to be ADBECF. There might now very well be a counter reset between some or even all of those samples. This is in fact likely if the two chunks are actually from unrelated series and got merged into the same series by accident. However, even accidental merges like this have to be handled correctly by the TSDB. If the overlapping chunks are compacted into a new chunk, a new counter reset detection has to happen, catching the new counter resets. If querying the overlapping chunks directly (without prior compaction), a `CounterResetHint` of `UnknownCounterReset` has to be set for each sample that comes from a different chunk than the previously returned sample, which mandates a counter reset detection by the querier (utilizing the safe fallback described above).
- _Example for overdetection:_ There is a sequence of samples ABCD with a counter reset happening between B and C. However, the initial ingestion missed B and C so that only A and D were ingested, with a counter reset detected between A and D.
  Later, B and C are ingested (via out-of-order ingestion or as separate chunks later added to the TSDB as a separate block), with a counter reset detected between B and C. In this case, each sample goes into its own chunk, so when assembling all the chunks, they do not even overlap. However, when returning the counter reset hints according to the rules above, both C and D will be returned to the querier with a `CounterResetHint` of `CounterReset`, although there is now no counter reset between C and D. Similar to the situation in the previous example, a new counter reset detection has to be performed between A and B, and another one between C and D. Or both B and D have to be returned with a `CounterResetHint` of `UnknownCounterReset`.

In summary, whenever the TSDB cannot safely establish that a counter reset detection between two samples has happened upon ingestion, it either has to perform another counter reset detection or it has to return a `CounterResetHint` of `UnknownCounterReset` for the second sample.

Note that there is the possibility of counter resets that are not detected by the procedure described above, namely if the counts in the reset histogram have increased quickly enough so that the 1st sample after the counter reset has no counts that have decreased compared to the last sample prior to the counter reset. (This is also a problem for float counters, where it is actually more likely to happen.) With the mechanisms explained above, it is possible to store a counter reset even in this case, provided that the counter reset was detected by other means. However, due to the complications caused by insertion and removal of chunks, out-of-order samples, and overlapping blocks (as explained above), this information MAY get lost if a second round of counter reset detection is required. (TODO: Currently, this information is reliably lost, see TODO above.) A better way to safely mark a counter reset is via created timestamps (see next section).

### Created timestamp handling

OpenMetrics introduced so-called created timestamps for counters, summaries, and classic counter histograms. (The term is probably short for “created-at timestamp”. The more appropriate term might have been “creation timestamp” or “reset timestamp”, but the term “created timestamp” is firmly established by now.) The created timestamp provides the most recent time the metric was created or reset. A [design doc](https://github.com/prometheus/proposals/blob/main/proposals/2023-06-13_created-timestamp.md) describes how Prometheus handles created timestamps.

Created timestamps are also useful for native histograms. In the same way a synthetic zero sample is inserted for float counters, a zero value of a histogram sample is inserted for counter histograms. A zero value of a histogram has no populated buckets, and the sum of observations, the count of observations, and the zero bucket population are all zero. Schema, zero bucket width, custom values, and the float vs. integer flavor of the histogram SHOULD match the sample that directly follows the synthetic zero sample (to not trigger the detection of a spurious counter reset). The counter reset information of the synthetic zero sample is always set to `CounterReset`.

### Exemplars

Exemplars for native histograms are attached to the histogram sample as a whole, not to individual buckets. (See also the [exposition formats section](#exposition-formats).)
Therefore, it is allowed (and in fact the common case) that a single native histogram sample comes with multiple exemplars attached.

Exemplars may or may not change from one scrape to the next. Scrapers SHOULD detect unchanged exemplars to avoid storing many duplicate exemplars. Duplicate detection is potentially expensive, though, given that a single sample might have many exemplars, of which any subset could be repeated exemplars from the last scrape. The TSDB MAY rely on the assumption that any new exemplar has a more recent timestamp than any of the previously exposed exemplars. (Remember that exemplars of native histograms MUST have a timestamp.) Duplicate detection is then possible in an efficient way:

1. The exemplars of a newly ingested native histogram are sorted by the following fields: first timestamp, then value, then labels.
2. The exemplars are appended to the exemplar storage in the sorted order.
3. The append fails for exemplars that would be sorted before or are equal to the last successfully appended exemplar (which might be from the previous scrape for the same metric).
4. The append succeeds for exemplars that would be sorted after the last successfully appended exemplar.

Exemplars are only counted as out of order if all exemplars of an ingested histogram would be sorted before the last successfully appended exemplar. This does not detect out-of-order exemplars that are mixed with newer exemplars or with a duplicate of the last successfully appended exemplar, which is considered acceptable.

## PromQL

This section describes how PromQL handles native histograms. It focuses on general concepts rather than every single detail of individual operations. For the latter, refer to the PromQL documentation about [operators](https://prometheus.io/docs/prometheus/latest/querying/operators/) and [functions](https://prometheus.io/docs/prometheus/latest/querying/functions/).

### Annotations

The introduction of native histograms creates certain situations where a PromQL expression returns unexpected results, most commonly the case where some or all elements in the output vector are unexpectedly missing. To help users detect and understand those situations, operations acting on native histograms often use annotations. Annotations can have warn and info level and describe possible issues encountered during the evaluation. Warn level is used to mark situations that are most likely an actual problem the user has to act on. Info level is used for situations that might also be deliberate, but are still unusual enough to flag them.

### Integer histograms vs. float histograms

PromQL always acts on float histograms. Native histograms that are stored as integer histograms are automatically converted to float histograms when retrieved from the TSDB.

### Compatibility between histograms

When an operator or function acts on two or more native histograms, the histograms involved need to have the same schema, the same zero bucket width, and (if applicable) the same custom values. Within certain limits, histograms can be converted on the fly to meet these compatibility criteria:

- NHCBs (schema -53) are only compatible with each other. Different custom values need to be reconciled by conversion in the following way:
    - Identify the custom values that are shared by all of the original NHCBs. These are the new reconciled custom values.
    - Convert each original NHCB to the new custom values by merging its buckets into the unified bucket set described by the new custom values.
    - Note that it is easily possible that the original NHCBs do not share any custom values.
      In this case, the new bucket set will only consist of the overflow bucket, taking all observations from all of the original buckets.
    - Any query requiring reconciliation of custom values is flagged with an info-level annotation.
- Histograms with standard schemas can always be converted to the smallest (i.e. lowest resolution) common schema by decreasing the resolution of the histograms with greater schemas (i.e. higher resolution). This happens in the usual way by merging neighboring buckets into the larger buckets of the smaller schema.
- Different zero bucket widths are handled by expanding the smaller zero buckets, merging any populated regular bucket into the expanded zero bucket as appropriate. If the greatest common width happens to end up in the middle of any populated bucket, it is further expanded to coincide with the bucket boundary of that bucket. (See more details in the [zero bucket section above](#zero-bucket).)

If incompatibility prevents an operation, a warn-level annotation is added to the result.

### Counter resets

Counter resets are defined as described [above](#counter-reset-considerations). Counter reset hints returned from the TSDB MAY be taken into account to avoid explicit counter reset detection and to correctly process counter resets that are not detectable by the usual procedure. (This implies that these counter resets are only taken into account on a best effort basis. However, the same is true for the TSDB itself, see above.) A notable difference to the counter reset handling for classic histograms and summaries is that a decrease of the sum of observations does _not_ constitute a counter reset by itself. (For example, calculating the rate of a native histogram will still work correctly even if the histogram has observed negative values.)

Note that the counter reset hints of counter histograms returned by sub-queries MUST NOT be taken into account to avoid explicit counter reset detection, unless the PromQL engine can safely detect that consecutive counter histograms returned from the sub-query are also consecutive in the TSDB.

### Gauge histograms vs. counter histograms

Via the counter reset hint returned from the TSDB, PromQL is aware if a native histogram is a gauge or a counter histogram. To mirror PromQL's treatment of float samples (where it cannot reliably distinguish between float counters and gauges), functions that act on counters will still process gauge histograms, and vice versa, but a warn-level annotation is returned with the result. Note that explicit counter reset detection has to be performed on a gauge histogram in that case, treating it as if it were a counter histogram.

### Interpolation within a bucket

When estimating quantiles or fractions, PromQL has to apply interpolation within a bucket. In classic histograms, this interpolation happens in a linear fashion. It is based on the assumption that observations are equally distributed within the bucket. In reality, this assumption might be far off. (For example, an API endpoint might respond to almost all requests with a latency of 110ms. The median latency and maybe even the 90th percentile latency would then be close to 110ms. If a classic histogram has bucket boundaries at 100ms and 200ms, it would see most observations in that range and estimate the median at 150ms and the 90th percentile at 190ms.) The worst case is an estimation at one end of a bucket where the actual value is at the other end of the bucket.
Therefore, the maximum possible error is the whole width of a bucket. Not doing any interpolation and using some fixed midpoint within a bucket (for example the arithmetic mean or even the harmonic mean) would minimize the maximum possible error (which would then be half of the bucket width in case of the arithmetic mean), but in practice, the linear interpolation yields an error that is lower on average. Since the interpolation has worked well over many years of classic histogram usage, interpolation is also applied for native histograms.

For NHCBs, PromQL applies the same interpolation method as for classic histograms to keep results consistent. (The main use case for NHCBs is a drop-in replacement for classic histograms.) However, for standard exponential schemas, linear interpolation can be seen as a misfit. While exponential schemas primarily intend to minimize the relative error of quantile estimations, they also benefit from a balanced usage of buckets, at least over certain ranges of observed values. The basic assumption is that for most practically occurring distributions, the density of observations tends to be higher for smaller observed values. Therefore, PromQL uses exponential interpolation for the standard schemas, which models the assumption that dividing a bucket into two when increasing the schema number by one (i.e. doubling the resolution) will on average see similar populations in both new buckets. A more detailed explanation can be found in the [PR implementing the interpolation method](https://github.com/prometheus/prometheus/pull/14677).

A special case is interpolation within the zero bucket. The zero bucket breaks the exponential bucketing schema. Therefore, linear interpolation is applied within the zero bucket. Furthermore, if all populated regular buckets of a histogram are positive, it is assumed that all observations in the zero bucket are also positive, i.e. the interpolation is done between zero and the upper bound of the zero bucket. In the case of a histogram where all populated regular buckets are negative, the situation is mirrored, i.e. the interpolation within the zero bucket is done between the lower bound of the zero bucket and zero.

### Mixed series

As already discussed above, neither the sample type nor the flavor of a native histogram is part of the identity of a series. Therefore, one and the same series might contain a mix of different sample types and flavors. A mix of counter histograms and gauge histograms doesn't prevent any PromQL operation, but a warn-level annotation is returned with the result if some of the input samples have an inappropriate flavor (see [above](#gauge-histograms-vs-counter-histograms)). A mix of float samples and histogram samples is more problematic. Many functions that operate on range vectors will remove elements from the result where the input elements contain a mix of floats and histograms. If this happens, a warn-level annotation is added to the result. Concrete examples can be found [below](#functions).

### Unary minus and negative histograms

The unary minus can be used on native histograms. It returns a histogram where all bucket populations and the count and the sum of observations have their sign inverted. The counter reset hint is set to `GaugeType` in any case. Everything else stays the same. Enforcing `GaugeType` is needed because explicit counter reset detection will be thrown off by the inverted sign. Generally, histograms with negative bucket populations or a negative count of observations do not really make sense on their own and are only supposed to act as intermediate results inside other expressions. They are always considered gauge histograms within PromQL. They cannot be persisted as a result of a
-0.03092617727816105,
0.03684317320585251,
-0.0009039377328008413,
-0.09397720545530319,
-0.037237655371427536,
-0.029525866732001305,
-0.047450482845306396,
0.07904588431119919,
0.03354860842227936,
0.02664528228342533,
-0.08093449473381042,
-0.060586221516132355,
0.10028181970119476,
0.0... | 0.126453 |
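As a loose illustration (a toy model, not Prometheus's actual Go implementation), the sign inversion can be sketched on a simplified histogram representation, here a plain dict with bucket populations keyed by bucket index:

```python
def negate_histogram(h: dict) -> dict:
    """Sketch of PromQL's unary minus: invert the sign of every bucket
    population as well as the count and sum of observations, and force
    the gauge flavor, since counter reset detection would be misled by
    the inverted signs."""
    return {
        "count": -h["count"],
        "sum": -h["sum"],
        "buckets": {i: -pop for i, pop in h["buckets"].items()},
        "counter_reset_hint": "GaugeType",
    }

h = {"count": 7, "sum": 11.5, "buckets": {1: 4, 2: 3},
     "counter_reset_hint": "UnknownCounterReset"}
neg = negate_histogram(h)
# neg["sum"] == -11.5, neg["buckets"] == {1: -4, 2: -3}
```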
Generally, histograms with negative bucket populations or a negative count of observations do not really make sense on their own and are only supposed to act as intermediate results inside other expressions. They are always considered gauge histograms within PromQL. They cannot be persisted as a result of a recording rule. (A rule evaluating to a negative histogram results in an error.) It is impossible to represent negative histograms in any of the exchange formats (exposition formats, remote-write, OTLP).

### Binary operators

Most binary operators do not work between two histograms, between a histogram and a float, or between a histogram and a scalar. If an operator processes such an impossible combination, the corresponding element is removed from the output vector and an info-level annotation is added to the result. (This situation is somewhat similar to label matching, where the sample type plays a role similar to a label. Therefore, such a mismatch might be known and deliberate, which is the reason why the level of the annotation is only info.) The following describes all the operations that actually _do_ work.

Addition (`+`) and subtraction (`-`) work between two compatible histograms. These operators add or subtract all matching bucket populations and the count and the sum of observations. Missing buckets are assumed to be empty and treated accordingly. Generally, both operands should be gauges. Adding and subtracting counter histograms requires caution, but PromQL allows it. Adding a gauge histogram and a counter histogram results in a gauge histogram. Adding two counter histograms results in a counter histogram. If the two operands share the same counter reset hint, the resulting counter histogram retains that counter reset hint. Otherwise, the resulting counter reset hint is set to `UnknownCounterReset`. The result of a subtraction is always marked as a gauge histogram because it might result in negative histograms, see [notes above](#unary-minus-and-negative-histograms). Adding or subtracting two counter histograms with directly contradicting counter reset hints (i.e. `CounterReset` and `NotCounterReset`) triggers a warn-level annotation. (TODO: As described [above](#counter-reset-considerations), the TSDB currently does not return `NotCounterReset`, so this annotation will only happen under specific circumstances involving the `HistogramStatsIterator`, which includes additional counter reset tracking. See [tracking issue](https://github.com/prometheus/prometheus/issues/15346).)

Multiplication (`*`) works between a float sample or a scalar on the one side and a histogram on the other side, in any order. It multiplies all bucket populations and the count and the sum of observations by the float (sample or scalar). This will lead to “scaled” and sometimes even negative histograms, which is usually only useful as intermediate results inside other expressions (see also [notes above](#unary-minus-and-negative-histograms)). Multiplication works for both counter histograms and gauge histograms, and their flavor is left unchanged by the operation.

Division (`/`) works between a histogram on the left hand side and a float sample or a scalar on the right hand side. It is equivalent to multiplication with the inverse of the float (sample or scalar). Division by zero results in a histogram with no regular buckets and the zero bucket population and the count and sum of observations all set to `+Inf`, `-Inf`, or `NaN`, depending on their values in the input histogram (positive, negative, or zero/`NaN`, respectively).

Equality (`==`) and inequality (`!=`) work between two histograms, both in their filtering version as well as with the `bool` modifier. They compare the schema, the custom values, the zero threshold, all bucket populations, and the sum and count of observations. Whether the histograms have counter or gauge flavor is irrelevant for the comparison. (A counter histogram could be equal to a gauge histogram.)
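The bucket-by-bucket arithmetic described above can be sketched as follows (again a toy model on a simplified representation; real native histograms additionally carry a schema and bucket spans, and PromQL first brings both operands to a common resolution):

```python
def add_histograms(a: dict, b: dict) -> dict:
    """Sketch of histogram addition as PromQL's `+` performs it: add
    matching bucket populations (a bucket missing on one side counts
    as empty) as well as the count and sum of observations. Assumes
    both histograms already use the same bucket layout."""
    buckets = {
        i: a["buckets"].get(i, 0) + b["buckets"].get(i, 0)
        for i in a["buckets"].keys() | b["buckets"].keys()
    }
    return {"count": a["count"] + b["count"],
            "sum": a["sum"] + b["sum"],
            "buckets": buckets}

h1 = {"count": 6, "sum": 8.5, "buckets": {1: 4, 2: 2}}
h2 = {"count": 3, "sum": 2.0, "buckets": {2: 1, 3: 2}}
total = add_histograms(h1, h2)
# total == {"count": 9, "sum": 10.5, "buckets": {1: 4, 2: 3, 3: 2}}
```

Subtraction works the same way with `-` in place of `+`, which is why its result can contain negative bucket populations and is always marked as a gauge histogram.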
The logical/set binary operators (`and`, `or`, `unless`) work as expected even if histogram samples are involved. They only check for the existence of a vector element and don't change their behavior depending on the sample type or flavor of an element (float or histogram, counter or gauge).

The “trim” operators `>/` and `_over_time()` functions not mentioned before, native histogram samples are removed from the input range vector. In case any series contains a mix of float samples and histogram samples within the range, the removal of histograms is flagged by an info-level annotation.

### Recording rules

Recording rules MAY result in native histogram values. They are stored back into the TSDB as during normal ingestion, including whether the histogram is a gauge histogram or a counter histogram. In the latter case, a counter reset explicitly marked by the counter reset hint is also stored, while a new counter reset detection is initiated during ingestion otherwise. TSDB implementations MAY convert the float histograms created by recording rules to integer histograms if this conversion precisely represents all the float values in the original histogram.

### Alerting rules

Alerts work as usual with native histograms. However, it is RECOMMENDED to avoid native histograms as output values for alerts. If native histogram samples are used in templates, they are [rendered in their simple text form](#template-expansion) (as produced by the Go `FloatHistogram.String` method), which is hard to read for humans.

### Testing framework

The PromQL testing framework has been extended so that both PromQL unit tests as well as rules unit tests via `promtool` can include native histograms. The histogram sample notation is complex and explained in the [documentation for rules unit testing](https://prometheus.io/docs/prometheus/latest/configuration/unit_testing_rules/#series). In the unit test framework there is an alternative `load` command called `load_with_nhcb`, which converts classic histograms to NHCBs and loads both the float series of the classic histogram as well as the NHCB series resulting from the conversion. Not specific to native histograms, but very useful in their context, is the `expect` keyword in the unit test framework that can define expectations about the info- and warn-level annotations.

### Optimizations

As usual, PromQL implementations MAY apply any optimizations they see fit as long as the behavior stays the same. Decoding native histograms can be quite expensive with the potentially many buckets. Similarly, deep-copying a histogram sample within the PromQL engine is much more expensive than copying a simple float sample. This creates a huge potential for optimization compared to a naive approach of always decoding everything and always copying everything. Prometheus currently tries to avoid needless copies (TODO: but a proper CoW like approach still has to be implemented, as it would be much cleaner and less bug prone) and skips decoding of the buckets for special cases where only the sum and count of observations is required.

## Prometheus query API

The [query API documentation](https://prometheus.io/docs/prometheus/latest/querying/api/#native-histograms) includes native histogram support. This section focuses on the parts relevant for native histograms and provides a bit of context not part of the API documentation.
### Instant and range queries

To return native histograms in the JSON response of instant (`query` endpoint) and range (`query_range` endpoint) queries, both the `vector` and `matrix` result types need an extension by a new key.

The `vector` result type gets a new key `histogram` at the same level as the existing `value` key. These two keys are mutually exclusive, i.e. each element in a `vector` has either a `value` key (for a float result) or a `histogram` key (for a histogram result). The value of the `histogram` key is structured similarly to the value of the `value` key (a two-element array), with the difference that the string representing the float sample value is replaced by a specific histogram object described below.

The `matrix` result type gets a new key `histograms` at the same level as the existing `values` key. These keys are _not_ mutually exclusive. A series may contain both float values and histogram values, but for a given timestamp, there must be only one sample, either a float or a histogram. The value of the `histograms` key is structured similarly to the value of the `values` key (an array of _n_ two-element arrays), with the difference that the strings representing float sample values are replaced by specific histogram objects described below.

Note that a better naming of the keys would be `float`/`histogram` and `floats`/`histograms` because both float values and histogram values are values. The current naming has historical reasons. (In the past, there was only one value type, namely floats, so calling the keys simply `value` and `values` was the obvious choice.) The intention here is to not break existing consumers that do not know about native histograms.

The histogram object mentioned above has the following structure:

```
{
  "count": "<count_of_observations>",
  "sum": "<sum_of_observations>",
  "buckets": [
    [ <boundary_rule>, "<left_boundary>", "<right_boundary>", "<count_in_bucket>" ],
    ...
  ]
}
```

`count` and `sum` directly correspond to the histogram fields of the same name. Each bucket is represented explicitly with its boundaries and count, including the zero bucket. Spans and the schema are therefore not part of the response, and the structure of the histogram object does not depend on the used schema.

The `<boundary_rule>` placeholder is an integer between 0 and 3 with the following meaning:

* 0: “open left” (left boundary is exclusive, right boundary is inclusive)
* 1: “open right” (left boundary is inclusive, right boundary is exclusive)
* 2: “open both” (both boundaries are exclusive)
* 3: “closed both” (both boundaries are inclusive)

For standard schemas, positive buckets are “open left”, negative buckets are “open right”, and the zero bucket (with a negative left boundary and a positive right boundary) is “closed both”. For NHCBs, all buckets are “open left” (mirroring the behavior of classic histograms). Future schemas might utilize different boundary rules.

### Metadata

For the `series` endpoint, series containing native histograms are included in the same way as conventional series containing only floats. The endpoint does not provide any information about which sample types are included (and in fact, _any_ series may contain either or both sample types). Note in particular that a histogram exposed by a target under the name `request_duration_seconds` will lead to a series called `request_duration_seconds` if it is exposed and ingested as a native histogram, but if it is exposed and ingested as a classic histogram, it will lead to a set of series called `request_duration_seconds_sum`, `request_duration_seconds_count`, and `request_duration_seconds_bucket`. If the histogram is [ingested as _both_ a native histogram and a classic histogram](#scraping-both-classic-and-native-histograms), all of the series names above will be returned by the `series` endpoint.
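To make the structure described above concrete, here is a short Python sketch that decodes such a histogram object and renders each bucket in interval notation according to its boundary-inclusivity code. The payload below is made up for illustration:

```python
import json

# Hypothetical histogram object as it could appear in a query response
# (all numeric values arrive as strings in the JSON API).
payload = json.loads("""
{
  "count": "10",
  "sum": "14.5",
  "buckets": [
    [3, "-0.01", "0.01", "3"],
    [0, "1", "2", "7"]
  ]
}
""")

# Map the boundary-rule code (0-3) to interval brackets:
# 0 = open left, 1 = open right, 2 = open both, 3 = closed both.
BRACKETS = {0: "(]", 1: "[)", 2: "()", 3: "[]"}

buckets = [
    f"{BRACKETS[code][0]}{lower},{upper}{BRACKETS[code][1]}:{count}"
    for code, lower, upper, count in payload["buckets"]
]
# buckets == ["[-0.01,0.01]:3", "(1,2]:7"]
```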
The target and metric metadata (endpoints `targets/metadata` and `metadata`) work a bit differently, as they are acting on the original name as exposed by the target. This means that a classic histogram called `request_duration_seconds` will be represented by these metadata endpoints only as `request_duration_seconds` (and not `request_duration_seconds_sum`, `request_duration_seconds_count`, or `request_duration_seconds_bucket`). A native histogram `request_duration_seconds` will also be represented under this name. Even in the case where `request_duration_seconds` is ingested as both a classic and a native histogram, there will be no collision, as the metadata returned is actually the same (most notably, the returned `type` will be `histogram`). In other words, there is currently no way of distinguishing native from classic histograms via the metadata endpoints alone. An additional look-up via the `series` endpoint is required.

There are no plans to change this, as the existing metadata endpoints are anyway severely limited (no historical information, no metadata for metrics created by rules, limited ability to handle conflicting metadata between different targets). There are plans, though, to improve metadata handling in Prometheus in general. Those efforts will also take into account how to support native histograms properly. (TODO: Update as progress is made.)

## Prometheus UI

This section describes the rendering of histograms by Prometheus's own UI. This MAY be used as a guideline for third party graphing frontends.

In the _Table_ view, a histogram data point is rendered graphically as a bar graph together with a textual representation of all the buckets with their lower and upper limit and the count and sum of observations. Each bar in the bar graph represents a bucket. The position of each bar on the _x_ axis is determined by the lower and upper limit of the corresponding bucket. The area of each bar is proportional to the population of the corresponding bucket (which is a core principle of rendering histograms in general).

The graphical histogram allows a choice between an exponential and a linear _x_ axis. The former is the default. It is a good fit for the standard schemas. (TODO: Consider linear as a default for non-exponential schemas.) Conveniently, all regular buckets of an exponential schema have the same width on an exponential _x_ axis. This means that the _y_ axis can display actual bucket populations without violating the above principle that the _area_ (not the height) of a bar is representative of the bucket population. The zero bucket is an exception to that. Technically, it has an infinite width. Prometheus simply renders it with the same width as the regular exponential buckets (which in turn means that the _x_ axis is not strictly exponential around the zero point). (TODO: How to do the rendering for non-exponential schemas.)

With a linear _x_ axis, the buckets generally have varying widths. Therefore, the _y_ axis displays the bucket population divided by its width. The Prometheus UI does not render values on the _y_ axis, as they would be hard for humans to interpret anyway. The population can still be inspected in the text representation.

In the _Graph_ view, Prometheus displays a heatmap (TODO: not yet, see below), which could be seen as a series of histograms over time, rotated by 90 degrees and encoding the bucket population as a color rather than the height of a bar. The typical query to render a counter-like histogram as a heatmap would be a `rate` query.
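A rough sketch of what such a per-bucket rate amounts to, again on the simplified dict representation used earlier (the real PromQL `rate` additionally handles counter resets and extrapolates to the full range, which is omitted here):

```python
def histogram_rate(earlier: dict, later: dict, seconds: float) -> dict:
    """Naive per-bucket rate between two counter histogram samples:
    (later - earlier) / seconds for every bucket as well as for the
    count and sum of observations. Buckets missing on one side are
    treated as empty."""
    indices = earlier["buckets"].keys() | later["buckets"].keys()
    return {
        "count": (later["count"] - earlier["count"]) / seconds,
        "sum": (later["sum"] - earlier["sum"]) / seconds,
        "buckets": {
            i: (later["buckets"].get(i, 0) - earlier["buckets"].get(i, 0)) / seconds
            for i in indices
        },
    }

t0 = {"count": 10, "sum": 20.0, "buckets": {1: 10}}
t1 = {"count": 40, "sum": 80.0, "buckets": {1: 25, 2: 15}}
r = histogram_rate(t0, t1, seconds=60.0)
# r["count"] == 0.5 observations per second
```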
A heatmap is an extremely powerful representation that allows humans to easily spot characteristics of distributions as they change over time.

TODO: Heatmaps are not implemented yet. Instead, the UI plots just the sum of observations as a conventional graph. See [tracking issue](https://github.com/prometheus/prometheus/issues/11268). The same issue also discusses how to deal with the rendering of range vectors in the _Table_ view.

## Template expansion

Native histograms work in template expansion. They are rendered in a text representation inspired by the mathematical notation of open and closed intervals. (This is generated by the `FloatHistogram.String` method in Go.) As native histograms can have a lot of buckets, and bucket boundaries tend to have a lot of decimal places, the representation isn't necessarily very readable. Use native histograms in template expansion judiciously.

Example of the text representation of a float histogram:

```
{count:3493.3, sum:2.349209324e+06, [-22.62741699796952,-16):1000, [-16,-11.31370849898476):123400, [-4,-2.82842712474619):3, [-2.82842712474619,-2):3.1, [-0.01,0.01]:5.5, (0.35355339059327373,0.5]:1, (1,1.414213562373095]:3.3, (1.414213562373095,2]:4.2, (2,2.82842712474619]:0.1}
```

## Remote-write & remote-read

The [protobuf specs for remote-write & remote-read](https://github.com/prometheus/prometheus/blob/main/prompb) were extended for native histograms. Receivers not capable of processing native histograms will simply ignore the newly added fields. Nevertheless, Prometheus has to be configured to send native histograms via remote-write (by setting the `send_native_histograms` remote-write config setting to true). In [remote-write v2](https://prometheus.io/docs/specs/remote_write_spec_2_0/), native histograms are a stable feature.

It might appear tempting to convert classic histograms to NHCBs while sending or receiving them. However, this does not overcome the known consistency problems classic histograms suffer from when transmitted via remote-write. Instead, classic histograms SHOULD be converted to NHCBs during scraping. Similarly, explicit OTel histograms SHOULD be converted to NHCBs during [OTLP ingestion](#otlp) already.

TODO: A remaining possible problem with remote-write is what to do if multiple exemplars originally ingested for the same native histogram are sent in different remote-write requests.

## Federation

Federation of native histograms works as expected, provided the federation scrape uses the protobuf format. A federation via OpenMetrics text format will be possible, at least in principle, once native histograms are supported in that format, but federation via protobuf is preferred for efficiency reasons anyway. TODO: Update once OM supports NH.

NHCBs are rendered as classic float histograms when exposed via the federation endpoint. Scrapers have the option of converting them back to NHCBs or ingesting them as classic histograms. The latter could lead to naming collisions, though. Note that OpenMetrics v1 does not support classic float histograms. Fortunately, Prometheus federation does not use OpenMetrics v1 anyway, but either the protobuf format or the classic text format.

## OTLP

The OTLP receiver built into Prometheus converts incoming OTel exponential histograms to Prometheus native histograms utilizing the compatibility described [above](#opentelemetry-interoperability). The resolution of a histogram using a schema (“scale” in OTel lingo) greater than 8 will be reduced to match schema 8. (In the unlikely case that a schema smaller than -4 is used, the ingestion will fail.) Explicit OTel histograms are the equivalent of Prometheus's classic histograms. Prometheus therefore converts them to classic histograms by default, but optionally offers direct conversion to NHCBs.
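Such a resolution reduction boils down to merging adjacent bucket pairs. The following is a sketch of the index arithmetic under the simplified assumption that only regular buckets (index `i` covering `(base**(i-1), base**i]` with `base = 2**(2**-schema)`) are involved; it is not Prometheus's actual implementation:

```python
import math
from collections import defaultdict

def reduce_schema(buckets: dict[int, int], schema: int, target: int) -> tuple[dict[int, int], int]:
    """Lower the resolution of a standard exponential schema by merging
    adjacent bucket pairs. Halving the resolution (schema s -> s-1)
    maps bucket index i to ceil(i/2), so old buckets 2k-1 and 2k both
    land in new bucket k."""
    while schema > target:
        merged: dict[int, int] = defaultdict(int)
        for i, count in buckets.items():
            merged[math.ceil(i / 2)] += count
        buckets, schema = dict(merged), schema - 1
    return buckets, schema

# Example: schema 2 -> schema 1; buckets 1..4 merge pairwise into 1..2.
buckets, schema = reduce_schema({1: 3, 2: 5, 3: 2, 4: 4}, schema=2, target=1)
# buckets == {1: 8, 2: 6}, schema == 1
```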
## Pushgateway

Native histogram support has been gradually added to the [Pushgateway](https://github.com/prometheus/pushgateway). Full support was reached in v1.9. The Pushgateway has always been based on the classic protobuf format as its internal data model, which made the necessary changes easy (mostly UI concerns). Combined histograms (with classic and native buckets) can be pushed and will be exposed as such via the `/metrics` endpoint. (However, the query API, which can be used to query the pushed metrics as JSON, will only be able to return one kind of buckets and will prefer native buckets if present.)

## `promtool`

This section describes `promtool` commands added or changed to support native histograms. Commands not mentioned explicitly do not directly interact with native histograms and require no changes.

The `promtool query ...` commands work with native histograms. See the [query API documentation](#instant-and-range-queries) to learn about the output format. A new command `promtool query analyze` was specifically added to analyze classic and native histogram usage patterns returned by the query API.

The rules unit testing via `promtool test rules` works with native histograms, using the format described [above](#testing-framework).

`promtool tsdb analyze` and `promtool tsdb list` work normally with native histograms. The `--extended` output of the former has specific sections for histogram chunks. `promtool tsdb dump` uses the usual text representation of native histograms (as produced by the Go method `FloatHistogram.String`). `promtool tsdb create-blocks-from rules` works with rules that emit native histograms.

The `promtool promql ...` commands support all the PromQL features added for native histograms.

While `promtool tsdb bench write` could in principle include native histograms, such support is not planned at the moment.

The following commands depend on the OpenMetrics text format and therefore cannot support native histograms as long as there is no native histogram support in OpenMetrics:

- `promtool check metrics`
- `promtool push metrics`
- `promtool tsdb dump-openmetrics`
- `promtool tsdb create-blocks-from openmetrics`

TODO: Update as progress is made. See [tracking issue](https://github.com/prometheus/prometheus/issues/12146).

## `prom2json`

[`prom2json`](https://github.com/prometheus/prom2json) is a small tool that scrapes a Prometheus `/metrics` endpoint, converts the metrics to a bespoke JSON format, and dumps the result to stdout. This is convenient for further processing with tools handling JSON, for example `jq`. `prom2json` v1.4 added support for native histograms. If a histogram in the exposition contains at least one bucket span, `prom2json` will replace the usual classic buckets in the JSON output with the buckets of the native histogram, following a format inspired by the [Prometheus query API](#prometheus-query-api).

## Migration considerations

When migrating from classic to native histograms, there are three important sources of issues to consider:

1. Querying native histograms works differently from querying classic histograms. In most cases, the changes are minimal and straightforward, but there are tricky edge cases, which make it hard to perform a reliable auto-conversion.
2. Classic and native histograms cannot be aggregated with each other. A change from classic to native histograms at a certain point in time makes it hard to create dashboards that work across the transition point, and range vectors that contain the transition point will inevitably be incomplete (i.e. a range vector selecting classic histograms will only contain data points in the earlier part of the range, and a range vector selecting native histograms will only contain data points in the later part of the range).
3. A classic histogram might be tailored to have bucket boundaries precisely at the points of interest. Native histograms with a standard schema can have a high resolution, but do not allow setting bucket boundaries at arbitrary values. In those cases, the user experience with native histograms might actually be worse.
boundaries precisely at the points of interest. Native histograms with a standard schema can have a high resolution, but do not allow to set bucket boundaries at arbitrary values. In those cases, the user experience with native histograms might actually be worse. To address (3), it is of course possible to not migrate the classic histogram in question and leave things as they are. Another option is to leave the instrumentation the same but convert classic histograms to NHCBs upon ingestion. This leverages the increased storage performance of native histograms, but still requires to address (1) and (2) in the same way as for a full migration to native histograms (see next paragraphs). The conservative way of addressing (1) and (2) is to allow a long transition period, which comes at the cost of collecting and storing classic and native histograms in parallel for a while. The first step is to update the instrumentation to expose classic and native histograms in parallel. (This step can be skipped if the plan is to stick with classic histogram in the instrumentation and simply convert them to NHCBs during scraping.) Then configure Prometheus to scrape both classic and native histograms, see section about [scraping both classic and native histograms](#scraping-both-classic-and-native-histograms) above. (If needed, also [activate the conversion of classic histograms to NHCB](#scraping-classic-histograms-as-nhcbs).) The existing queries involving classic histograms will continue to work, but from now on, users can start working with native histograms and start to change queries in dashboards, alerts, recording rules,… As already mentioned above, it is important to pay attention to queries with longer range vectors like `histogram\_quantile(0.9, rate(rpc\_duration\_seconds[1d]))`. This query calculates the 90th percentile latency over the last day. Hoewever, if native histograms haven't been collected for at least one day, the query will only cover that shorter period. 
Thus, the query should only be used once native histograms have been collected for at least 1d. For a dashboard that displays the daily 90th percentile latency over the last month, it is tempting to craft a query that correctly switches from classic to native histograms at the right moment. While that is in principle possible, it is tricky. If feasible, the transition period during which classic and native histograms are collected in parallel, can be quite long to minimize the necessity to implement tricky switch-overs. For example, once classic and native histograms have been collected in parallel for a month, any dashboard not looking farther back than a month can simply be switched over from a classic histogram query to a native histogram query without any consideration about the right switch-over. Once there is confidence that all queries have been migrated correctly, configure Prometheus to only scrape native histograms (which is the “normal” setting). (It is also possible to incrementally remove classic histograms with relabel rules in the scrape config.) If everything still works, it is time to remove classic histograms from the instrumentation. The Grafana Mimir documentation contains [a detailed migration guide](https://grafana.com/docs/mimir/next/send/native-histograms/#migrate-from-classic-histograms) following the same philosophy as described in this section. | https://github.com/prometheus/docs/blob/main//docs/specs/native_histograms.md | main | prometheus | [
- Version: 2.0
- Status: Draft
- Date: TBD
- Authors: Arthur Silva Sens, Bartłomiej Płotka, David Ashpole, György Krajcsovits, Owen Williams, Richard Hartmann
- Emeritus: Ben Kochie, Brian Brazil, Rob Skillington

Created in 2012, Prometheus has been the default for cloud-native observability since 2015. A central part of Prometheus' design is its text metric exposition format, called the Prometheus exposition format 0.0.4, stable since 2014. In this format, special care has been taken to make it easy to generate, to ingest, and to understand by humans. As of 2020, there are more than 700 publicly listed exporters, an unknown number of unlisted exporters, and thousands of native library integrations using this format. Dozens of ingestors from various projects and companies support consuming it.

In 2020, [OpenMetrics 1.0](open_metrics_spec.md) was released to clean up and tighten the specification with the additional purpose of bringing it into IETF. OpenMetrics 1.0 text exposition documented a working standard with wide and organic adoption among dozens of exporters, integrations, and ingestors. Around 2024, the OpenMetrics project was incorporated under the CNCF Prometheus project umbrella. Together with production learnings from deploying OpenMetrics 1.0 at wide scale and a backlog of new Prometheus innovations missing from the text formats, the Prometheus community decided to pursue a second version of the OpenMetrics standard.

The intention of OpenMetrics 2.0 is to use OpenMetrics 1.0 as a foundation and enhance it to achieve even greater reliability, usability and consistency with the modern Prometheus data model, without sacrificing the ease of use and readability. See TODO for changes since OpenMetrics 1.0. This document is meant to be used as a standalone specification, although the majority of the content is reused from [OpenMetrics 1.0](open_metrics_spec.md).

> NOTE: This document is an early draft, major changes expected. Read [here](https://github.com/prometheus/OpenMetrics/issues/276) on how to join [the Prometheus OM 2.0 work group](https://docs.google.com/document/d/1FCD-38Xz1-9b3ExgHOeDTQUKUatzgj5KbCND9t-abZY/edit?tab=t.lvx6fags1fga#heading=h.uaaplxxbz60u).

## Overview

Metrics are a specific kind of telemetry data. They represent a snapshot of the current state for a set of data. They are distinct from logs or events, which focus on records or information about individual events.

OpenMetrics is primarily a wire format, independent of any particular transport for that format. The format is expected to be consumed on a regular basis and to be meaningful over successive expositions.

Implementers MUST expose metrics in the OpenMetrics text format in response to a simple HTTP GET request to a documented URL for a given process or device. This endpoint SHOULD be called "/metrics". Implementers MAY also expose OpenMetrics formatted metrics in other ways, such as by regularly pushing metric sets to an operator-configured endpoint over HTTP.

### Metrics and Time Series

This standard expresses all system states as numerical values; counts, current values, enumerations, and boolean states being common examples. Contrary to metrics, singular events occur at a specific time. Metrics tend to aggregate data temporally. While this can lose information, the reduction in overhead is an engineering trade-off commonly chosen in many modern monitoring systems.

Time series are a record of changing information over time. While time series can support arbitrary strings or binary data, only numeric data is in scope for this RFC.

Common examples of metric time series would be network interface counters, device temperatures, BGP connection states, and alert states.

## Data Model

This section MUST be read together with the ABNF section. In case of disagreements between the two, the ABNF's restrictions MUST take precedence. This reduces repetition as the text wire format MUST be supported.

Source: https://github.com/prometheus/docs/blob/main//docs/specs/om/open_metrics_spec_2_0.md
### Data Types

#### Values

Metric values in OpenMetrics MUST be either Number or ComplexValue.

##### Number

Number value MUST be either floating point or integer. Note that ingestors of the format MAY only support float64. The non-real values NaN, +Inf and -Inf MUST be supported. A NaN value MUST NOT be considered a missing value, but it MAY be used to signal a division by zero.

##### ComplexValue

ComplexValue MUST contain all information necessary to recreate a sample value for a Metric within the MetricFamily. The following Metric Types MUST use ComplexValue for Metric Values:

TODO: Below will switch to Histogram and Summary in the next PR.

* [Histogram](#histogram) MetricFamily Type with [Native Buckets](#native-buckets).
* [GaugeHistogram](#gauge-histogram) MetricFamily Type with [Native Buckets](#native-buckets).

Other Metric Types MUST use Numbers. See [Metric Types](#metric-types) for details.

##### Booleans

Boolean values MUST follow `1==true`, `0==false`.

#### Timestamps

Timestamps MUST be Unix Epoch in seconds. Negative timestamps MAY be used.

#### Strings

Strings MUST only consist of valid UTF-8 characters and MAY be zero length. NULL (ASCII 0x0) MUST be supported.

#### Label

Labels are key-value pairs consisting of strings. Label names beginning with two underscores are RESERVED and MUST NOT be used unless specified by this standard. Label names SHOULD follow the restrictions in the ABNF section under the `label-name` section. Label names MAY be any quoted escaped UTF-8 string as described in the ABNF section. Be aware that exposing UTF-8 metrics is still experimental and may reduce usability. Empty label values SHOULD be treated as if the label was not present.

#### LabelSet

A LabelSet MUST consist of Labels and MAY be empty. Label names MUST be unique within a LabelSet.

#### MetricPoint

Each MetricPoint consists of a set of values, depending on the MetricFamily type.

#### Exemplars

Exemplars are references to data outside of the MetricSet. A common use case is IDs of program traces. Exemplars MUST consist of a LabelSet and a value, and SHOULD have a timestamp. They MAY each be different from the MetricPoints' LabelSet and timestamp. The combined length of the label names and values of an Exemplar's LabelSet MUST NOT exceed 128 UTF-8 character code points. Other characters in the text rendering of an exemplar such as `",=` are not included in this limit for implementation simplicity and for consistency between the text and proto formats. Ingestors MAY discard exemplars.

#### Metric

Metrics are defined by a unique LabelSet within a MetricFamily. Metrics MUST contain a list of one or more MetricPoints. Metrics with the same name for a given MetricFamily SHOULD have the same set of label names in their LabelSet. MetricPoints SHOULD NOT have explicit timestamps. If more than one MetricPoint is exposed for a Metric, then its MetricPoints MUST have monotonically increasing timestamps.

#### MetricFamily

A MetricFamily MAY have zero or more Metrics. A MetricFamily MUST have a name, HELP, TYPE, and UNIT metadata. Every Metric within a MetricFamily MUST have a unique LabelSet.

##### Name

MetricFamily names are a string and MUST be unique within a MetricSet. Names SHOULD be in snake_case. Names SHOULD follow the restrictions in the ABNF section under `metricname`. Metric names MAY be any quoted and escaped UTF-8 string as described in the ABNF section. Be aware that exposing UTF-8 metrics is still experimental and may reduce usability, especially when suffixes are not included.
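The exemplar length rule above lends itself to a mechanical check. A minimal Python sketch (function names are illustrative, not from the spec), counting only the Unicode code points of label names and values, as the limit requires:

```python
# Illustrative sketch: the combined length of an exemplar's label names
# and values MUST NOT exceed 128 UTF-8 code points; separators such as
# { } " , = in the text rendering are not counted.

def exemplar_labelset_length(labels: dict) -> int:
    """Combined code-point length of all label names and values."""
    return sum(len(name) + len(value) for name, value in labels.items())

def exemplar_labelset_valid(labels: dict) -> bool:
    return exemplar_labelset_length(labels) <= 128

# A typical trace-ID exemplar is well within the limit:
assert exemplar_labelset_valid({"trace_id": "0af7651916cd43dd8448eb211c80319c"})
# 8 code points ("trace_id") plus a 129-code-point value exceeds it:
assert not exemplar_labelset_valid({"trace_id": "x" * 129})
```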
Colons in MetricFamily names are RESERVED to signal that the MetricFamily is the result of a calculation or aggregation of a general purpose monitoring system. MetricFamily names beginning with underscores are RESERVED and MUST NOT be used unless specified by this standard.

###### Suffixes

The name of a MetricFamily MUST NOT result in a potential clash for sample metric names as per the ABNF with another MetricFamily in the Text Format within a MetricSet. An example would be a gauge called "foo_total", as a counter called "foo" could create a "foo_total" in the text format. Exposers SHOULD avoid names that could be confused with the suffixes that text format sample metric names use.

Suffixes for the respective types are:

* Counter: `_total`
* Summary: `_count`, `_sum`, `` (empty)
* Histogram: `_count`, `_sum`, `_bucket`
* GaugeHistogram: `_gcount`, `_gsum`, `_bucket`
* Info: `_info`
* Gauge: `` (empty)
* StateSet: `` (empty)
* Unknown: `` (empty)

##### Type

Type specifies the MetricFamily type. Valid values are "unknown", "gauge", "counter", "stateset", "info", "histogram", "gaugehistogram", and "summary".

##### Unit

Unit specifies MetricFamily units. If non-empty, it SHOULD be a suffix of the MetricFamily name separated by an underscore. Be aware that further generation rules might make it an infix in the text format. Be aware that exposing metrics without the unit being a suffix of the MetricFamily name directly to end-users may reduce the usability due to confusion about what the metric's unit is.

##### Help

Help is a string and SHOULD be non-empty. It is used to give a brief description of the MetricFamily for human consumption and SHOULD be short enough to be used as a tooltip.

##### MetricSet

A MetricSet is the top level object exposed by OpenMetrics. It MUST consist of MetricFamilies and MAY be empty. Each MetricFamily name MUST be unique. The same label name and value SHOULD NOT appear on every Metric within a MetricSet. There is no specific ordering of MetricFamilies required within a MetricSet. An exposer MAY make an exposition easier to read for humans, for example sort alphabetically if the performance tradeoff makes sense. If present, an Info MetricFamily called "target" per the "Supporting target metadata in both push-based and pull-based systems" section below SHOULD be first.

### Metric Types

#### Gauge

Gauges are current measurements, such as bytes of memory currently used or the number of items in a queue. For gauges the absolute value is what is of interest to a user. A MetricPoint in a Metric with the type gauge MUST have a single value.

Gauges MAY increase, decrease, or stay constant over time. Even if they only ever go in one direction, they might still be gauges and not counters. The size of a log file would usually only increase, a resource might decrease, and the limit of a queue size may be constant.

A gauge MAY be used to encode an enum where the enum has many states and changes over time; it is the most efficient but least user-friendly encoding.

#### Counter

Counters measure discrete events. Common examples are the number of HTTP requests received, CPU seconds spent, or bytes sent. For counters, how quickly they are increasing over time is what is of interest to a user. A MetricPoint in a Metric with the type Counter MUST have one value called Total. A Total is non-NaN and MUST be monotonically non-decreasing over time, starting from 0.
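The monotonicity rule is what lets an ingestor detect counter resets: a drop in the Total can only mean a restart from 0. A small illustrative Python sketch of the conventional ingestor-side handling (this heuristic is common practice, not mandated by this document):

```python
# Illustrative sketch: because a counter Total is monotonically
# non-decreasing and restarts from 0 after a reset, any drop between
# consecutive samples can be treated as a reset.

def counter_increase(samples):
    """Total increase over a series of counter Total samples, counting a
    drop as a reset to 0 followed by a rise to the new value."""
    increase = 0.0
    for prev, cur in zip(samples, samples[1:]):
        increase += cur - prev if cur >= prev else cur
    return increase

# 0 -> 5 -> 8, then a reset, then up to 3 again: total increase is 11.
assert counter_increase([0, 5, 8, 0, 3]) == 11
```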
A MetricPoint in a Metric with the type Counter SHOULD have a Timestamp value called Start Timestamp. This can help ingestors discern between new metrics and long-running ones they did not see before.

A MetricPoint in a Metric's Counter's Total MAY reset to 0. If present, the corresponding Start Timestamp MUST also be set to the timestamp of the reset.

A MetricPoint in a Metric's Counter's Total MAY have an exemplar.

#### StateSet

StateSets represent a series of related boolean values, also called a bitset. If ENUMs need to be encoded this MAY be done via StateSet.

A point of a StateSet metric MAY contain multiple states and MUST contain one boolean per State. States have a name which are Strings. A StateSet Metric's LabelSet MUST NOT have a label name which is the same as the name of its MetricFamily.

If encoded as a StateSet, ENUMs MUST have exactly one Boolean which is true within a MetricPoint. This is suitable where the enum value changes over time, and the number of States isn't much more than a handful. MetricFamilies of type StateSet MUST have an empty Unit string.

#### Info

Info metrics are used to expose textual information which SHOULD NOT change during process lifetime. Common examples are an application's version, revision control commit, and the version of a compiler.

A MetricPoint of an Info Metric contains a LabelSet. An Info MetricPoint's LabelSet MUST NOT have a label name which is the same as the name of a label of the LabelSet of its Metric.

Info MAY be used to encode ENUMs whose values do not change over time, such as the type of a network interface. MetricFamilies of type Info MUST have an empty Unit string.

#### Histogram

Histograms measure distributions of discrete events. Common examples are the latency of HTTP requests, function runtimes, or I/O request sizes.

A Histogram MetricPoint MUST contain Count and Sum values. The Count value MUST be equal to the number of measurements taken by the Histogram. The Count is a counter semantically. The Count MUST be an integer and MUST NOT be NaN or negative. The Sum value MUST be equal to the sum of all the measured event values. The Sum is only a counter semantically as long as there are no negative event values measured by the Histogram MetricPoint.

A Histogram MUST measure values that are not NaN in either [Classic Buckets](#classic-buckets) or [Native Buckets](#native-buckets) or both. Measuring NaN is different for Classic and Native Buckets; see their respective sections.

Every Bucket MUST have well-defined boundaries and a value. The bucket value is called the bucket count colloquially. Boundaries of a Bucket MUST NOT be NaN. Bucket values MUST be integers. Semantically, bucket values are counters so MUST NOT be NaN or negative.

A Histogram SHOULD NOT include NaN measurements, as including NaN in the Sum will make the Sum equal to NaN and mask the sum of the real measurements for the lifetime of the time series. If a Histogram includes NaN measurements, then NaN measurements MUST be counted in the Count and the Sum MUST be NaN.
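The Count/Sum NaN rule above can be sketched in a few lines of Python (illustrative only; the function name is made up for this example): NaN observations still count towards Count, but they poison the Sum.

```python
import math

# Illustrative sketch of the rule above: NaN observations are counted in
# Count, but make the Sum NaN, masking the sum of the real measurements.

def histogram_count_and_sum(observations):
    count = len(observations)        # NaN observations are counted too
    total = math.fsum(observations)  # any NaN propagates into the Sum
    return count, total

count, total = histogram_count_and_sum([0.1, 0.2, float("nan")])
assert count == 3
assert math.isnan(total)
```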
If a Histogram includes a +Inf or -Inf measurement, then +Inf or -Inf MUST be counted in the Count and MUST be added to the Sum, potentially resulting in +Inf, -Inf or NaN in the Sum, the latter for example in case of adding +Inf to -Inf. Note that in this case the Sum of finite measurements is masked until the next reset of the Histogram.

A Histogram MetricPoint SHOULD have a Timestamp value called Start Timestamp. This can help ingestors discern between new metrics and long-running ones they did not see before.

If the Histogram Metric has MetricPoints with Classic Buckets, the Histogram's Metric's LabelSet MUST NOT have a "le" label name, because in case the MetricPoints are stored as classic histogram series with the `_bucket` suffix, the "le" label in the Histogram will conflict with the "le" label generated from the bucket thresholds.

The Histogram type is cumulative over time, but MAY be reset. When a Histogram is reset, the Sum, Count, Classic Buckets and Native Buckets MUST be reset to their zero state, and if the Start Timestamp is present then it MUST be set to the approximate reset time. Histogram resets can be useful for limiting the number of Native Buckets used by Histograms.

##### Classic Buckets

Every Classic Bucket MUST have a threshold. Classic Bucket thresholds within a MetricPoint MUST be unique. Classic Bucket thresholds MAY be negative.

A Classic Bucket MUST count the number of measured values less than or equal to its threshold, including measured values that are also counted in lower buckets. This allows monitoring systems to drop any non-+Inf bucket for performance/anti-denial-of-service reasons in a way that loses granularity but is still a valid Histogram.

As an example, for a metric representing request latency in seconds with Classic Buckets and thresholds 1, 2, 3, and +Inf, it follows that value_1 <= value_2 <= value_3 <= value_+Inf. If ten requests took 1 second each, the values of the 1, 2, 3, and +Inf buckets will all be equal to 10.

Histogram MetricPoints with Classic Buckets MUST have one Classic Bucket with a +Inf threshold. The +Inf bucket counts all measurements. The Count value MUST be equal to the value of the +Inf bucket.

Exposed Classic Bucket thresholds SHOULD stay constant over time and between targets whose metrics are intended to be aggregated. A change of thresholds may prevent the affected histograms from being part of the same operation (e.g. an aggregation of different metrics or a rate calculation over time).

If the NaN value is allowed, it MUST be counted in the +Inf bucket, and MUST NOT be counted in any other bucket. The rationale is that NaN does not belong to any bucket mathematically; however, instrumentation libraries traditionally put it into the +Inf bucket.

Classic Bucket values MAY have exemplars. The value of the exemplar MUST be within the Classic Bucket. Exemplars SHOULD be put into the Classic Bucket with the lowest threshold that includes the exemplar value. A Classic Bucket MUST NOT have more than one exemplar.

##### Native Buckets

Histogram MetricPoints with Native Buckets MUST have a Schema value. The Schema MUST be an 8 bit signed integer between -4 and 8 (inclusive); these are called Standard (exponential) schemas. Schema values outside the -4 to 8 range are reserved for future use and MUST NOT be used.
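The cumulative Classic Bucket rule and the worked example above can be reproduced with a short Python sketch (illustrative, not normative):

```python
import math

# Illustrative sketch: Classic Bucket i counts all values <= its
# threshold, so counts are non-decreasing and the +Inf bucket equals Count.

def classic_bucket_counts(observations, thresholds):
    assert thresholds[-1] == math.inf, "a +Inf bucket is mandatory"
    return [sum(1 for v in observations if v <= t) for t in thresholds]

# The worked example above: ten 1-second requests, thresholds 1, 2, 3, +Inf.
assert classic_bucket_counts([1.0] * 10, [1, 2, 3, math.inf]) == [10, 10, 10, 10]
# Cumulative counts are non-decreasing across thresholds:
assert classic_bucket_counts([0.5, 1.5, 9.0], [1, 2, 3, math.inf]) == [1, 2, 2, 3]
```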
In particular, the reserved Schema values are earmarked as follows:

* Schema values between -9 to -5 and 9 to 52 are reserved for use as Standard (exponential) Schemas.
* Schema value -53 is reserved for use for the Custom Buckets Schema.

For any Standard Schema n, the Histogram MetricPoint MAY contain positive and/or negative Native Buckets and MUST contain a zero Native Bucket. Empty positive or negative Native Buckets SHOULD NOT be present.

In case of Standard Schemas, the boundaries of a positive or negative Native Bucket with index i MUST be calculated as follows (using Python syntax):

* The upper inclusive limit of a positive Native Bucket: `(2**2**-n)**i`
* The lower exclusive limit of a positive Native Bucket: `(2**2**-n)**(i-1)`
* The lower inclusive limit of a negative Native Bucket: `-((2**2**-n)**i)`
* The upper exclusive limit of a negative Native Bucket: `-((2**2**-n)**(i-1))`

i is an integer number that MAY be negative.

There are exceptions to the rules above concerning the largest and smallest finite values representable as a float64 (called MaxFloat64 and MinFloat64 in the following) and the positive and negative infinity values (+Inf and -Inf):

* The positive Native Bucket that contains MaxFloat64 (according to the boundary formulas above) has an upper inclusive limit of MaxFloat64 (rather than the limit calculated by the formulas above, which would overflow float64).
* The next positive Native Bucket (index i+1 relative to the bucket from the previous item) has a lower exclusive limit of MaxFloat64 and an upper inclusive limit of +Inf. (It could be called a positive Native overflow Bucket.)
* The negative Native Bucket that contains MinFloat64 (according to the boundary formulas above) has a lower inclusive limit of MinFloat64 (rather than the limit calculated by the formulas above, which would underflow float64).
* The next negative Native Bucket (index i+1 relative to the bucket from the previous item) has an upper exclusive limit of MinFloat64 and a lower inclusive limit of -Inf. (It could be called a negative Native overflow Bucket.)

Native Buckets beyond the +Inf and -Inf buckets described above MUST NOT be used.

The boundaries of the zero Native Bucket are `[-threshold, threshold]` inclusively. The Zero threshold MUST be a non-negative float64 value (threshold >= 0.0). If the Zero threshold is positive (threshold > 0), then any measured value that falls into the zero Native Bucket MUST be counted towards the zero Native Bucket and MUST NOT be counted in any other Native Bucket. The Zero threshold SHOULD be equal to a lower limit of an arbitrary Native Bucket.

If the NaN value is not allowed, then the Count value MUST be equal to the sum of the negative, positive and zero Native Buckets. If the NaN value is allowed, it MUST NOT be counted in any Native Bucket, and MUST be counted towards the Count. The difference between the Count and the sum of the negative, positive and zero Native Buckets MUST be the number of NaN observations. The rationale is that NaN does not belong to any bucket mathematically.

A Histogram MetricPoint with Native Buckets MAY contain exemplars. The values of exemplars in a Histogram MetricPoint with Native Buckets SHOULD be evenly distributed to avoid only representing the bucket with the highest value and therefore most common case.
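Since the boundary formulas above are already given in Python syntax, they translate directly into a small helper (illustrative; this ignores the MaxFloat64/MinFloat64 overflow buckets and handles only positive buckets):

```python
# Illustrative helper for the Standard Schema boundary formulas above.

def positive_bucket_bounds(n: int, i: int):
    """(exclusive lower, inclusive upper) limits of positive Native
    Bucket i for Standard Schema n."""
    base = 2 ** 2 ** -n
    return base ** (i - 1), base ** i

# Schema 0 gives powers of two: bucket 2 covers (2, 4].
assert positive_bucket_bounds(0, 2) == (2.0, 4.0)
# Schema -1 squares the base: bucket 1 covers (1, 4].
assert positive_bucket_bounds(-1, 1) == (1.0, 4.0)
```

Higher schemas halve the bucket-width exponent each step: for schema 1 the base is `sqrt(2)`, so each bucket covers half as much of the exponential range and resolution doubles.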
#### GaugeHistogram

GaugeHistograms measure current distributions. Common examples are how long items have been waiting in a queue, or the size of the requests in a queue.

A GaugeHistogram MetricPoint MUST contain Gcount and Gsum values. The Gcount value MUST be equal to the number of measurements currently in the GaugeHistogram. The Gcount is a gauge semantically. The Gcount MUST be an integer and MUST NOT be NaN or negative. The Gsum value MUST be equal to the sum of all the measured values currently in the GaugeHistogram. The Gsum is a gauge semantically.

A GaugeHistogram MUST measure values that are not NaN in either [Classic Buckets](#classic-buckets) or [Native Buckets](#native-buckets) or both. Measuring NaN is different for Classic and Native Buckets; see their respective sections. If a GaugeHistogram stops measuring values in either Classic or Native Buckets and keeps measuring values in the other, it MUST clear and not expose the buckets it stopped measuring into. This avoids exposing different distributions from the two kinds of buckets at the same time.

Every bucket MUST have well-defined boundaries and a value. Boundaries of a bucket MUST NOT be NaN. Bucket values MUST be integers. Semantically, bucket values are gauges and MUST NOT be NaN or negative.

A GaugeHistogram SHOULD NOT include NaN measurements, as including NaN in the Gsum will make the Gsum equal to NaN and mask the sum of the real measurements for the lifetime of the time series. If a GaugeHistogram includes NaN measurements, then NaN measurements MUST be counted in the Gcount and the Gsum MUST be NaN. If a GaugeHistogram includes a +Inf or -Inf measurement, then +Inf or -Inf MUST be counted in the Gcount and MUST be added to the Gsum, potentially resulting in +Inf, -Inf or NaN in the Gsum, the latter for example in case of adding +Inf to -Inf. Note that in this case the Gsum of finite measurements is masked until the next reset of the GaugeHistogram.

If the GaugeHistogram Metric has MetricPoints with Classic Buckets, the GaugeHistogram's Metric's LabelSet MUST NOT have a "le" label name, because in case the MetricPoints are stored as classic histogram series with the `_bucket` suffix, the "le" label in the GaugeHistogram will conflict with the "le" label generated from the bucket thresholds.

The Classic and Native Buckets for a GaugeHistogram follow all the same rules as for a Histogram, with Gcount and Gsum playing the same role as Count and Sum. The exemplars for a GaugeHistogram follow all the same rules as for a Histogram.

#### Summary

Summaries also measure distributions of discrete events and MAY be used when Histograms are too expensive and/or an average event size is sufficient. They MAY also be used for backwards compatibility, because some existing instrumentation libraries expose precomputed quantiles and do not support Histograms. Precomputed quantiles SHOULD NOT be used, because quantiles are not aggregatable and the user often cannot deduce what timeframe they cover.

A Summary MetricPoint MAY consist of a Count, Sum, Start Timestamp, and a set of quantiles. Semantically, Count and Sum values are counters so MUST NOT be NaN or negative. Count MUST be an integer.

A MetricPoint in a Metric with the type Summary which contains Count or Sum values SHOULD have a Timestamp value called Start Timestamp. This can help ingestors discern between new metrics and long-running ones they did not see before. Start Timestamp MUST NOT relate to the collection period of quantile values.
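The constraints this spec places on Summary quantiles (quantiles between 0 and 1 inclusive, non-negative values, NaN when there were no events in the timeframe) can be checked with a short Python sketch (illustrative names, not normative):

```python
import math

# Illustrative validity check for a Summary's quantile map.

def is_valid_summary_quantiles(quantiles: dict) -> bool:
    return all(0.0 <= q <= 1.0 and (math.isnan(v) or v >= 0.0)
               for q, v in quantiles.items())

# Median of 50ms and a 95th percentile latency of 200ms:
assert is_valid_summary_quantiles({0.5: 0.05, 0.95: 0.2})
assert not is_valid_summary_quantiles({1.5: 0.2})   # quantile out of [0, 1]
assert not is_valid_summary_quantiles({0.5: -1.0})  # negative value
```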
This can help ingestors discern between new metrics and long-running ones it did not see before. Start Timestamp MUST NOT relate to the collection period of quantile values. Quantiles are a map from a quantile to a value. An example is a quantile 0.95 with value 0.2 in a metric called myapp\_http\_request\_duration\_seconds which means that the 95th percentile latency is 200ms over an unknown timeframe. If there are no events in the relevant timeframe, the value for a quantile MUST be NaN. A Quantile's Metric's LabelSet MUST NOT have "quantile" label name. Quantiles MUST be between 0 and 1 inclusive. Quantile values MUST NOT be negative. Quantile values SHOULD represent the recent values. Commonly this would be over the last 5-10 minutes. #### Unknown Unknown SHOULD NOT be used. Unknown MAY be used when it is impossible to determine the types of individual metrics from 3rd party systems. A point in a metric with the unknown type MUST have a single value. # Data transmission & wire formats The text wire format MUST be supported and is the default. The protobuf wire format MAY be supported and MUST ONLY be used after negotiation. The OpenMetrics formats are Regular Chomsky Grammars, making writing quick and small parsers possible. The text format compresses well, and protobuf is already binary and efficiently encoded. Partial or invalid expositions MUST be considered erroneous in their entirety. ### Protocol Negotiation All ingestor implementations MUST be able to ingest data secured with TLS 1.2 or later. All exposers SHOULD be able to emit data secured with TLS 1.2 or later. ingestor implementations SHOULD be able to ingest data from HTTP without TLS. All implementations SHOULD use TLS to transmit data. Negotiation of what version of the OpenMetrics format to use is out-of-band. For example for pull-based exposition over HTTP standard HTTP content type negotiation is used, and MUST default to the oldest version of the standard (i.e. 
1.0.0) if no newer version is requested. Push-based negotiation is inherently more complex, as the exposer typically initiates the connection. Producers MUST use the oldest version of the standard (i.e. 1.0.0) unless requested otherwise by the ingestor. ### Text format #### ABNF ABNF as per RFC 5234 "exposition" is the top level token of the ABNF. ```abnf exposition = metricset HASH SP eof [ LF ] metricset = \*metricfamily metricfamily = \*metric-descriptor \*metric metric-descriptor = HASH SP type SP (metricname / metricname-utf8) SP metric-type LF metric-descriptor =/ HASH SP help SP (metricname / metricname-utf8) SP escaped-string LF metric-descriptor =/ HASH SP unit SP (metricname / metricname-utf8) SP \*metricname-char LF metric = \*sample metric-type = counter / gauge / histogram / gaugehistogram / stateset metric-type =/ info / summary / unknown sample = metricname-and-labels SP number [SP timestamp] [SP start-timestamp] [exemplar] LF sample =/ metricname-and-labels SP "{" complex-value "}" [SP timestamp] [SP start-timestamp] \*exemplar LF exemplar = SP HASH SP labels-in-braces SP number [SP timestamp] metricname-and-labels = metricname [labels-in-braces] / name-and-labels-in-braces labels-in-braces = "{" [label \*(COMMA label)] "}" name-and-labels-in-braces = "{" metricname-utf8 \*(COMMA label) "}" label = label-key EQ DQUOTE escaped-string DQUOTE number = realnumber ; Case insensitive number =/ [SIGN] ("inf" / "infinity") number =/ "nan" timestamp = realnumber ; Not 100% sure this captures all float corner cases. ; Leading 0s explicitly okay realnumber = [SIGN] 1\*DIGIT realnumber =/ [SIGN] 1\*DIGIT ["." \*DIGIT] [ "e" [SIGN] 1\*DIGIT ] realnumber =/ [SIGN] \*DIGIT "." 1\*DIGIT [ "e" [SIGN] 1\*DIGIT ] ; RFC 5234 is case insensitive. 
; Uppercase eof = %d69.79.70 type = %d84.89.80.69 help = %d72.69.76.80 unit = %d85.78.73.84 ; Lowercase counter = %d99.111.117.110.116.101.114 gauge = %d103.97.117.103.101 histogram = | https://github.com/prometheus/docs/blob/main//docs/specs/om/open_metrics_spec_2_0.md | main | prometheus | [
…embedding vector (384 floats) truncated… ] | 0.130671 |
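The `number` token defined in the ABNF above (a `realnumber`, or case-insensitive `inf`/`infinity` with an optional sign, or an unsigned `nan`) can be checked with a small regular expression. The sketch below is my own illustration, not part of the spec; the name `is_valid_number` is hypothetical.

```python
import re

# Sketch of the ABNF `number` token above (not normative):
# signed inf/infinity, unsigned nan, or a realnumber - all case-insensitive,
# with leading zeros explicitly allowed.
NUMBER_RE = re.compile(
    r"""^(?:
        [+-]?inf(?:inity)?              # [SIGN] ("inf" / "infinity")
      | nan                             # "nan" (no sign allowed by the ABNF)
      | [+-]?(?:                        # realnumber:
            [0-9]+(?:\.[0-9]*)?         #   1*DIGIT ["." *DIGIT]
          | [0-9]*\.[0-9]+              #   *DIGIT "." 1*DIGIT
        )(?:e[+-]?[0-9]+)?              #   [ "e" [SIGN] 1*DIGIT ]
    )$""",
    re.IGNORECASE | re.VERBOSE,
)

def is_valid_number(token: str) -> bool:
    return NUMBER_RE.fullmatch(token) is not None
```

Note that `-nan` is rejected: the grammar only attaches a sign to `inf`/`infinity` and real numbers.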
[SIGN] 1\*DIGIT ["." \*DIGIT] [ "e" [SIGN] 1\*DIGIT ] realnumber =/ [SIGN] \*DIGIT "." 1\*DIGIT [ "e" [SIGN] 1\*DIGIT ] ; RFC 5234 is case insensitive. ; Uppercase eof = %d69.79.70 type = %d84.89.80.69 help = %d72.69.76.80 unit = %d85.78.73.84 ; Lowercase counter = %d99.111.117.110.116.101.114 gauge = %d103.97.117.103.101 histogram = %d104.105.115.116.111.103.114.97.109 gaugehistogram = gauge histogram stateset = %d115.116.97.116.101.115.101.116 info = %d105.110.102.111 summary = %d115.117.109.109.97.114.121 unknown = %d117.110.107.110.111.119.110 BS = "\" EQ = "=" COMMA = "," HASH = "#" SIGN = "-" / "+" metricname = metricname-initial-char 0\*metricname-char metricname-char = metricname-initial-char / DIGIT metricname-initial-char = ALPHA / "\_" / ":" metricname-utf8 = DQUOTE escaped-string-non-empty DQUOTE label-key = label-name / DQUOTE escaped-string-non-empty DQUOTE label-name = label-name-initial-char \*label-name-char label-name-char = label-name-initial-char / DIGIT label-name-initial-char = ALPHA / "\_" escaped-string = \*escaped-char escaped-string-non-empty = 1\*escaped-char escaped-char = normal-char escaped-char =/ BS ("n" / DQUOTE / BS) escaped-char =/ BS normal-char ; Any unicode character, except newline, double quote, and backslash normal-char = %x00-09 / %x0B-21 / %x23-5B / %x5D-D7FF / %xE000-10FFFF ; Lowercase st @ timestamp start-timestamp = %d115.116 "@" timestamp ; Complex values complex-value = nativehistogram nativehistogram = nh-count "," nh-sum "," nh-schema "," nh-zero-threshold "," nh-zero-count [ "," nh-negative-spans "," nh-negative-buckets ] [ "," nh-positive-spans "," nh-positive-buckets ] ; count:x nh-count = %d99.111.117.110.116 ":" non-negative-integer ; sum:f allows real numbers and +-Inf and NaN nh-sum = %d115.117.109 ":" number ; schema:i nh-schema = %d115.99.104.101.109.97 ":" integer ; zero\_threshold:f nh-zero-threshold = %d122.101.114.111 "\_" %d116.104.114.101.115.104.111.108.100 ":" realnumber ; zero\_count:x 
nh-zero-count = %d122.101.114.111 "\_" %d99.111.117.110.116 ":" non-negative-integer ; negative\_spans:[1:2,3:4] and negative\_spans:[] nh-negative-spans = %d110.101.103.97.116.105.118.101 "\_" %d115.112.97.110.115 ":" "[" [nh-spans] "]" nh-positive-spans = %d112.111.115.105.116.105.118.101 "\_" %d115.112.97.110.115 ":" "[" [nh-spans] "]" ; Spans can start from any index, even negative, however subsequent spans ; can only advance the index, not decrease it. nh-spans = nh-start-span \*("," nh-span) nh-start-span = integer ":" positive-integer nh-span = non-negative-integer ":" positive-integer ; negative\_buckets:[1,2,3] and positive\_buckets:[1,2,3] nh-negative-buckets = %d110.101.103.97.116.105.118.101 "\_" %d98.117.99.107.101.116.115 ":" "[" [nh-buckets] "]" nh-positive-buckets = %d112.111.115.105.116.105.118.101 "\_" %d98.117.99.107.101.116.115 ":" "[" [nh-buckets] "]" nh-buckets = non-negative-integer \*("," non-negative-integer) integer = [SIGN] 1\*"0" / [SIGN] positive-integer non-negative-integer = ["+"] 1\*"0" / ["+"] positive-integer ; Leading 0s explicitly okay. positive-integer = \*"0" positive-digit \*DIGIT positive-digit = "1" / "2" / "3" / "4" / "5" / "6" / "7" / "8" / "9" ``` #### Overall Structure UTF-8 MUST be used. Byte order markers (BOMs) MUST NOT be used. As an important reminder for implementers, byte 0 is valid UTF-8 while, for example, byte 255 is not. The content type MUST be: ``` application/openmetrics-text; version=1.0.0; charset=utf-8 ``` Line endings MUST be signalled with line feed (\n) and MUST NOT contain carriage returns (\r). Expositions MUST end with EOF and SHOULD end with `EOF\n`. An example of a complete exposition: ```openmetrics # TYPE acme\_http\_router\_request\_seconds summary # UNIT acme\_http\_router\_request\_seconds seconds # HELP acme\_http\_router\_request\_seconds Latency though all of ACME's HTTP request router. 
acme\_http\_router\_request\_seconds\_sum{path="/api/v1",method="GET"} 9036.32 st@1605281325.0 acme\_http\_router\_request\_seconds\_count{path="/api/v1",method="GET"} 807283.0 st@1605281325.0 acme\_http\_router\_request\_seconds\_sum{path="/api/v2",method="POST"} 479.3 st@1605301325.0 acme\_http\_router\_request\_seconds\_count{path="/api/v2",method="POST"} 34.0 st@1605301325.0 # TYPE go\_goroutines gauge # HELP go\_goroutines Number of goroutines that currently exist. go\_goroutines 69 # TYPE process\_cpu\_seconds counter # UNIT process\_cpu\_seconds seconds # HELP process\_cpu\_seconds Total user and system CPU time spent in seconds. process\_cpu\_seconds\_total 4.20072246e+06 # TYPE acme\_http\_request\_seconds histogram # UNIT acme\_http\_request\_seconds seconds # HELP acme\_http\_request\_seconds Latency histogram of all of ACME's HTTP requests. acme\_http\_request\_seconds{path="/api/v1",method="GET"} {count:2,sum:1.2e2,schema:0,zero\_threshold:1e-4,zero\_count:0,positive\_spans:[1:2],positive\_buckets:[1,1]} st@1605301325.0 acme\_http\_request\_seconds\_count{path="/api/v1",method="GET"} 2 st@1605301325.0 acme\_http\_request\_seconds\_sum{path="/api/v1",method="GET"} 1.2e2 st@1605301325.0 acme\_http\_request\_seconds\_buckets{path="/api/v1",method="GET",le="0.5"} 1 st@1605301325.0 acme\_http\_request\_seconds\_buckets{path="/api/v1",method="GET",le="1"} 2 st@1605301325.0 acme\_http\_request\_seconds\_buckets{path="/api/v1",method="GET",le="+Inf"} 2 st@1605301325.0 # TYPE "foodb.read.errors" counter # HELP "foodb.read.errors" The number of errors in the read path for fooDb. {"foodb.read.errors","service.name"="my\_service"} 3482 # EOF ``` ##### UTF-8 Quoting Metric names not conforming to the ABNF definition of `metricname` MUST be | https://github.com/prometheus/docs/blob/main//docs/specs/om/open_metrics_spec_2_0.md | main | prometheus | [
…embedding vector (384 floats) truncated… ] | 0.128664 |
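The framing rules in the Overall Structure section above (valid UTF-8, no byte order mark, line feeds only, exposition terminated by the EOF line) can be sketched as a quick pre-parse check. This is a simplification of mine, not a full parser; `check_framing` is a hypothetical name.

```python
def check_framing(payload: bytes) -> None:
    """Sketch (not a full parser) of a few framing rules from the section above."""
    text = payload.decode("utf-8")        # the exposition MUST be valid UTF-8
    if text.startswith("\ufeff"):
        raise ValueError("byte order marks MUST NOT be used")
    if "\r" in text:
        raise ValueError("carriage returns MUST NOT appear; line endings are \\n")
    # Expositions MUST end with the EOF line, optionally followed by one LF.
    if not (text.endswith("\n# EOF\n") or text.endswith("\n# EOF") or text in ("# EOF", "# EOF\n")):
        raise ValueError("exposition MUST end with '# EOF'")
```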
st@1605301325.0 acme\_http\_request\_seconds\_sum{path="/api/v1",method="GET"} 1.2e2 st@1605301325.0 acme\_http\_request\_seconds\_buckets{path="/api/v1",method="GET",le="0.5"} 1 st@1605301325.0 acme\_http\_request\_seconds\_buckets{path="/api/v1",method="GET",le="1"} 2 st@1605301325.0 acme\_http\_request\_seconds\_buckets{path="/api/v1",method="GET",le="+Inf"} 2 st@1605301325.0 # TYPE "foodb.read.errors" counter # HELP "foodb.read.errors" The number of errors in the read path for fooDb. {"foodb.read.errors","service.name"="my\_service"} 3482 # EOF ``` ##### UTF-8 Quoting Metric names not conforming to the ABNF definition of `metricname` MUST be enclosed in double quotes and the alternative UTF-8 syntax MUST be used. In these MetricPoints, the quoted metric name MUST be moved inside the brackets without a label name and equal sign, in accordance with the ABNF. The metric names MUST be enclosed in double quotes in TYPE, UNIT, and HELP lines. Quoting and the alternative metric syntax MAY be used for any metric name, regardless of whether the name requires quoting or not. Label names not conforming to the `label-name` ABNF definition MUST be enclosed in double quotes. Any label name MAY be enclosed in double quotes. Expressed as regular expressions, metric names that don't need to be enclosed in quotes must match: `^[a-zA-Z\_:][a-zA-Z0-9\_:]\*$`. For label names, the string must match: `^[a-zA-Z\_][a-zA-Z0-9\_]\*$`. Complete example: ```openmetrics # TYPE "process.cpu.seconds" counter # UNIT "process.cpu.seconds" seconds # HELP "process.cpu.seconds" Total user and system CPU time spent in seconds. {"process.cpu.seconds","node.name"="my\_node"} 4.20072246e+06 # TYPE "quoting\_example" gauge # HELP "quoting\_example" Number of goroutines that currently exist. 
{"quoting\_example","foo"="bar"} 4.5 # EOF ``` ##### Escaping Where the ABNF notes escaping, the following escaping MUST be applied Line feed, `\n` (0x0A) -> literally `\\n` (Bytecode 0x5c 0x6e) Double quotes -> `\\"` (Bytecode 0x5c 0x22) Backslash -> `\\\\` (Bytecode 0x5c 0x5c) A double backslash SHOULD be used to represent a backslash character. A single backslash SHOULD NOT be used for undefined escape sequences. As an example, `\\\\a` is equivalent and preferable to `\\a`. Escaping MUST also be applied to quoted UTF-8 strings. ##### Numbers Integer numbers MUST NOT have a decimal point. Examples are `23`, `0042`, and `1341298465647914`. Floating point numbers MUST be represented either with a decimal point or using scientific notation. Examples are `8903.123421` and `1.89e-7`. Floating point numbers MUST fit within the range of a 64-bit floating point value as defined by IEEE 754, but MAY require so many bits in the mantissa that results in lost precision. This MAY be used to encode nanosecond resolution timestamps. Arbitrary integer and floating point rendering of numbers MUST NOT be used for "quantile" and "le" label values as in section "Canonical Numbers". They MAY be used anywhere else numbers are used. ###### ComplexValues ComplexValue is represented as structured data with fields. There MUST NOT be any whitespace around fields. See the ABNF for exact details about the format and possible values. ###### Considerations: Canonical Numbers Numbers in the "le" label values of histograms and "quantile" label values of summary metrics are special in that they're label values, and label values are intended to be opaque. As end users will likely directly interact with these string values, and as many monitoring systems lack the ability to deal with them as first-class numbers, it would be beneficial if a given number had the exact same text representation. 
Consistency is highly desirable, but real world implementations of languages and their runtimes make mandating this impractical. The most important common quantiles are 0.5, 0.95, 0.9, 0.99, 0.999 and bucket values representing values from a millisecond up to 10.0 seconds, because those cover cases like latency SLAs and Apdex for typical web services. Powers of ten are covered to try to ensure that the switch between fixed point and exponential rendering is consistent as this varies across runtimes. The target rendering is equivalent to the default Go rendering of float64 values (i.e. %g), | https://github.com/prometheus/docs/blob/main//docs/specs/om/open_metrics_spec_2_0.md | main | prometheus | [
…embedding vector (384 floats) truncated… ] | 0.010708 |
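The quoting decision above reduces to the two regular expressions quoted in the text, combined with the escaping table (backslash, double quote, line feed). The following sketch is illustrative; the helper names are mine, not from the spec.

```python
import re

# The two patterns quoted in the section above: names matching them
# may be written bare; anything else MUST be double-quoted and escaped.
BARE_METRIC_NAME = re.compile(r"^[a-zA-Z_:][a-zA-Z0-9_:]*$")
BARE_LABEL_NAME = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$")

def escape(s: str) -> str:
    # Escape backslashes first, so the backslashes introduced for
    # \n and \" below are not escaped a second time.
    return s.replace("\\", "\\\\").replace("\n", "\\n").replace('"', '\\"')

def render_metric_name(name: str) -> str:
    if BARE_METRIC_NAME.match(name):
        return name
    return '"' + escape(name) + '"'
```

For example, `process_cpu_seconds` stays bare while `foodb.read.errors` is emitted quoted, matching the examples above.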
cases like latency SLAs and Apdex for typical web services. Powers of ten are covered to try to ensure that the switch between fixed point and exponential rendering is consistent as this varies across runtimes. The target rendering is equivalent to the default Go rendering of float64 values (i.e. %g), with a .0 appended in case there is no decimal point or exponent to make clear that they are floats. Exposers MUST produce output for positive infinity as +Inf. Exposers SHOULD produce output for the values 0.0 up to 10.0 in 0.001 increments in line with the following examples: 0.0 0.001 0.002 0.01 0.1 0.9 0.95 0.99 0.999 1.0 1.7 10.0 Exposers SHOULD produce output for the values 1e-10 up to 1e+10 in powers of ten in line with the following examples: 1e-10 1e-09 1e-05 0.0001 0.1 1.0 100000.0 1e+06 1e+10 Parsers MUST NOT reject inputs which are outside of the canonical values merely because they are not consistent with the canonical values. For example, 1.1e-4 must not be rejected, even though it is not the consistent rendering of 0.00011. Exposers SHOULD follow these patterns for non-canonical numbers; the intention is that by adjusting the rendering algorithm to be consistent for these values, the vast majority of other values will also have consistent rendering. Exposers using only a few particular le/quantile values could also hardcode them. In languages such as C, where a minimal floating point rendering algorithm such as Grisu3 is not readily available, exposers MAY use a different rendering. A warning to implementers in C and other languages that share its printf implementation: the standard precision of %f, %e and %g is only six significant digits, while 17 significant digits are required for full precision, e.g. `printf("%.17g", d)`. ##### Timestamps Exponential float rendering SHOULD NOT be used for timestamps if nanosecond precision is needed, as rendering of a float64 does not have sufficient precision, e.g. `1604676851.123456789`. 
#### MetricFamily There MUST NOT be an explicit separator between MetricFamilies. The next MetricFamily MUST be signalled with either metadata or a new sample metric name which cannot be part of the previous MetricFamily. MetricFamilies MUST NOT be interleaved. ##### MetricFamily metadata There are four pieces of metadata: The MetricFamily name, TYPE, UNIT and HELP. An example of the metadata for a counter Metric called foo is: ```openmetrics-add-eof # TYPE foo counter ``` If no TYPE is exposed, the MetricFamily MUST be of type Unknown. If a unit is specified it MUST be provided in a UNIT metadata line. In addition, an underscore and the unit SHOULD be the suffix of the MetricFamily name. Be aware that exposing metrics without the unit being a suffix of the MetricFamily name directly to end-users may reduce the usability due to confusion about what the metric's unit is. A valid example for a foo\_seconds metric with a unit of "seconds": ```openmetrics-add-eof # TYPE foo\_seconds counter # UNIT foo\_seconds seconds ``` A valid, but discouraged example, where the unit is not a suffix on the name: ```openmetrics-add-eof # TYPE foo counter # UNIT foo seconds ``` It is also valid to have: ```openmetrics-add-eof # TYPE foo\_seconds counter ``` If the unit is known it SHOULD be provided. The value of a UNIT or HELP line MAY be empty. This MUST be treated as if no metadata line for the MetricFamily existed. ```openmetrics-add-eof # TYPE foo\_seconds counter # UNIT foo\_seconds seconds # HELP foo\_seconds Some text and \n some \" escaping ``` See the UTF-8 Quoting section for circumstances where the metric name MUST be enclosed in double quotes. There MUST NOT be more than one of each | https://github.com/prometheus/docs/blob/main//docs/specs/om/open_metrics_spec_2_0.md | main | prometheus | [
…embedding vector (384 floats) truncated… ] | -0.039178 |
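The canonical rendering described above (Go's default %g, with `.0` appended when there is no decimal point or exponent) can be approximated in Python, since `%g` behaves the same for all of the listed canonical values. This is a sketch of mine, not a normative implementation: it assumes values fitting `%g`'s default six significant digits (all the canonical examples do; full-precision rendering needs 17, as the warning above notes), and the `NaN` spelling is one of the case-insensitive forms the ABNF allows.

```python
import math

def canonical(value: float) -> str:
    """Sketch of the canonical number rendering above (assumption: the
    value fits %g's default six significant digits)."""
    if math.isinf(value):
        return "+Inf" if value > 0 else "-Inf"
    if math.isnan(value):
        return "NaN"
    s = "%g" % value                  # matches Go's default %g for these values
    if "." not in s and "e" not in s:
        s += ".0"                     # mark whole numbers as floats
    return s
```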
for the MetricFamily existed. ```openmetrics-add-eof # TYPE foo\_seconds counter # UNIT foo\_seconds seconds # HELP foo\_seconds Some text and \n some \" escaping ``` See the UTF-8 Quoting section for circumstances where the metric name MUST be enclosed in double quotes. There MUST NOT be more than one of each type of metadata line for a MetricFamily. The ordering SHOULD be TYPE, UNIT, HELP. Aside from this metadata and the EOF line at the end of the message, you MUST NOT expose lines beginning with a #. ##### Metric Metrics MUST NOT be interleaved. See the example in "Text format -> MetricPoint". Labels A sample without labels or a timestamp and the value 0 MUST be rendered either like: ```openmetrics-add-eof bar\_seconds\_count 0 ``` or like ```openmetrics-add-eof bar\_seconds\_count{} 0 ``` Label values MAY be any valid UTF-8 value, so escaping MUST be applied as per the ABNF. A valid example with two labels: ```openmetrics-add-eof bar\_seconds\_count{a="x",b="escaping\" example \n "} 0 ``` Metric names and label names MAY also be any valid UTF-8 value, and under certain circumstances they MUST be quoted and escaped per the ABNF. See the UTF-8 Quoting section for specifics. ```openmetrics-add-eof {"\"bar\".seconds.count","b\\"="escaping\" example \n "} 0 ``` The rendering of values for a MetricPoint can include additional labels (e.g. the "le" label for a Histogram type), which MUST be rendered in the same way as a Metric's own LabelSet. #### MetricPoint MetricPoints MUST NOT be interleaved. 
A correct example where there were multiple MetricPoints and Samples within a MetricFamily would be: ```openmetrics-add-eof # TYPE foo\_seconds summary # UNIT foo\_seconds seconds foo\_seconds\_count{a="bb"} 0 123 foo\_seconds\_sum{a="bb"} 0 123 foo\_seconds\_count{a="bb"} 0 456 foo\_seconds\_sum{a="bb"} 0 456 foo\_seconds\_count{a="ccc"} 0 123 foo\_seconds\_sum{a="ccc"} 0 123 foo\_seconds\_count{a="ccc"} 0 456 foo\_seconds\_sum{a="ccc"} 0 456 ``` An incorrect example where Metrics are interleaved: ``` # TYPE foo\_seconds summary # UNIT foo\_seconds seconds foo\_seconds\_count{a="bb"} 0 123 foo\_seconds\_count{a="ccc"} 0 123 foo\_seconds\_count{a="bb"} 0 456 foo\_seconds\_count{a="ccc"} 0 456 ``` An incorrect example where MetricPoints are interleaved: ``` # TYPE foo\_seconds summary # UNIT foo\_seconds seconds foo\_seconds\_count{a="bb"} 0 123 foo\_seconds\_count{a="bb"} 0 456 foo\_seconds\_sum{a="bb"} 0 123 foo\_seconds\_sum{a="bb"} 0 456 ``` #### Metric types ##### Gauge The Sample MetricName for the value of a MetricPoint for a MetricFamily of type Gauge MUST NOT have a suffix. 
An example MetricFamily with a Metric with no labels and a MetricPoint with no timestamp: ```openmetrics-add-eof # TYPE foo gauge foo 17.0 ``` An example of a MetricFamily with two Metrics with a label and MetricPoints with no timestamp: ```openmetrics-add-eof # TYPE foo gauge foo{a="bb"} 17.0 foo{a="ccc"} 17.0 ``` An example of a MetricFamily with no Metrics: ```openmetrics-add-eof # TYPE foo gauge ``` An example with a Metric with a label and a MetricPoint with a timestamp: ```openmetrics-add-eof # TYPE foo gauge foo{a="b"} 17.0 1520879607.789 ``` An example with a Metric with no labels and MetricPoint with a timestamp: ```openmetrics-add-eof # TYPE foo gauge foo 17.0 1520879607.789 ``` An example with a Metric with no labels and two MetricPoints with timestamps: ```openmetrics-add-eof # TYPE foo gauge foo 17.0 123 foo 18.0 456 ``` ##### Counter The MetricPoint's Total Value Sample MetricName SHOULD have the suffix `\_total`. If present, the MetricPoint's Start Timestamp MUST be inlined with the Metric point with a `st@` prefix. If the value's timestamp is present, the Start Timestamp MUST be added right after it. If exemplar is present, the Start Timestamp MUST be added before it. Be aware that exposing metrics without `\_total` being a suffix of the MetricFamily name directly to end-users may reduce the usability due to confusion about what the metric's type is. An example with a Metric with no labels, and a MetricPoint | https://github.com/prometheus/docs/blob/main//docs/specs/om/open_metrics_spec_2_0.md | main | prometheus | [
…embedding vector (384 floats) truncated… ] | 0.115156 |
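The field ordering rules above (value, then the optional timestamp, then the optional `st@` Start Timestamp) can be sketched as a small line formatter. The function below is illustrative only and its name is hypothetical; it takes pre-rendered number strings to sidestep float formatting, and omits exemplars.

```python
def render_sample(name, labels, value, timestamp=None, start_timestamp=None):
    # Field order per the rules above:
    #   name{labels} value [timestamp] [st@start-timestamp] [exemplar]
    # (exemplars omitted; all numbers are passed as pre-rendered strings)
    series = name
    if labels:
        series += "{" + ",".join('%s="%s"' % (k, v) for k, v in labels.items()) + "}"
    parts = [series, value]
    if timestamp is not None:
        parts.append(timestamp)
    if start_timestamp is not None:
        parts.append("st@" + start_timestamp)
    return " ".join(parts)
```

For instance, a counter MetricPoint with both timestamps renders as `foo_total 17.0 1520879607.789 st@1520430000.123`, matching the example above.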
present, the Start Timestamp MUST be added before it. Be aware that exposing metrics without `\_total` being a suffix of the MetricFamily name directly to end-users may reduce the usability due to confusion about what the metric's type is. An example with a Metric with no labels, and a MetricPoint with no timestamp and no Start Timestamp: ```openmetrics-add-eof # TYPE foo counter foo\_total 17.0 ``` An example with a Metric with no labels, and a MetricPoint with a timestamp and no Start Timestamp: ```openmetrics-add-eof # TYPE foo counter foo\_total 17.0 1520879607.789 ``` An example with a Metric with no labels, and a MetricPoint with no timestamp and a Start Timestamp: ```openmetrics-add-eof # TYPE foo counter foo\_total 17.0 st@1520430000.123 ``` An example with a Metric with no labels, and a MetricPoint with a timestamp and a Start Timestamp: ```openmetrics-add-eof # TYPE foo counter foo\_total 17.0 1520879607.789 st@1520430000.123 ``` An example with a Metric with no labels, and a MetricPoint without the `\_total` suffix and with a timestamp and a start timestamp: ```openmetrics-add-eof # TYPE foo counter foo 17.0 1520879607.789 st@1520879607.789 ``` Exemplars MAY be attached to the MetricPoint's Total sample. An example with a Metric with no labels, and a MetricPoint with a timestamp and a Start Timestamp and an exemplar: ```openmetrics-add-eof # TYPE foo counter foo\_total 17.0 1520879607.789 st@1520430000.123 # {trace\_id="KOO5S4vxi0o"} 0.67 ``` ##### StateSet The Sample MetricName for the value of a MetricPoint for a MetricFamily of type StateSet MUST NOT have a suffix. StateSets MUST have one sample per State in the MetricPoint. Each State's sample MUST have a label with the MetricFamily name as the label name and the State name as the label value. The State sample's value MUST be 1 if the State is true and MUST be 0 if the State is false. 
An example with the states "a", "bb", and "ccc" in which only the value bb is enabled and the metric name is foo: ```openmetrics-add-eof # TYPE foo stateset foo{foo="a"} 0 foo{foo="bb"} 1 foo{foo="ccc"} 0 ``` An example of an "entity" label on the Metric: ```openmetrics-add-eof # TYPE foo stateset foo{entity="controller",foo="a"} 1.0 foo{entity="controller",foo="bb"} 0.0 foo{entity="controller",foo="ccc"} 0.0 foo{entity="replica",foo="a"} 1.0 foo{entity="replica",foo="bb"} 0.0 foo{entity="replica",foo="ccc"} 1.0 ``` ##### Info The Sample MetricName for the value of a MetricPoint for a MetricFamily of type Info MUST have the suffix `\_info`. The Sample value MUST always be 1. An example of a Metric with no labels, and one MetricPoint value with "name" and "version" labels: ```openmetrics-add-eof # TYPE foo info foo\_info{name="pretty name",version="8.2.7"} 1 ``` An example of a Metric with label "entity" and one MetricPoint value with “name” and “version” labels: ```openmetrics-add-eof # TYPE foo info foo\_info{entity="controller",name="pretty name",version="8.2.7"} 1.0 foo\_info{entity="replica",name="prettier name",version="8.1.9"} 1.0 ``` Metric labels and MetricPoint value labels MAY be in any order. ##### Summary If present, the MetricPoint's Sum Value Sample MetricName MUST have the suffix `\_sum`. If present, the MetricPoint's Count Value MetricName MUST have the suffix `\_count`. If present, the MetricPoint's Quantile Values MUST specify the quantile measured using a label with a label name of "quantile" and with a label value of the quantile measured. If present the MetricPoint's Start Timestamp MUST be inlined with the Metric point with a `st@` prefix. If the value's timestamp is present, the Start Timestamp MUST be added right after it. If exemplar is present, the Start Timestamp MUST be added before it. Start Timestamp MUST be appended to all Quantile Values, to the MetricPoint's Sum and MetricPoint's Count. 
An example of a Metric with no labels and a MetricPoint with Sum, Count and Start Timestamp values: ```openmetrics-add-eof # TYPE foo summary foo\_count 17.0 st@1520430000.123 foo\_sum 324789.3 | https://github.com/prometheus/docs/blob/main//docs/specs/om/open_metrics_spec_2_0.md | main | prometheus | [
…embedding vector (384 floats) truncated… ] | 0.068971 |
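The StateSet rules above (one sample per State, the MetricFamily name reused as the label name, value 1 for true and 0 for false) can be sketched as a tiny renderer. This is my own illustration, not spec code.

```python
def render_stateset(family: str, states: dict) -> str:
    # One sample per State: the family name doubles as the label name,
    # the State name is the label value, and the value is 1 (true) or 0 (false).
    lines = ["# TYPE %s stateset" % family]
    for state, enabled in states.items():
        lines.append('%s{%s="%s"} %d' % (family, family, state, 1 if enabled else 0))
    return "\n".join(lines)
```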
Timestamp MUST be added before it. Start Timestamp MUST be appended to all Quantile Values, to the MetricPoint's Sum and MetricPoint's Count. An example of a Metric with no labels and a MetricPoint with Sum, Count and Start Timestamp values: ```openmetrics-add-eof # TYPE foo summary foo\_count 17.0 st@1520430000.123 foo\_sum 324789.3 st@1520430000.123 ``` An example of a Metric with no labels and a MetricPoint with two quantiles and Start Timestamp values: ```openmetrics-add-eof # TYPE foo summary foo{quantile="0.95"} 123.7 st@1520430000.123 foo{quantile="0.99"} 150.0 st@1520430000.123 ``` Quantiles MAY be in any order. ##### Histogram with Classic Buckets The MetricPoint's Sum Value Sample MetricName MUST have the suffix `\_sum`. The MetricPoint's Count Value Sample MetricName MUST have the suffix `\_count`. The MetricPoint's Classic Bucket values Sample MetricNames MUST have the suffix `\_bucket`. If present the MetricPoint's Start Timestamp MUST be inlined with the Metric point with a `st@` prefix. If the value's timestamp is present, the Start Timestamp MUST be added right after it. If exemplar is present, the Start Timestamp MUST be added before it. Start Timestamp MUST be appended to all Classic Bucket values, to the MetricPoint's Sum and MetricPoint's Count. Classic Buckets MUST be sorted in number increasing order of "le", and the value of the "le" label MUST follow the rules for Canonical Numbers. All Classic Buckets MUST be present, even ones with the value 0. An example of a Metric with no labels and a MetricPoint with Sum, Count, and Start Timestamp values, and with 12 Classic Buckets. 
A wide and atypical but valid variety of “le” values is shown on purpose: ```openmetrics-add-eof # TYPE foo histogram foo\_bucket{le="0.0"} 0 st@1520430000.123 foo\_bucket{le="1e-05"} 0 st@1521430000.123 foo\_bucket{le="0.0001"} 5 st@1521430020.123 foo\_bucket{le="0.1"} 8 st@1520430321.123 foo\_bucket{le="1.0"} 10 st@1522430000.123 foo\_bucket{le="10.0"} 11 st@1520430123.123 foo\_bucket{le="100000.0"} 11 st@1521430010.123 foo\_bucket{le="1e+06"} 15 st@1520430301.123 foo\_bucket{le="1e+23"} 16 st@1521430001.123 foo\_bucket{le="1.1e+23"} 17 st@1522430220.123 foo\_bucket{le="+Inf"} 17 st@1520430000.123 foo\_count 17 st@1520430000.123 foo\_sum 324789.3 st@1520430000.123 ``` ##### Histogram with Native Buckets The MetricPoint's value MUST be a ComplexValue. The ComplexValue MUST include the Count, Sum, Schema, Zero Threshold, Zero Native Bucket value as the fields `count`, `sum`, `schema`, `zero\_threshold`, `zero\_count`, in this order. If there are no negative Native Buckets, then the fields `negative\_spans` and `negative\_buckets` SHOULD be omitted. If there are no positive Native Buckets, then the fields `positive\_spans` and `positive\_buckets` SHOULD be omitted. If there are negative (and/or positive) Native Buckets, then the fields `negative\_spans`, `negative\_buckets` (and/or `positive\_spans`, `positive\_buckets`) MUST be present in this order after the `zero\_count` field. With the exception of the `sum` and `zero\_threshold` field, all numbers MUST be integers and MUST NOT include dot '.' or exponent 'e'. Native Bucket values MUST be ordered by their index, and their values MUST be placed in the `negative\_buckets` (and/or `positive\_buckets`) fields. Native Buckets that have a value of 0 SHOULD NOT be present. 
To map the `negative\_buckets` (and/or `positive\_buckets`) back to their indices, the `negative\_spans` (and/or `positive\_spans`) field MUST be constructed in the following way: Each span consists of a pair of numbers, an integer called offset and an non-negative integer called length. Only the first span in each list can have a negative offset. It defines the index of the first bucket in its corresponding `negative\_buckets` (and/or `positive\_buckets`). The length defines the number of consecutive buckets the bucket list starts with. The offsets of the following spans define the number of excluded (and thus unpopulated buckets). The lengths define the number of consecutive buckets in the list following the excluded buckets. An example of when to keep empty positive or negative Native Buckets is to reduce the number of spans needed to represent the case where the offset between two spans is just 1, | https://github.com/prometheus/docs/blob/main//docs/specs/om/open_metrics_spec_2_0.md | main | prometheus | [
…embedding vector (384 floats) truncated… ] | 0.036945 |
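The span construction described above (the first span's offset is the absolute, possibly negative, starting index; each later span's offset counts the excluded empty buckets before it) can be sketched as an encoder from a sparse index-to-count mapping. The function name and the tuple representation are my own choices, not from the spec.

```python
def to_spans(buckets: dict):
    """Encode a sparse {bucket index: count} mapping into (spans, values),
    where each span is an (offset, length) pair per the rules above."""
    spans, values = [], []
    prev = None
    for idx in sorted(buckets):
        values.append(buckets[idx])
        if prev is not None and idx == prev + 1:
            # Consecutive bucket: extend the current span's length.
            spans[-1] = (spans[-1][0], spans[-1][1] + 1)
        elif prev is None:
            # First span: offset is the absolute starting index (may be negative).
            spans.append((idx, 1))
        else:
            # Later spans: offset is the number of excluded (empty) buckets.
            spans.append((idx - prev - 1, 1))
        prev = idx
    return spans, values
```

Feeding it the populated buckets behind the example above reproduces `positive_spans:[-1:2,3:4]` with `positive_buckets:[5,7,10,9,8,8]`.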
unpopulated buckets). The lengths define the number of consecutive buckets in the list following the excluded buckets. An example of when to keep empty positive or negative Native Buckets is to reduce the number of spans needed: where the offset between two spans is just 1, including one empty bucket reduces the number of spans by one. The sum of all length values in each span list MUST be equal to the length of the corresponding bucket list.

An example with all fields:

```openmetrics-add-eof
# TYPE acme_http_request_seconds histogram
acme_http_request_seconds{path="/api/v1",method="GET"} {count:59,sum:1.2e2,schema:7,zero_threshold:1e-4,zero_count:0,negative_spans:[1:2],negative_buckets:[5,7],positive_spans:[-1:2,3:4],positive_buckets:[5,7,10,9,8,8]} st@1520430000.123
```

An example without any buckets in use:

```openmetrics-add-eof
# TYPE acme_http_request_seconds histogram
acme_http_request_seconds{path="/api/v1",method="GET"} {count:0,sum:0,schema:3,zero_threshold:1e-4,zero_count:0} st@1520430000.123
```

##### Histogram with both Classic and Native Buckets

If a Histogram MetricPoint has both Classic and Native Buckets, the Sample for the Native Buckets MUST come first. The order ensures that implementations can easily skip the Classic Buckets if the Native Buckets are preferred.

```openmetrics-add-eof
# TYPE acme_http_request_seconds histogram
# UNIT acme_http_request_seconds seconds
# HELP acme_http_request_seconds Latency histogram of all of ACME's HTTP requests.
acme_http_request_seconds{path="/api/v1",method="GET"} {count:2,sum:1.2e2,schema:0,zero_threshold:1e-4,zero_count:0,positive_spans:[1:2],positive_buckets:[1,1]}
acme_http_request_seconds_count{path="/api/v1",method="GET"} 2
acme_http_request_seconds_sum{path="/api/v1",method="GET"} 1.2e2
acme_http_request_seconds_bucket{path="/api/v1",method="GET",le="0.5"} 1
acme_http_request_seconds_bucket{path="/api/v1",method="GET",le="1"} 2
acme_http_request_seconds_bucket{path="/api/v1",method="GET",le="+Inf"} 2
```

###### Exemplars

Exemplars without Labels MUST represent an empty LabelSet as `{}`.

An example of Exemplars showcasing several valid cases: the Histogram Sample with Native Buckets has multiple Exemplars; the "0.01" bucket has no Exemplar; the "0.1" bucket has an Exemplar with no Labels; the "1" bucket has an Exemplar with one Label; the "10" bucket has an Exemplar with a Label and a timestamp. In practice all buckets SHOULD have the same style of Exemplars.

```openmetrics-add-eof
# TYPE foo histogram
foo {count:10,sum:1.0,schema:0,zero_threshold:1e-4,zero_count:0,positive_spans:[0:2],positive_buckets:[5,5]} st@1520430000.123 # {trace_id="shaZ8oxi"} 0.67 1520879607.789 # {trace_id="ookahn0M"} 1.2 1520879608.589
foo_bucket{le="0.01"} 0 st@1520430000.123
foo_bucket{le="0.1"} 8 st@1520430000.123 # {} 0.054
foo_bucket{le="1"} 11 st@1520430000.123 # {trace_id="KOO5S4vxi0o"} 0.67
foo_bucket{le="10"} 17 st@1520430000.123 # {trace_id="oHg5SJYRHA0"} 9.8 1520879607.789
foo_bucket{le="+Inf"} 17 st@1520430000.123
foo_count 17 st@1520430000.123
foo_sum 324789.3 st@1520430000.123
```

##### GaugeHistogram with Classic Buckets

The MetricPoint's Sum Value Sample MetricName MUST have the suffix `_gsum`. The MetricPoint's Count Value Sample MetricName MUST have the suffix `_gcount`. The MetricPoint's Classic Bucket value Sample MetricNames MUST have the suffix `_bucket`. Classic Buckets MUST be sorted in increasing numerical order of "le", and the value of the "le" label MUST follow the rules for Canonical Numbers.

An example of a Metric with no labels, and one MetricPoint with no Exemplars in the buckets:

```openmetrics-add-eof
# TYPE foo gaugehistogram
foo_bucket{le="0.01"} 20.0
foo_bucket{le="0.1"} 25.0
foo_bucket{le="1"} 34.0
foo_bucket{le="10"} 34.0
foo_bucket{le="+Inf"} 42.0
foo_gcount 42.0
foo_gsum 3289.3
```

##### GaugeHistogram with Native Buckets

GaugeHistogram MetricPoints with Native Buckets follow the same syntax as Histogram MetricPoints with Native Buckets.

```openmetrics-add-eof
# TYPE acme_http_request_seconds gaugehistogram
acme_http_request_seconds{path="/api/v1",method="GET"} {count:59,sum:1.2e2,schema:7,zero_threshold:1e-4,zero_count:0,negative_spans:[1:2],negative_buckets:[5,7],positive_spans:[-1:2,3:4],positive_buckets:[5,7,10,9,8,8]} st@1520430000.123
```

##### GaugeHistogram with both Classic and Native Buckets

If a GaugeHistogram MetricPoint has both Classic and Native Buckets, the Sample for the Native Buckets MUST come first. The order ensures that implementations can easily skip the Classic Buckets if the Native Buckets are preferred.

##### Unknown

The sample metric name for the value of the MetricPoint for a MetricFamily of type Unknown MUST NOT have a suffix.

An example with a Metric with no labels and a MetricPoint with no timestamp:

```openmetrics-add-eof
# TYPE foo unknown
foo 42.23
```
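The span and bucket-list encoding described above can be unpacked mechanically. Below is a minimal Python sketch; the function name is illustrative, and it assumes the offset semantics used by Prometheus native histograms, where the first span's offset is absolute and each later offset counts the empty buckets skipped since the end of the previous span:

```python
def expand_spans(spans, buckets):
    """Expand span (offset, length) pairs plus a flat bucket-count list
    into a {bucket_index: count} mapping."""
    # The spec requires the span lengths to sum to the bucket list length.
    if sum(length for _, length in spans) != len(buckets):
        raise ValueError("span lengths must sum to the bucket list length")
    counts = {}
    position = 0  # running position within `buckets`
    index = 0     # absolute bucket index
    for i, (offset, length) in enumerate(spans):
        index = offset if i == 0 else index + offset
        for _ in range(length):
            counts[index] = buckets[position]
            index += 1
            position += 1
    return counts
```

With the `positive_spans:[-1:2,3:4]` and `positive_buckets:[5,7,10,9,8,8]` values from the example above, this yields counts at absolute indices -1, 0, 4, 5, 6 and 7.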
### Protobuf format

#### Overall Structure

Protobuf messages MUST be encoded in binary and MUST have `application/openmetrics-protobuf; version=1.0.0` as their content type. All payloads MUST be a single binary encoded MetricSet message, as defined by the OpenMetrics protobuf schema.

##### Version

The protobuf format MUST follow the proto3 version of the protocol buffer language.

##### Strings

All string fields MUST be UTF-8 encoded.

##### Timestamps

Timestamp representations in the OpenMetrics protobuf schema MUST follow the published google.protobuf.Timestamp [timestamp] message. The timestamp message MUST consist of Unix epoch seconds as an int64 and a non-negative fraction of a second at nanosecond resolution as an int32 that counts forward from the seconds component; the nanoseconds value MUST be within 0 to 999,999,999 inclusive.

#### Protobuf schema

The protobuf schema is currently available [here](https://github.com/prometheus/OpenMetrics/blob/3bb328ab04d26b25ac548d851619f90d15090e5d/proto/openmetrics_data_model.proto).

> NOTE: The Prometheus ecosystem does not support the OpenMetrics protobuf schema; instead it uses the similar `io.prometheus.client` [format](https://github.com/prometheus/client_model/blob/master/io/prometheus/client/metrics.proto). Discussions about the future of the protobuf schema in OpenMetrics 2.0 [are in progress](https://github.com/prometheus/OpenMetrics/issues/296).

## Design Considerations

### Scope

OpenMetrics is intended to provide telemetry for online systems. It runs over protocols which do not provide hard or soft real time guarantees, so it cannot make any real time guarantees itself. Latency and jitter properties of OpenMetrics are as imprecise as the underlying network, operating systems, CPUs, and the like. It is sufficiently accurate for aggregations to be used as a basis for decision-making, but not to reflect individual events.

Systems of all sizes should be supported, from applications that receive a few requests an hour up to monitoring bandwidth usage on a 400Gb network port. Aggregation and analysis of transmitted telemetry should be possible over arbitrary time periods. OpenMetrics is intended to transport snapshots of state at the time of data transmission at a regular cadence.

#### Out of scope

How ingestors discover which exposers exist, and vice versa, is out of scope for, and thus not defined in, this standard.

### Extensions and Improvements

This first version of OpenMetrics is based upon the well-established, de facto standard Prometheus text format 0.0.4, deliberately without adding major syntactic or semantic extensions or optimisations on top of it. For example, no attempt has been made to make the text representation of Histogram buckets more compact, relying instead on compression in the underlying stack to deal with their repetitive nature.

This is a deliberate choice, so that the standard can take advantage of the adoption and momentum of the existing user base. This ensures a relatively easy transition from the Prometheus text format 0.0.4. It also ensures that there is a basic standard which is easy to implement, and which can be built upon in future versions of the standard. The intention is that future versions of the standard will always require support for this 1.0 version, both syntactically and semantically.

We want to allow monitoring systems to get usable information from an OpenMetrics exposition without undue burden. If one were to strip away all metadata and structure and just look at an OpenMetrics exposition as an unordered set of samples, that set should be usable on its own. As such, there are also no opaque binary types, such as sketches or t-digests, which could not be expressed as a mix of gauges and counters and would require custom parsing and handling.

This principle is applied consistently throughout the standard. For example, it is encouraged that a MetricFamily's unit is duplicated in the name so that the unit is available for systems that don't understand the unit metadata. The "le" label is a normal label value, rather than getting its own special syntax, so that ingestors don't have to add special histogram handling code to ingest them.
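To illustrate the point about "le", a generic parser can pick up classic bucket samples with ordinary label handling and no histogram-specific code. A small Python sketch; the function name and the sample tuple shape are illustrative, not part of the standard:

```python
def collect_buckets(samples, family):
    """Gather cumulative bucket values for one histogram family by treating
    "le" as an ordinary label on ordinary samples."""
    buckets = {}
    for name, labels, value in samples:
        if name == family + "_bucket" and "le" in labels:
            buckets[labels["le"]] = value
    return buckets
```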
As a further example, there are no composite data types. For example, there is no geolocation type for latitude/longitude, as this can be done with separate gauge metrics.

### Units and Base Units

For consistency across systems and to avoid confusion, units are largely based on SI base units. Base units include seconds, bytes, joules, grams, meters, ratios, volts, amperes, and celsius. Units should be provided where they are applicable.

For example, with all duration metrics in seconds there is no risk of having to guess whether a given metric is in nanoseconds, microseconds, milliseconds, seconds, minutes, hours, days or weeks, nor of having to deal with mixed units. By choosing unprefixed units, we avoid situations like ones in which kilomilliseconds were the result of emergent behaviour of complex systems. As values can be floating point, sub-base-unit precision is built into the standard. Similarly, mixing bits and bytes is confusing, so bytes are chosen as the base. While Kelvin is a better base unit in theory, in practice most existing hardware exposes Celsius. Kilograms are the SI base unit; however, the kilo prefix is problematic, so grams are chosen as the base unit. While base units SHOULD be used in all possible cases, Kelvin is a well-established unit which MAY be used instead of Celsius for use cases such as color or black body temperatures where a comparison between a Celsius and a Kelvin metric is unlikely.

Ratios are the base unit, not percentages. Where possible, raw data in the form of gauges or counters for the given numerator and denominator should be exposed. This has better mathematical properties for analysis and aggregation in the ingestors.

Decibels are not a base unit as, firstly, deci is an SI prefix and, secondly, bels are logarithmic. To expose signal/energy/power ratios, exposing the ratio directly would be better, or better still the raw power/energy if possible. Floating point exponents are more than sufficient to cover even extreme scientific uses: from an electron volt (~1e-19 J) all the way up to the energy emitted by a supernova (~1e44 J) is 63 orders of magnitude, and a 64-bit floating point number can cover over 600 orders of magnitude.

If non-base units cannot be avoided and conversion is not feasible, the actual unit should still be included in the metric name for clarity. For example, joule is the base unit for both energy and power, as watts can be expressed as a counter with a joule unit. In practice a given 3rd party system may only expose watts, so a gauge expressed in watts would be the only realistic choice in that case.

Not all MetricFamilies have units. For example, a count of HTTP requests wouldn't have a unit. Technically the unit would be HTTP requests, but in that sense the entire MetricFamily name is the unit, and going to that extreme would not be useful. The possibility of having good axes on graphs in downstream systems for human consumption should always be kept in mind.
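The base-unit guidance above amounts to simple conversions at instrumentation time. A few illustrative Python helpers; the names are not part of the standard:

```python
def milliseconds_to_seconds(ms: float) -> float:
    """Durations are exposed in the base unit, seconds."""
    return ms / 1000.0

def kelvin_to_celsius(kelvin: float) -> float:
    """Most hardware exposes Celsius, the chosen base for temperature."""
    return kelvin - 273.15

def percent_to_ratio(percent: float) -> float:
    """Ratios, not percentages, are the base unit."""
    return percent / 100.0
```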
### Statelessness

The wire format defined by OpenMetrics is stateless across expositions. What information has been exposed before MUST have no impact on future expositions. Each exposition is a self-contained snapshot of the current state of the exposer, and the same self-contained exposition MUST be provided to existing and new ingestors.

A core design choice is that exposers MUST NOT exclude a metric merely because it has had no recent changes or observations. An exposer must not make any assumptions about how often ingestors are consuming expositions.

### Exposition Across Time and Metric Evolution

Metrics are most useful when their evolution over time can be analysed, so accordingly expositions must make sense over time; it is not sufficient for one single exposition on its own to be useful and valid. Some changes to metric semantics can also break downstream users.

Parsers commonly optimize by caching previous results. Thus, changing the order in which labels are exposed across expositions SHOULD be avoided even though it is technically not breaking. This also tends to make writing unit tests for exposition easier.

Metrics and samples SHOULD NOT appear and disappear from exposition to exposition; for example, a counter is only useful if it has history. In principle, a given Metric should be present in exposition from when the process starts until the process terminates. It is often not possible to know in advance what Metrics a MetricFamily will have over the lifetime of a given process (e.g. a label value of a latency histogram is an HTTP path, which is provided by an end user at runtime), but once a counter-like Metric is exposed it should continue to be exposed until the process terminates. That a counter is not getting increments doesn't invalidate that it still has its current value. There are cases where it may make sense to stop exposing a given Metric; see the section on Missing Data.

In general, changing a MetricFamily's type, or adding or removing a label from its Metrics, will be breaking to ingestors. A notable exception is that adding a label to the value of an Info MetricPoint is not breaking. This is so that you can add additional information to an existing Info MetricFamily where it makes sense, rather than being forced to create a brand new info metric with an additional label value. Ingestor systems should ensure that they are resilient to such additions. Changing a MetricFamily's Help is not breaking. For values where it is possible, switching between floats and ints is not breaking. Adding a new state to a StateSet is not breaking. Adding unit metadata where it doesn't change the metric name is not breaking.

Histogram buckets SHOULD NOT change from exposition to exposition, as this is likely both to cause performance issues and to break ingestors. Similarly, all expositions from any consistent binary and environment of an application SHOULD have the same buckets for a given Histogram MetricFamily, so that they can be aggregated by all ingestors without the ingestors having to implement histogram merging logic for heterogeneous buckets. An exception might be occasional manual changes to buckets, which are considered breaking but may be a valid tradeoff when performance characteristics change due to a new software release.

Even if changes are not technically breaking, they still carry a cost.
For example, frequent changes may cause performance issues for ingestors. A Help string that varies from exposition to exposition may cause each Help value to be stored. Frequently switching between int and float values could prevent efficient compression.

### NaN

NaN is a number like any other in OpenMetrics, usually resulting from a division by zero such as for a summary quantile if there have been no observations recently. NaN does not have any special meaning in OpenMetrics, and in particular MUST NOT be used as a marker for missing or otherwise bad data.

### Missing Data

There are valid cases when data stops being present. For example, a filesystem can be unmounted and thus its Gauge Metric for free disk space no longer exists. There is no special marker or signal for this situation; subsequent expositions simply do not include this Metric.

### Exposition Performance

Metrics are only useful if they can be collected in reasonable time frames. Metrics that take minutes to expose are not considered useful. As a rule of thumb, exposition SHOULD take no more than a second. Metrics from legacy systems serialized through OpenMetrics may take longer; for this reason, no hard performance assumptions can be made.

Exposition SHOULD be of the most recent state. For example, a thread serving the exposition request SHOULD NOT rely on cached values, to the extent it is able to bypass any such caching.

### Concurrency

For high availability and ad-hoc access a common approach is to have multiple ingestors. To support this, concurrent expositions MUST be supported. All best current practices for concurrent systems SHOULD be followed; common pitfalls include deadlocks, race conditions, and overly coarse-grained locking preventing expositions from progressing concurrently.

### Metric Naming and Namespaces

We aim for a balance between understandability, avoiding clashes, and succinctness in the naming of metrics and label names. Names are separated by underscores, so metric names end up being in "snake_case".

While we strongly recommend the practices recommended in this document, other metric systems have different philosophies regarding naming conventions. OpenMetrics allows such metrics to be exposed, but without the conventions and suffixes recommended here there is an increased risk of collisions and incompatibilities along the chain of services in a metrics system. Users wishing to use alternative conventions will need to take special care and expend additional effort to ensure that the entire system is consistent.

To take an example, "http_request_seconds" is succinct but would clash between large numbers of applications, and it's also unclear exactly what this metric is measuring. For example, it might be before or after auth middleware in a complex system. Metric names should indicate what piece of code they come from. So a company called A Company Manufacturing Everything might prefix all metrics in their code with "acme_", and if they had an HTTP router library measuring latency it might have a metric such as "acme_http_router_request_seconds" with a Help string indicating that it is the overall latency.

It is not the aim to prevent all potential clashes across all applications, as that would require heavy-handed solutions such as a global registry of metric namespaces or very long namespaces based on DNS. Rather, the aim is to keep to a lightweight informal approach, so that for a given application it is very unlikely that there is a clash across its constituent libraries. Across a given deployment of a monitoring system as a whole, the aim is that clashes where the same metric name means different things are uncommon.
For example, acme_http_router_request_seconds might end up in hundreds of different applications developed by A Company Manufacturing Everything, which is normal. If Another Corporation Making Entities also used the metric name acme_http_router_request_seconds in their HTTP router, that's also fine. If applications from both companies were being monitored by the same monitoring system, the clash is undesirable but acceptable, as no application is trying to expose both names and no one target is trying to (incorrectly) expose the same metric name twice. If an application wished to contain both companies' HTTP router libraries, that would be a problem, and one of the metric names would need to be changed somehow. As a corollary, the more public a library is, the better namespaced its metric names should be to reduce the risk of such scenarios arising. acme_ is not a bad choice for internal use within a company, but these companies might for example choose the prefixes acmeverything_ or acorpme_ for code shared outside their company.

After namespacing by company or organisation, namespacing and naming should continue by library/subsystem/application fractally as needed, such as the http_router library above. The goal is that if you are familiar with the overall structure of a codebase, you could make a good guess at where the instrumentation for a given metric is, given its metric name. For a common, very well known existing piece of software, the name of the software itself may be sufficiently distinguishing. For example, bind_ is probably sufficient for the DNS software, even though isc_bind_ would be the more usual naming.

Metric names prefixed by scrape_ are used by ingestors to attach information related to individual expositions, so they should not be exposed by applications directly. Metrics that have already been consumed and passed through a general purpose monitoring system may include such metric names on subsequent expositions. If an exposer wishes to provide information about an individual exposition, a metric prefix such as myexposer_scrape_ may be used. A common example is a gauge myexposer_scrape_duration_seconds for how long that exposition took from the exposer's standpoint.

Within the Prometheus ecosystem a set of per-process metrics has emerged that is consistent across all implementations, prefixed with process_. For example, for open file ulimits the MetricFamilies process_open_fds and process_max_fds provide both the current and maximum value as gauges. (These names are legacy; if such metrics were defined today they would more likely be called process_fds_open and process_fds_limit.) In general it is very challenging to get names with identical semantics like this, which is why different instrumentation should use different names.

Avoid redundancy in metric names. Avoid substrings like "metric", "timer", "stats", "counter", "total", "float64" and so on: by virtue of being a metric with a given type (and possibly unit) exposed via OpenMetrics, information like this is already implied, so it should not be included explicitly. You should not include label names of a metric in the metric name for the same reasons; in addition, subsequent aggregation of the metric by a monitoring system could make such information incorrect. Avoid including implementation details from other layers of your monitoring system in the metric names contained in your instrumentation. For example, a MetricFamily name should not contain the string "openmetrics" merely because it happens to be currently exposed via OpenMetrics somewhere, or "prometheus" merely because your current monitoring system is Prometheus.
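Some of the naming guidance above can be partially mechanised. A hypothetical Python lint sketch; the prefix, the word list, and the function names are illustrative, not normative:

```python
RESERVED_PREFIX = "scrape_"  # reserved for ingestor-attached metrics, per above
REDUNDANT_WORDS = {"metric", "timer", "stats", "counter", "total", "float64"}

def metric_name(namespace, subsystem, name, unit=None):
    """Assemble a snake_case name: company/library prefix, then the
    metric itself, with the unit duplicated in the name."""
    parts = [namespace, subsystem, name] + ([unit] if unit else [])
    return "_".join(parts)

def naming_warnings(name):
    """Return warnings for a metric name violating the guidance above."""
    warnings = []
    if name.startswith(RESERVED_PREFIX):
        warnings.append("scrape_ prefix is reserved for ingestors")
    redundant = REDUNDANT_WORDS & set(name.split("_"))
    if redundant:
        warnings.append("redundant words in name: " + ", ".join(sorted(redundant)))
    return warnings
```

For instance, `metric_name("acme", "http_router", "request", "seconds")` produces the well-namespaced `acme_http_router_request_seconds` discussed earlier, which passes the lint, while `scrape_duration_seconds` or `acme_requests_counter` would each draw a warning.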
### Label Namespacing

For label names no explicit namespacing by company or library is recommended; namespacing from the metric name is sufficient when considered against the length increase of the label name. However, some minimal care to avoid common clashes is recommended. Label names such as region, zone, cluster, availability_zone, az, datacenter, dc, owner, customer, stage, service, team, job, instance, environment, and env are highly likely to clash with labels used to identify targets which a general purpose monitoring system may add. Try to avoid them; adding minimal namespacing may be appropriate in these cases. The label name "type" is highly generic and should be avoided. For example, for HTTP-related metrics "method" would be a better label name if you were distinguishing between GET, POST, and PUT requests.

While there is metadata about metric names such as HELP, TYPE and UNIT, there is no metadata for label names, as it would bloat the format for little gain. Out-of-band documentation is one way exposers could present this to their ingestors.

### Metric Names versus Labels

There are situations in which using either multiple Metrics within a MetricFamily or multiple MetricFamilies seems to make sense. Summing or averaging a MetricFamily should be meaningful, even if it's not always useful. For example, mixing voltage and fan speed is not meaningful. As a reminder, OpenMetrics is built with the assumption that ingestors can process and perform aggregations on data. Exposing a total sum alongside other metrics is wrong, as this would result in double-counting upon aggregation in downstream ingestors:

```
wrong_metric{label="a"} 1
wrong_metric{label="b"} 6
wrong_metric{label="total"} 7
```

Labels of a Metric should be kept to the minimum needed to ensure uniqueness, as every extra label is one more that users need to consider when determining what Labels to work with downstream. Labels which could be applied to many MetricFamilies are candidates for being moved into _info metrics, similar to database normalization. If virtually all users of a Metric could be expected to want the additional label, it may be a better trade-off to add it to all MetricFamilies. For example, if you had a MetricFamily relating to different SQL statements where uniqueness was provided by a label containing a hash of the full SQL statement, it would be okay to have another label with the first 500 characters of the SQL statement for human readability.

Experience has shown that downstream ingestors find it easier to work with separate total and failure MetricFamilies rather than with {result="success"} and {result="failure"} Labels within one MetricFamily. Also, it is usually better to expose separate read & write and send & receive MetricFamilies, as full duplex systems are common and downstream ingestors are more likely to care about those values separately than in aggregate.

All of this is not as easy as it may sound. It's an area where experience and engineering trade-offs by domain-specific experts, in both exposition and the exposed system, are required to find a good balance.

### Metric and Label Name Characters

OpenMetrics builds on the existing widely adopted Prometheus text exposition format and the ecosystem which formed around it. Backwards compatibility is a core design goal. Expanding or contracting the set of characters that are supported by the Prometheus text format would work against that goal. Breaking backwards compatibility would have wider implications than just the wire format.
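For reference, the classic character set in question (assumed here from the long-standing Prometheus conventions the text builds on, and not quoted from this standard) can be written as two regular expressions:

```python
import re

# Metric names may additionally use ":" (conventionally reserved for
# ingestor-side recording rules); label names may not.
METRIC_NAME_RE = re.compile(r"[a-zA-Z_:][a-zA-Z0-9_:]*\Z")
LABEL_NAME_RE = re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*\Z")
```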
In particular, the query languages created or adopted to work with data transmitted within the Prometheus ecosystem rely on these precise character sets. Label values support full UTF-8, so the format can represent multi-lingual metrics.

### Types of Metadata

Metadata can come from different sources. Over the years, two main sources have emerged. While they are often functionally the same, it helps understanding to talk about their conceptual differences.

"Target metadata" is metadata commonly external to an exposer. Common examples would be data coming from service discovery, a CMDB, or similar, such as information about a datacenter region, whether a service is part of a particular deployment, or whether it is in production or testing. This can be captured by either the exposer or the ingestor adding labels to all Metrics. Doing this through the ingestor is preferred, as it is more flexible and carries less overhead. On flexibility, the hardware maintenance team might care about which server rack a machine is located in, whereas the database team using that same machine might care that it contains replica number 2 of the production database. On overhead, hardcoding or configuring this information needs an additional distribution path.

"Exposer metadata" is metadata coming from within an exposer. Common examples would be software version, compiler version, or Git commit SHA.

#### Supporting Target Metadata in both Push-based and Pull-based Systems

In push-based consumption, it is typical for the exposer to provide the relevant target metadata to the ingestor. In pull-based consumption the push-based approach could be taken, but more typically the ingestor already knows the metadata of the target a priori, such as from a machine database or service discovery system, and associates it with the metrics as it consumes the exposition.

OpenMetrics is stateless and provides the same exposition to all ingestors, which is in conflict with the push-style approach. In addition, the push-style approach would break pull-style ingestors, as unwanted metadata would be exposed. One approach would be for push-style ingestors to provide target metadata based on operator configuration out-of-band, for example as an HTTP header. While this would transport target metadata for push-style ingestors, and is not precluded by this standard, it has the disadvantage that, even though pull-style ingestors should use their own target metadata, it is still often useful to have access to the metadata the exposer itself is aware of.

The preferred solution is to provide this target metadata as part of the exposition, but in a way that does not impact the exposition as a whole. Info MetricFamilies are designed for this. An exposer may include an Info MetricFamily called "target" with a single Metric with no labels carrying the metadata. An example in the text format might be:

```
# TYPE target info
# HELP target Target metadata
target_info{env="prod",hostname="myhost",datacenter="sdc",region="europe",owner="frontend"} 1
```

When an exposer is providing this metric for this purpose it SHOULD be first in the exposition. This is for efficiency, so that ingestors relying on it for target metadata don't have to buffer up the rest of the exposition before applying business logic based on its content.

Exposers MUST NOT add target metadata labels to all Metrics from an exposition, unless explicitly configured to do so for a specific ingestor. Exposers MUST NOT prefix MetricFamily names or otherwise vary MetricFamily names based on target metadata.
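A sketch of an exposer emitting the target info Metric first, as recommended; the function name and label handling are illustrative, and real label values would also need escaping:

```python
def render_exposition(target_labels, body_lines):
    """Emit the target metadata Info Metric first, then the rest of the
    exposition, so ingestors need not buffer before reading it."""
    labels = ",".join('%s="%s"' % (k, v) for k, v in sorted(target_labels.items()))
    head = [
        "# TYPE target info",
        "# HELP target Target metadata",
        "target_info{%s} 1" % labels,
    ]
    return "\n".join(head + list(body_lines)) + "\n"
```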
Generally, the same Label should not appear on every Metric of an exposition, but there are rare cases where this can be the result of emergent behaviour. Similarly all MetricFamily names from an exposer may happen to share a prefix in very small expositions. For example an application written in the Go language by A Company Manufacturing Everything would likely include metrics with prefixes of acme_, go_, process_, and metric prefixes from any 3rd party libraries in use. Exposers can expose exposer metadata as Info MetricFamilies.

The above discussion is in the context of individual exposers. An exposition from a general purpose monitoring system may contain metrics from many individual targets, and thus may expose multiple target info Metrics. The metrics may already have had target metadata added to them as labels as part of ingestion. The metric names MUST NOT be varied based on target metadata. For example it would be incorrect for all metrics to end up being prefixed with staging_ even if they all originated from targets in a staging environment.

### Client Calculations and Derived Metrics

Exposers should leave any math or calculation up to ingestors. A notable exception is the Summary quantile, which is unfortunately required for backwards compatibility. Exposition should be of raw values which are useful over arbitrary time periods. As an example, you should not expose a gauge with the average rate of increase of a counter over the last 5 minutes. Letting the ingestor calculate the increase over the data points they have consumed across expositions has better mathematical properties and is more resilient to scrape failures.
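The recommendation above — expose raw counter values and leave the math to the ingestor — can be sketched as follows. This is a hypothetical ingestor-side calculation, not part of the standard; it assumes a counter reset is visible as the value decreasing between samples.

```python
def increase(samples):
    """Total increase of a counter over (timestamp, value) samples,
    tolerating counter resets (value going backwards)."""
    total = 0.0
    prev = None
    for _, value in samples:
        if prev is not None:
            # On a reset, the post-reset value itself is the increase
            # observed since the reset.
            total += value - prev if value >= prev else value
        prev = value
    return total

# Raw scraped values; the exposer never computed a rate itself.
samples = [(0, 100.0), (60, 160.0), (120, 40.0)]  # reset between t=60 and t=120
```

A precomputed 5-minute rate exposed by the application could not be recomputed over a different window or corrected after a missed scrape; raw counter values can.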
Another example is the average event size of a histogram/summary. Exposing the average rate of increase of a counter since an application started or since a Metric was created has the problems from the earlier example and it also prevents aggregation.

Standard deviation also falls into this category. Exposing a sum of squares as a counter would be the correct approach. It was not included in this standard as a Histogram value because 64bit floating point precision is not sufficient for this to work in practice. Due to the squaring only half the 53bit mantissa would be available in terms of precision. As an example a histogram observing 10k events per second would lose precision within 2 hours. Using 64bit integers would be no better due to the loss of the floating decimal point, because a nanosecond resolution integer typically tracking events of a second in length would overflow after 19 observations. This design decision can be revisited when 128bit floating point numbers become common.

Another example is to avoid exposing a request failure ratio, exposing separate counters for failed requests and total requests instead.

### Number Types

For a counter that was incremented a million times per second it would take over a century to begin to lose precision with a float64, as it has a 53 bit mantissa. Yet a 100 Gbps network interface's octet throughput precision could begin to be lost with a float64 within around 20 hours. While losing 1KB of precision over the course of years for a 100Gbps network interface is unlikely to be a problem in practice, int64s are an option for integral data with such a high throughput.
Summary quantiles must be float64, as they are estimates and thus fundamentally inaccurate.

### Exposing Timestamps

One of the core assumptions of OpenMetrics is that exposers expose the most up to date snapshot of what they're exposing. While there are limited use cases for attaching timestamps to exposed data, these are very uncommon. Data which had timestamps previously attached, in particular data which has been ingested into a general purpose monitoring system, may carry timestamps. Live or raw data should not carry timestamps. It is valid to expose the same MetricPoint value with the same timestamp across expositions, however it is invalid to do so if the underlying metric is now missing.

Time synchronization is a hard problem and data should be internally consistent in each system. As such, ingestors should be able to attach the current timestamp from their perspective to data, rather than based on the system time of the exposer device. With timestamped metrics it is not generally possible to detect the time when a Metric went missing across expositions. However with non-timestamped metrics the ingestor can use its own timestamp from the exposition where the Metric is no longer present. All of this is to say that, in general, MetricPoint timestamps should not be exposed, as it should be up to the ingestor to apply their own timestamps to samples they ingest.

#### Tracking When Metrics Last Changed

Presume you had a counter my_counter which was initialized, and then later incremented by 1 at time 123.
This would be a correct way to expose it in the text format:

```
# HELP my_counter Good increment example
# TYPE my_counter counter
my_counter_total 1
```

As per the parent section, ingestors should be free to attach their own timestamps, so this would be incorrect:

```
# HELP my_counter Bad increment example
# TYPE my_counter counter
my_counter_total 1 123
```

In case the specific time of the last change of a counter matters, this would be the correct way:

```
# HELP my_counter Good increment example
# TYPE my_counter counter
my_counter_total 1
# HELP my_counter_last_increment_timestamp_seconds When my_counter was last incremented
# TYPE my_counter_last_increment_timestamp_seconds gauge
# UNIT my_counter_last_increment_timestamp_seconds seconds
my_counter_last_increment_timestamp_seconds 123
```

By putting the timestamp of last change into its own Gauge as a value, ingestors are free to attach their own timestamp to both Metrics.

Experience has shown that exposing absolute timestamps (epoch is considered absolute here) is more robust than time elapsed, seconds since, or similar. In either case, they would be gauges. For example:

```
# TYPE my_boot_time_seconds gauge
# HELP my_boot_time_seconds Boot time of the machine
# UNIT my_boot_time_seconds seconds
my_boot_time_seconds 1256060124
```

Is better than:

```
# TYPE my_time_since_boot_seconds gauge
# HELP my_time_since_boot_seconds Time elapsed since machine booted
# UNIT my_time_since_boot_seconds seconds
my_time_since_boot_seconds 123
```

Conversely, there are no best practice restrictions on exemplar timestamps. Keep in mind that due to race conditions or time not being perfectly synced across devices, an exemplar timestamp may appear to be slightly in the future relative to an ingestor's system clock or other metrics from the same exposition.
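The pattern above — a raw counter plus a companion gauge holding the absolute wall-clock time of the last increment — might look like this in instrumentation code. This is a hypothetical plain-Python sketch, not an official client library API; class and method names are invented for illustration.

```python
import time

class TimestampedCounter:
    """Counter paired with a '<name>_last_increment_timestamp_seconds'
    gauge, instead of attaching an exposition timestamp to the sample."""

    def __init__(self, name, help_text):
        self.name, self.help_text = name, help_text
        self.value = 0.0
        self.last_increment = None  # absolute epoch seconds, a gauge value

    def inc(self, amount=1.0):
        self.value += amount
        self.last_increment = time.time()

    def render(self):
        out = [
            f"# HELP {self.name} {self.help_text}",
            f"# TYPE {self.name} counter",
            f"{self.name}_total {self.value}",  # deliberately no timestamp
        ]
        if self.last_increment is not None:
            ts = f"{self.name}_last_increment_timestamp_seconds"
            out += [
                f"# HELP {ts} When {self.name} was last incremented",
                f"# TYPE {ts} gauge",
                f"# UNIT {ts} seconds",
                f"{ts} {self.last_increment}",
            ]
        return "\n".join(out) + "\n"
```

Because neither sample carries an exposition timestamp, the ingestor remains free to attach its own timestamp to both Metrics.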
Similarly it is possible that a "st@" for a MetricPoint could appear to be slightly after an exemplar or sample timestamp for that same MetricPoint. Keep in mind that there are monitoring systems in common use which support everything from nanosecond to second resolution, so having two MetricPoints that have the same timestamp when truncated to second resolution may cause an apparent duplicate in the ingestor. In this case the MetricPoint with the earliest timestamp MUST be used.

### Thresholds

Exposing desired bounds for a system can make sense, but proper care needs to be taken. For values which are universally true, it can make sense to emit Gauge metrics for such thresholds. For example, a data center HVAC system knows the current measurements, the setpoints, and the alert setpoints. It has a globally valid and correct view of the desired system state.

As a counter example, some thresholds can change with scale, deployment model, or over time. A certain amount of CPU usage may be acceptable in one setting and undesirable in another. Aggregation of values can further change acceptable values. In such a system, exposing bounds could be counter-productive. For example the maximum size of a queue may be exposed alongside the number of items currently in the queue like:

```
# HELP acme_notifications_queue_capacity The capacity of the notifications queue.
# TYPE acme_notifications_queue_capacity gauge
acme_notifications_queue_capacity 10000
# HELP acme_notifications_queue_length The number of notifications in the queue.
# TYPE acme_notifications_queue_length gauge
acme_notifications_queue_length 42
```

### Size Limits

This standard does not prescribe any particular limits on the number of samples exposed by a single exposition, the number of labels that may be present, the number of states a stateset may have, the number of labels in an info value, or metric name/label name/label value/help character limits.
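Ingestors that do choose to impose a size limit might implement it as a simple sample-line count, since the standard later notes that the number of unique time series is roughly the number of non-comment lines in the text format. A hypothetical ingestor-side sketch (the 10k figure is the due-diligence guideline from this section, not a hard limit):

```python
def count_series(exposition: str) -> int:
    """Approximate the number of time series in a text-format exposition:
    every non-blank, non-comment line is one sample line."""
    return sum(
        1 for line in exposition.splitlines()
        if line.strip() and not line.startswith("#")
    )

def within_budget(exposition: str, limit: int = 10_000) -> bool:
    """Hypothetical ingestor-side check against a configured series budget."""
    return count_series(exposition) <= limit
```

Per the rejection rule below, an ingestor enforcing such a limit must reject the whole exposition when it is exceeded, not just the excess series.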
Specific limits run the risk of preventing reasonable use cases; for example, while a given exposition may have an appropriate number of labels, after passing through a general purpose monitoring system a few target labels may have been added that would push it over the limit. Specific limits on numbers such as these would also not capture where the real costs are for general purpose monitoring systems. These guidelines are thus both to aid exposers and ingestors in understanding what is reasonable.

On the other hand, an exposition which is too large in some dimension could cause significant performance problems compared to the benefit of the metrics exposed. Thus some guidelines on the size of any single exposition would be useful. Ingestors may choose to impose limits themselves, in particular to prevent attacks or outages. Still, ingestors need to consider reasonable use cases and try not to disproportionately impact them. If any single value/metric/exposition exceeds such limits then the whole exposition must be rejected.

In general there are three things which impact the performance of a general purpose monitoring system ingesting time series data: the number of unique time series, the number of samples over time in those series, and the number of unique strings such as metric names, label names, label values, and HELP. Ingestors can control how often they ingest, so that aspect does not need further consideration. The number of unique time series is roughly equivalent to the number of non-comment lines in the text format.
As of 2020, 10 million time series in total is considered a large amount and is commonly the order of magnitude of the upper bound of any single-instance ingestor. Any single exposition should not go above 10k time series without due diligence. One common consideration is horizontal scaling: What happens if you scale your instance count by 1-2 orders of magnitude? Having a thousand top-of-rack switches in a single deployment would have been hard to imagine 30 years ago. If a target was a singleton (e.g. exposing metrics relating to an entire cluster) then several hundred thousand time series may be reasonable. It is not the number of unique MetricFamilies or the cardinality of individual labels/buckets/statesets that matters, it is the total order of magnitude of the time series. 1,000 gauges with one Metric each are as costly as a single gauge with 1,000 Metrics.

If all targets of a particular type are exposing the same set of time series, then each additional target's strings pose no incremental cost to most reasonably modern monitoring systems. If however each target has unique strings, there is such a cost. As an extreme example, a single 10k character metric name used by many targets is on its own very unlikely to be a problem in practice. To the contrary, a thousand targets each exposing a unique 36 character UUID is over three times as expensive as that single 10k character metric name in terms of strings to be stored, assuming modern approaches. In addition, if these strings change over time, older strings will still need to be stored for at least some time, incurring extra cost. Assuming the 10 million time series from the last paragraph, 100MB of unique strings per hour might indicate that the use case is more like event logging than metric time series.

There is a hard 128 UTF-8 character limit on exemplar length, to prevent misuse of the feature for tracing span data and other event logging.
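The hard exemplar limit mentioned above can be checked mechanically. A minimal sketch, assuming (as an interpretation not spelled out in this excerpt) that the 128-character budget covers the combined length of the exemplar's label names and values:

```python
def exemplar_labels_within_limit(labels: dict) -> bool:
    """Check an exemplar's LabelSet against the hard 128 UTF-8 character
    limit, assumed here to apply to the combined length of all label
    names and values. Python len() counts code points, which is used
    here as an approximation of the character budget."""
    combined = sum(len(name) + len(value) for name, value in labels.items())
    return combined <= 128
```

A typical tracing use case, such as a single `trace_id` label carrying a 32-character hex ID, fits comfortably; stuffing whole span payloads into exemplar labels does not.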
## Security

Implementors MAY choose to offer authentication, authorization, and accounting; if they so choose, this SHOULD be handled outside of OpenMetrics. All exposer implementations SHOULD be able to secure their HTTP traffic with TLS 1.2 or later. If an exposer implementation does not support encryption, operators SHOULD use reverse proxies, firewalling, and/or ACLs where feasible.

Metric exposition should be independent of production services exposed to end users; as such, having a /metrics endpoint on ports like TCP/80, TCP/443, TCP/8080, and TCP/8443 is generally discouraged for publicly exposed services using OpenMetrics.

## IANA

While currently most implementations of the Prometheus exposition format are using non-IANA-registered ports from an informal registry at {{PrometheusPorts}}, OpenMetrics can be found on a well-defined port. The port assigned by IANA for clients exposing data is <9099 requested for historical consistency>.

If more than one metric endpoint needs to be reachable at a common IP address and port, operators might consider using a reverse proxy that communicates with exposers over localhost addresses. To ease multiplexing, endpoints SHOULD carry their own name in their path, i.e. `/node_exporter/metrics`. Expositions SHOULD NOT be combined into one exposition, for the reasons covered under "Supporting target metadata in both push-based and pull-based systems" and to allow for independent ingestion without a single point of failure.

OpenMetrics would like to register two MIME types, `application/openmetrics-text` and `application/openmetrics-proto`.