- Version: 1.0
- Status: Published
- Date: November 2020
- Authors: Richard Hartmann, Ben Kochie, Brian Brazil, Rob Skillington

Created in 2012, Prometheus has been the default for cloud-native observability since 2015. A central part of Prometheus' design is its text metric exposition format, called the Prometheus exposition format 0.0.4, stable since 2014. In this format, special care has been taken to make it easy to generate, to ingest, and to understand by humans. As of 2020, there are more than 700 publicly listed exporters, an unknown number of unlisted exporters, and thousands of native library integrations using this format. Dozens of ingestors from various projects and companies support consuming it.

With OpenMetrics, we are cleaning up and tightening the specification with the express purpose of bringing it into IETF. We are documenting a working standard with wide and organic adoption while introducing minimal, largely backwards-compatible, and well-considered changes. As of 2020, dozens of exporters, integrations, and ingestors use and preferentially negotiate OpenMetrics already. Given the wide adoption and significant coordination requirements in the ecosystem, sweeping changes to either the Prometheus exposition format 0.0.4 or OpenMetrics 1.0 are considered out of scope.

> NOTE: OpenMetrics 2.0 development is in progress. Read [here](https://github.com/prometheus/OpenMetrics/issues/276) on how to join the Prometheus OM 2.0 work group.

## Overview

Metrics are a specific kind of telemetry data. They represent a snapshot of the current state for a set of data. They are distinct from logs or events, which focus on records or information about individual events.

OpenMetrics is primarily a wire format, independent of any particular transport for that format. The format is expected to be consumed on a regular basis and to be meaningful over successive expositions.
Implementers MUST expose metrics in the OpenMetrics text format in response to a simple HTTP GET request to a documented URL for a given process or device. This endpoint SHOULD be called "/metrics". Implementers MAY also expose OpenMetrics formatted metrics in other ways, such as by regularly pushing metric sets to an operator-configured endpoint over HTTP.

### Metrics and Time Series

This standard expresses all system states as numerical values; counts, current values, enumerations, and boolean states being common examples. Contrary to metrics, singular events occur at a specific time. Metrics tend to aggregate data temporally. While this can lose information, the reduction in overhead is an engineering trade-off commonly chosen in many modern monitoring systems.

Time series are a record of changing information over time. While time series can support arbitrary strings or binary data, only numeric data is in scope for this RFC.

Common examples of metric time series would be network interface counters, device temperatures, BGP connection states, and alert states.

## Data Model

This section MUST be read together with the ABNF section. In case of disagreements between the two, the ABNF's restrictions MUST take precedence. This reduces repetition as the text wire format MUST be supported.

### Data Types

#### Values

Metric values in OpenMetrics MUST be either floating points or integers. Note that ingestors of the format MAY only support float64. The non-real values NaN, +Inf and -Inf MUST be supported. NaN MUST NOT be considered a missing value, but it MAY be used to signal a division by zero.

##### Booleans

Boolean values MUST follow `1==true`, `0==false`.

#### Timestamps

Timestamps MUST be Unix Epoch in seconds. Negative timestamps MAY be used.

#### Strings

Strings MUST only consist of valid UTF-8 characters and MAY be zero length. NULL (ASCII 0x0) MUST be supported.

#### Label

Labels are key-value pairs consisting of strings.
Label names beginning with underscores are RESERVED and
MUST NOT be used unless specified by this standard. Label names MUST follow the restrictions in the ABNF section. Empty label values SHOULD be treated as if the label was not present.

#### LabelSet

A LabelSet MUST consist of Labels and MAY be empty. Label names MUST be unique within a LabelSet.

#### MetricPoint

Each MetricPoint consists of a set of values, depending on the MetricFamily type.

#### Exemplars

Exemplars are references to data outside of the MetricSet. A common use case are IDs of program traces. Exemplars MUST consist of a LabelSet and a value, and MAY have a timestamp. They MAY each be different from the MetricPoints' LabelSet and timestamp. The combined length of the label names and values of an Exemplar's LabelSet MUST NOT exceed 128 UTF-8 character code points. Other characters in the text rendering of an exemplar such as `",=` are not included in this limit for implementation simplicity and for consistency between the text and proto formats. Ingestors MAY discard exemplars.

#### Metric

Metrics are defined by a unique LabelSet within a MetricFamily. Metrics MUST contain a list of one or more MetricPoints. Metrics with the same name for a given MetricFamily SHOULD have the same set of label names in their LabelSet. MetricPoints SHOULD NOT have explicit timestamps. If more than one MetricPoint is exposed for a Metric, then its MetricPoints MUST have monotonically increasing timestamps.

#### MetricFamily

A MetricFamily MAY have zero or more Metrics. A MetricFamily MUST have a name, HELP, TYPE, and UNIT metadata. Every Metric within a MetricFamily MUST have a unique LabelSet.
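The 128-code-point limit in the Exemplars section above applies only to the combined length of an exemplar's label names and values, not to quoting or separator characters. A minimal sketch of that check in Python (the function name is illustrative, not part of the spec):

```python
def exemplar_labels_ok(labels: dict) -> bool:
    """Check the exemplar LabelSet limit: the combined length of label
    names and values MUST NOT exceed 128 UTF-8 character code points.
    Python's len() on str counts code points, which is what the spec
    measures; characters such as ",= are deliberately excluded."""
    return sum(len(name) + len(value) for name, value in labels.items()) <= 128
```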
##### Name

MetricFamily names are a string and MUST be unique within a MetricSet. Names SHOULD be in snake_case. Metric names MUST follow the restrictions in the ABNF section.

Colons in MetricFamily names are RESERVED to signal that the MetricFamily is the result of a calculation or aggregation of a general purpose monitoring system. MetricFamily names beginning with underscores are RESERVED and MUST NOT be used unless specified by this standard.

###### Suffixes

The name of a MetricFamily MUST NOT result in a potential clash for sample metric names as per the ABNF with another MetricFamily in the Text Format within a MetricSet. An example would be a gauge called "foo_created" as a counter called "foo" could create a "foo_created" in the text format. Exposers SHOULD avoid names that could be confused with the suffixes that text format sample metric names use.

Suffixes for the respective types are:

* Counter: `_total`, `_created`
* Summary: `_count`, `_sum`, `_created`, `` (empty)
* Histogram: `_count`, `_sum`, `_bucket`, `_created`
* GaugeHistogram: `_gcount`, `_gsum`, `_bucket`
* Info: `_info`
* Gauge: `` (empty)
* StateSet: `` (empty)
* Unknown: `` (empty)

##### Type

Type specifies the MetricFamily type. Valid values are "unknown", "gauge", "counter", "stateset", "info", "histogram", "gaugehistogram", and "summary".

##### Unit

Unit specifies MetricFamily units. If non-empty, it MUST be a suffix of the MetricFamily name separated by an underscore. Be aware that further generation rules might make it an infix in the text format.

##### Help

Help is a string and SHOULD be non-empty. It is used to give a brief description of the MetricFamily for human consumption and SHOULD be short enough to be used as a tooltip.

##### MetricSet

A MetricSet is the top level object exposed
by OpenMetrics. It MUST consist of MetricFamilies and MAY be empty. Each MetricFamily name MUST be unique. The same label name and value SHOULD NOT appear on every Metric within a MetricSet.

There is no specific ordering of MetricFamilies required within a MetricSet. An exposer MAY make an exposition easier to read for humans, for example sort alphabetically if the performance tradeoff makes sense.

If present, an Info MetricFamily called "target" per the "Supporting target metadata in both push-based and pull-based systems" section below SHOULD be first.

### Metric Types

#### Gauge

Gauges are current measurements, such as bytes of memory currently used or the number of items in a queue. For gauges, the absolute value is what is of interest to a user.

A MetricPoint in a Metric with the type gauge MUST have a single value.

Gauges MAY increase, decrease, or stay constant over time. Even if they only ever go in one direction, they might still be gauges and not counters. The size of a log file would usually only increase, a resource might decrease, and the limit of a queue size may be constant.

A gauge MAY be used to encode an enum where the enum has many states and changes over time; it is the most efficient but least user friendly encoding.

#### Counter

Counters measure discrete events. Common examples are the number of HTTP requests received, CPU seconds spent, or bytes sent. For counters, how quickly they are increasing over time is what is of interest to a user.

A MetricPoint in a Metric with the type Counter MUST have one value called Total. A Total is a non-NaN and MUST be monotonically non-decreasing over time, starting from 0.
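Because a Total is monotonically non-decreasing, an observed drop between two scrapes can only mean the counter was reset. A common ingestor-side sketch (illustrative, not mandated by the spec) for computing the increase between two scrapes:

```python
def counter_increase(previous_total: float, current_total: float) -> float:
    """Increase of a counter's Total between two scrapes. A Total MUST
    be monotonically non-decreasing and starts from 0, so a value lower
    than the previous one implies a reset; the increase since the reset
    is then the current value itself."""
    if current_total < previous_total:
        return current_total  # counter was reset to 0 in between
    return current_total - previous_total
```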
A MetricPoint in a Metric with the type Counter SHOULD have a Timestamp value called Created. This can help ingestors discern between new metrics and long-running ones it did not see before.

A MetricPoint in a Metric's Counter's Total MAY reset to 0. If present, the corresponding Created time MUST also be set to the timestamp of the reset.

A MetricPoint in a Metric's Counter's Total MAY have an exemplar.

#### StateSet

StateSets represent a series of related boolean values, also called a bitset. If ENUMs need to be encoded this MAY be done via StateSet.

A point of a StateSet metric MAY contain multiple states and MUST contain one boolean per State. States have a name which are Strings.

A StateSet Metric's LabelSet MUST NOT have a label name which is the same as the name of its MetricFamily.

If encoded as a StateSet, ENUMs MUST have exactly one Boolean which is true within a MetricPoint. This is suitable where the enum value changes over time, and the number of States isn't much more than a handful.

MetricFamilies of type StateSets MUST have an empty Unit string.

#### Info

Info metrics are used to expose textual information which SHOULD NOT change during process lifetime. Common examples are an application's version, revision control commit, and the version of a compiler.

A MetricPoint of an Info Metric contains a LabelSet. An Info MetricPoint's LabelSet MUST NOT have a label name which is the same as the name of a label of the LabelSet of its Metric.

Info MAY be used to encode ENUMs whose values do not
change over time, such as the type of a network interface.

MetricFamilies of type Info MUST have an empty Unit string.

#### Histogram

Histograms measure distributions of discrete events. Common examples are the latency of HTTP requests, function runtimes, or I/O request sizes.

A Histogram MetricPoint MUST contain at least one bucket, and SHOULD contain Sum, and Created values. Every bucket MUST have a threshold and a value.

Histogram MetricPoints MUST have one bucket with an +Inf threshold. Buckets MUST be cumulative. As an example, for a metric representing request latency in seconds, its values for buckets with thresholds 1, 2, 3, and +Inf MUST follow value_1 <= value_2 <= value_3 <= value_+Inf. If ten requests took 1 second each, the values of the 1, 2, 3, and +Inf buckets MUST equal 10. The +Inf bucket counts all requests.

If present, the Sum value MUST equal the Sum of all the measured event values. Bucket thresholds within a MetricPoint MUST be unique.

Semantically, Sum, and buckets values are counters so MUST NOT be NaN or negative. Negative threshold buckets MAY be used, but then the Histogram MetricPoint MUST NOT contain a sum value as it would no longer be a counter semantically. Bucket thresholds MUST NOT equal NaN. Count and bucket values MUST be integers.

A Histogram MetricPoint SHOULD have a Timestamp value called Created. This can help ingestors discern between new metrics and long-running ones it did not see before.

A Histogram's Metric's LabelSet MUST NOT have a "le" label name.

Bucket values MAY have exemplars.
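The bucket rules above can be checked mechanically. A minimal validation sketch in Python, assuming buckets are given as (threshold, cumulative count) pairs sorted by threshold (the helper is illustrative, not part of the spec):

```python
import math

def histogram_buckets_ok(buckets: list) -> bool:
    """Validate the Histogram bucket rules from this section: at least
    one bucket, unique non-NaN thresholds, a +Inf bucket, and
    cumulative (non-decreasing), non-negative integer counts."""
    if not buckets:
        return False
    thresholds = [t for t, _ in buckets]
    counts = [c for _, c in buckets]
    return (
        len(set(thresholds)) == len(thresholds)
        and not any(math.isnan(t) for t in thresholds)
        and thresholds[-1] == math.inf
        and all(isinstance(c, int) and c >= 0 for c in counts)
        and all(a <= b for a, b in zip(counts, counts[1:]))
    )
```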
Buckets are cumulative to allow monitoring systems to drop any non-+Inf bucket for performance/anti-denial-of-service reasons in a way that loses granularity but is still a valid Histogram.

Each bucket covers the values less than or equal to it, and the value of the exemplar MUST be within this range. Exemplars SHOULD be put into the bucket with the highest value. A bucket MUST NOT have more than one exemplar.

#### GaugeHistogram

GaugeHistograms measure current distributions. Common examples are how long items have been waiting in a queue, or the size of the requests in a queue.

A GaugeHistogram MetricPoint MUST have one bucket with an +Inf threshold, and SHOULD contain a Gsum value. Every bucket MUST have a threshold and a value.

The buckets for a GaugeHistogram follow all the same rules as for a Histogram.

The bucket and Gsum of a GaugeHistogram are conceptually gauges, however bucket values MUST NOT be negative or NaN. If negative threshold buckets are present, then sum MAY be negative. Gsum MUST NOT be NaN. Bucket values MUST be integers.

A GaugeHistogram's Metric's LabelSet MUST NOT have a "le" label name.

Bucket values can have exemplars. Each bucket covers the values less than or equal to it, and the value of the exemplar MUST be within this range. Exemplars SHOULD be put into the bucket with the highest value. A bucket MUST NOT have more than one exemplar.

#### Summary

Summaries also measure distributions of discrete events and MAY be used when Histograms are too expensive and/or an average event size is sufficient.

They MAY also be used for backwards compatibility, because some existing instrumentation libraries expose precomputed quantiles and do not support Histograms. Precomputed quantiles SHOULD NOT be used, because quantiles are not aggregatable and the user
often cannot deduce what timeframe they cover.

A Summary MetricPoint MAY consist of a Count, Sum, Created, and a set of quantiles.

Semantically, Count and Sum values are counters so MUST NOT be NaN or negative. Count MUST be an integer.

A MetricPoint in a Metric with the type Summary which contains Count or Sum values SHOULD have a Timestamp value called Created. This can help ingestors discern between new metrics and long-running ones it did not see before. Created MUST NOT relate to the collection period of quantile values.

Quantiles are a map from a quantile to a value. An example is a quantile 0.95 with value 0.2 in a metric called myapp_http_request_duration_seconds, which means that the 95th percentile latency is 200ms over an unknown timeframe. If there are no events in the relevant timeframe, the value for a quantile MUST be NaN. A Quantile's Metric's LabelSet MUST NOT have a "quantile" label name. Quantiles MUST be between 0 and 1 inclusive. Quantile values MUST NOT be negative. Quantile values SHOULD represent the recent values. Commonly this would be over the last 5-10 minutes.

#### Unknown

Unknown SHOULD NOT be used. Unknown MAY be used when it is impossible to determine the types of individual metrics from 3rd party systems.

A point in a metric with the unknown type MUST have a single value.

## Data transmission & wire formats

The text wire format MUST be supported and is the default. The protobuf wire format MAY be supported and MUST only be used after negotiation.

The OpenMetrics formats are Regular Chomsky Grammars, making writing quick and small parsers possible.
The text format compresses well, and protobuf is already binary and efficiently encoded. Partial or invalid expositions MUST be considered erroneous in their entirety.

### Protocol Negotiation

All ingestor implementations MUST be able to ingest data secured with TLS 1.2 or later. All exposers SHOULD be able to emit data secured with TLS 1.2 or later. Ingestor implementations SHOULD be able to ingest data from HTTP without TLS. All implementations SHOULD use TLS to transmit data.

Negotiation of what version of the OpenMetrics format to use is out-of-band. For example, for pull-based exposition over HTTP standard HTTP content type negotiation is used, and MUST default to the oldest version of the standard (i.e. 1.0.0) if no newer version is requested. Push-based negotiation is inherently more complex, as the exposer typically initiates the connection. Producers MUST use the oldest version of the standard (i.e. 1.0.0) unless requested otherwise by the ingestor.

### Text format

#### ABNF

ABNF as per RFC 5234. "exposition" is the top level token of the ABNF.

```abnf
exposition = metricset HASH SP eof [ LF ]

metricset = *metricfamily

metricfamily = *metric-descriptor *metric

metric-descriptor = HASH SP type SP metricname SP metric-type LF
metric-descriptor =/ HASH SP help SP metricname SP escaped-string LF
metric-descriptor =/ HASH SP unit SP metricname SP *metricname-char LF

metric = *sample

metric-type = counter / gauge / histogram / gaugehistogram / stateset
metric-type =/ info / summary / unknown

sample = metricname [labels] SP number [SP timestamp] [exemplar] LF

exemplar = SP HASH SP labels SP number [SP timestamp]

labels = "{" [label *(COMMA label)] "}"

label = label-name EQ DQUOTE escaped-string DQUOTE

number = realnumber
; Case insensitive
number =/ [SIGN] ("inf" / "infinity")
number =/ "nan"

timestamp = realnumber

; Not 100% sure this captures all float corner cases.
; Leading 0s explicitly okay
realnumber = [SIGN] 1*DIGIT
realnumber =/ [SIGN] 1*DIGIT ["." *DIGIT] [ "e" [SIGN] 1*DIGIT ]
realnumber =/ [SIGN] *DIGIT "." 1*DIGIT [ "e" [SIGN] 1*DIGIT ]

; RFC 5234 is case insensitive.

; Uppercase
eof = %d69.79.70
type = %d84.89.80.69
help = %d72.69.76.80
unit = %d85.78.73.84

; Lowercase
counter = %d99.111.117.110.116.101.114
gauge = %d103.97.117.103.101
histogram = %d104.105.115.116.111.103.114.97.109
gaugehistogram = gauge histogram
stateset = %d115.116.97.116.101.115.101.116
info = %d105.110.102.111
summary = %d115.117.109.109.97.114.121
unknown = %d117.110.107.110.111.119.110

BS = "\"
EQ = "="
COMMA = ","
HASH = "#"
SIGN = "-" / "+"

metricname = metricname-initial-char 0*metricname-char
metricname-char = metricname-initial-char / DIGIT
metricname-initial-char = ALPHA / "_" / ":"

label-name = label-name-initial-char *label-name-char
label-name-char = label-name-initial-char / DIGIT
label-name-initial-char = ALPHA / "_"

escaped-string = *escaped-char

escaped-char = normal-char
escaped-char =/ BS ("n" / DQUOTE / BS)
escaped-char =/ BS normal-char

; Any unicode character, except newline, double quote, and backslash
normal-char = %x00-09 / %x0B-21 / %x23-5B / %x5D-D7FF / %xE000-10FFFF
```

#### Overall Structure

UTF-8 MUST be used. Byte order markers (BOMs) MUST NOT be used. As an important reminder for implementers, byte 0 is valid UTF-8 while, for example, byte 255 is not.
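To illustrate how small a parser for this grammar can be, here is a rough Python sketch of the `sample` rule only. It is deliberately simplified and not part of the spec: it ignores exemplars, does not unescape label values, and its label regex does not handle `}` inside escaped label values.

```python
import re

# Simplified shape of: sample = metricname [labels] SP number [SP timestamp]
_SAMPLE = re.compile(
    r'^(?P<name>[A-Za-z_:][A-Za-z0-9_:]*)'   # metricname
    r'(?P<labels>\{[^}]*\})?'                # optional {label="value",...}
    r' (?P<value>\S+)'                       # number
    r'(?: (?P<timestamp>\S+))?$'             # optional timestamp
)

def parse_sample(line: str) -> dict:
    """Split one sample line into its name, labels, value, and
    timestamp fields, all returned as raw strings (or None)."""
    match = _SAMPLE.match(line)
    if match is None:
        raise ValueError(f"not a sample line: {line!r}")
    return match.groupdict()
```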
The content type MUST be:

```
application/openmetrics-text; version=1.0.0; charset=utf-8
```

Line endings MUST be signalled with line feed (\n) and MUST NOT contain carriage returns (\r). Expositions MUST end with EOF and SHOULD end with `EOF\n`.

An example of a complete exposition:

```
# TYPE acme_http_router_request_seconds summary
# UNIT acme_http_router_request_seconds seconds
# HELP acme_http_router_request_seconds Latency through all of ACME's HTTP request router.
acme_http_router_request_seconds_sum{path="/api/v1",method="GET"} 9036.32
acme_http_router_request_seconds_count{path="/api/v1",method="GET"} 807283.0
acme_http_router_request_seconds_created{path="/api/v1",method="GET"} 1605281325.0
acme_http_router_request_seconds_sum{path="/api/v2",method="POST"} 479.3
acme_http_router_request_seconds_count{path="/api/v2",method="POST"} 34.0
acme_http_router_request_seconds_created{path="/api/v2",method="POST"} 1605281325.0
# TYPE go_goroutines gauge
# HELP go_goroutines Number of goroutines that currently exist.
go_goroutines 69
# TYPE process_cpu_seconds counter
# UNIT process_cpu_seconds seconds
# HELP process_cpu_seconds Total user and system CPU time spent in seconds.
process_cpu_seconds_total 4.20072246e+06
# EOF
```

##### Escaping

Where the ABNF notes escaping, the following escaping MUST be applied:

* Line feed (0x0A) -> literally `\n` (bytes 0x5c 0x6e)
* Double quote -> `\"` (bytes 0x5c 0x22)
* Backslash -> `\\` (bytes 0x5c 0x5c)

A double backslash SHOULD be used to represent a backslash character. A single backslash SHOULD NOT be used for undefined escape sequences. As an example, `\\a` is equivalent and preferable to `\a`.

##### Numbers

Integer numbers MUST NOT have a decimal point. Examples are `23`, `0042`, and `1341298465647914`.

Floating point numbers MUST be represented either with a decimal point or using scientific notation. Examples are `8903.123421` and `1.89e-7`.
Floating point numbers MUST fit within the range of a 64-bit floating point value as defined by IEEE 754, but MAY require so many bits in the mantissa that precision is lost. This MAY be used to encode nanosecond resolution timestamps.

Arbitrary integer and floating point rendering of numbers MUST NOT be used for "quantile" and "le" label values, as per the section "Canonical Numbers". They MAY be used anywhere else numbers are used.

###### Considerations: Canonical Numbers

Numbers in the "le" label values of histograms and "quantile" label values of summary metrics are special in that they're label values, and label values are intended to be opaque. As end users will likely directly interact
with these string values, and as many monitoring systems lack the ability to deal with them as first-class numbers, it would be beneficial if a given number had the exact same text representation. Consistency is highly desirable, but real world implementations of languages and their runtimes make mandating this impractical.

The most important common quantiles are 0.5, 0.95, 0.9, 0.99, and 0.999, and bucket values representing values from a millisecond up to 10.0 seconds, because those cover cases like latency SLAs and Apdex for typical web services. Powers of ten are covered to try to ensure that the switch between fixed point and exponential rendering is consistent, as this varies across runtimes.

The target rendering is equivalent to the default Go rendering of float64 values (i.e. %g), with a .0 appended in case there is no decimal point or exponent to make clear that they are floats.

Exposers MUST produce output for positive infinity as +Inf.

Exposers SHOULD produce output for the values 0.0 up to 10.0 in 0.001 increments in line with the following examples:

```
0.0 0.001 0.002 0.01 0.1 0.9 0.95 0.99 0.999 1.0 1.7 10.0
```

Exposers SHOULD produce output for the values 1e-10 up to 1e+10 in powers of ten in line with the following examples:

```
1e-10 1e-09 1e-05 0.0001 0.1 1.0 100000.0 1e+06 1e+10
```

Parsers MUST NOT reject inputs which are outside of the canonical values merely because they are not consistent with the canonical values. For example, 1.1e-4 must not be rejected, even though it is not the consistent rendering of 0.00011.
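Under the assumption that C-style `%g` matches Go's `%g` rendering for this small canonical value set (it does for the examples above, though not for arbitrary floats, where `%g` is limited to six significant digits), the target rendering can be sketched as:

```python
import math

def canonical_label_number(value: float) -> str:
    """Sketch of the canonical rendering for "le" and "quantile" label
    values: %g-style rendering with ".0" appended when there is no
    decimal point or exponent, and positive infinity rendered as +Inf.
    Only intended for the canonical value set discussed above."""
    if value == math.inf:
        return "+Inf"
    rendered = "%g" % value
    if "." not in rendered and "e" not in rendered:
        rendered += ".0"
    return rendered
```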
Exposers SHOULD follow these patterns for non-canonical numbers; the intention is that by adjusting the rendering algorithm to be consistent for these values, the vast majority of other values will also have consistent rendering. Exposers using only a few particular le/quantile values could also hardcode them.

In languages such as C where a minimal floating point rendering algorithm such as Grisu3 is not readily available, exposers MAY use a different rendering. A warning to implementers in C and other languages that share its printf implementation: the standard precision of %f, %e and %g is only six significant digits. 17 significant digits are required for full precision, e.g. `printf("%.17g", d)`.

##### Timestamps

Timestamps SHOULD NOT use exponential float rendering if nanosecond precision is needed, as rendering of a float64 does not have sufficient precision, e.g. `1604676851.123456789`.

#### MetricFamily

There MUST NOT be an explicit separator between MetricFamilies. The next MetricFamily MUST be signalled with either metadata or a new sample metric name which cannot be part of the previous MetricFamily.

MetricFamilies MUST NOT be interleaved.

##### MetricFamily metadata

There are four pieces of metadata: the MetricFamily name, TYPE, UNIT and HELP. An example of the metadata for a counter Metric called foo is:

```
# TYPE foo counter
```

If no TYPE is exposed, the MetricFamily MUST be of type Unknown.

If a unit is specified it MUST be provided in a UNIT metadata line. In addition, an underscore and the unit MUST be the suffix of the MetricFamily name.

A valid example for a foo_seconds metric with a unit of "seconds":

```
# TYPE foo_seconds counter
# UNIT foo_seconds seconds
```

An invalid example, where the unit is not a suffix on the name:

```
# TYPE foo counter
# UNIT foo seconds
```

It is also valid to have:

```
# TYPE foo_seconds counter
```

If the unit is known it SHOULD be provided.

The value of a UNIT or HELP line MAY be empty. This MUST be treated as if no metadata line for the MetricFamily existed.

```
# TYPE foo_seconds counter
# UNIT foo_seconds seconds
# HELP foo_seconds Some text and \n some \" escaping
```

There MUST NOT be more than one of each type of metadata line for a MetricFamily. The ordering SHOULD be TYPE, UNIT, HELP.

Aside from this metadata and the EOF line at the end of the message, you MUST NOT expose lines beginning with a #.

##### Metric

Metrics MUST NOT be interleaved. See the example in "Text format -> MetricPoint".

###### Labels

A sample without labels or a timestamp and with the value 0 MUST be rendered either like:

```
bar_seconds_count 0
```

or like:

```
bar_seconds_count{} 0
```

Label values MAY be any valid UTF-8 value, so escaping MUST be applied as per the ABNF. A valid example with two labels:

```
bar_seconds_count{a="x",b="escaping\" example \n "} 0
```

The rendering of values for a MetricPoint can include additional labels (e.g. the "le" label for a Histogram type), which MUST be rendered in the same way as a Metric's own LabelSet.

#### MetricPoint

MetricPoints MUST NOT be interleaved.
A correct example where there are multiple MetricPoints and Samples within a MetricFamily would be:

```
# TYPE foo_seconds summary
# UNIT foo_seconds seconds
foo_seconds_count{a="bb"} 0 123
foo_seconds_sum{a="bb"} 0 123
foo_seconds_count{a="bb"} 0 456
foo_seconds_sum{a="bb"} 0 456
foo_seconds_count{a="ccc"} 0 123
foo_seconds_sum{a="ccc"} 0 123
foo_seconds_count{a="ccc"} 0 456
foo_seconds_sum{a="ccc"} 0 456
```

An incorrect example where Metrics are interleaved:

```
# TYPE foo_seconds summary
# UNIT foo_seconds seconds
foo_seconds_count{a="bb"} 0 123
foo_seconds_count{a="ccc"} 0 123
foo_seconds_count{a="bb"} 0 456
foo_seconds_count{a="ccc"} 0 456
```

An incorrect example where MetricPoints are interleaved:

```
# TYPE foo_seconds summary
# UNIT foo_seconds seconds
foo_seconds_count{a="bb"} 0 123
foo_seconds_count{a="bb"} 0 456
foo_seconds_sum{a="bb"} 0 123
foo_seconds_sum{a="bb"} 0 456
```

#### Metric types

##### Gauge

The Sample MetricName for the value of a MetricPoint for a MetricFamily of type Gauge MUST NOT have a suffix.

An example MetricFamily with a Metric with no labels and a MetricPoint with no timestamp:

```
# TYPE foo gauge
foo 17.0
```

An example of a MetricFamily with two Metrics with a label and MetricPoints with no timestamp:

```
# TYPE foo gauge
foo{a="bb"} 17.0
foo{a="ccc"} 17.0
```

An example of a MetricFamily with no Metrics:

```
# TYPE foo gauge
```

An example with a Metric with a label and a MetricPoint with a timestamp:

```
# TYPE foo gauge
foo{a="b"} 17.0 1520879607.789
```

An example with a Metric with no labels and a MetricPoint with a timestamp:

```
# TYPE foo gauge
foo 17.0 1520879607.789
```

An example with a Metric with no labels and two MetricPoints with timestamps:

```
# TYPE foo gauge
foo 17.0 123
foo 18.0 456
```

##### Counter

The MetricPoint's Total Value Sample MetricName MUST have the suffix `_total`.
If present the MetricPoint's Created Value Sample MetricName MUST have the suffix `_created`.

An example with a Metric with no labels, and a MetricPoint with no timestamp and no created:

```
# TYPE foo counter
foo_total 17.0
```

An example with a Metric with no labels, and a MetricPoint with a timestamp and no created:

```
# TYPE foo counter
foo_total 17.0 1520879607.789
```

An example with a Metric with no labels, and a MetricPoint with no timestamp and a created:

```
# TYPE foo counter
foo_total 17.0
foo_created 1520430000.123
```

An example with a Metric with no labels, and a MetricPoint with a timestamp and a created:

```
# TYPE foo counter
foo_total 17.0 1520879607.789
foo_created 1520430000.123 1520879607.789
```

Exemplars MAY be attached to the MetricPoint's Total sample.

##### StateSet

The Sample MetricName for the value of a MetricPoint for a MetricFamily of type StateSet MUST NOT have a suffix.

StateSets MUST have one sample per State in the MetricPoint. Each State's sample MUST have a label with the MetricFamily name as the label name and the State name as the label value. The State sample's value MUST be 1 if the State is true and MUST be 0 if the State is false.

An example with the states "a", "bb", and "ccc" in which only the state "bb" is enabled and the metric name is foo:

```
# TYPE foo stateset
foo{foo="a"} 0
foo{foo="bb"} 1
foo{foo="ccc"} 0
```

An example of an "entity" label on the Metric:

```
# TYPE foo stateset
foo{entity="controller",foo="a"} 1.0
foo{entity="controller",foo="bb"} 0.0
foo{entity="controller",foo="ccc"} 0.0
foo{entity="replica",foo="a"} 1.0
foo{entity="replica",foo="bb"} 0.0
foo{entity="replica",foo="ccc"} 1.0
```

##### Info

The Sample MetricName for the value of a MetricPoint for a MetricFamily of type Info MUST have the suffix `_info`. The Sample value MUST always be 1.
An example of a Metric with no labels, and one MetricPoint value with "name" and "version" labels:

```
# TYPE foo info
foo_info{name="pretty name",version="8.2.7"} 1
```

An example of a Metric with label "entity" and one MetricPoint value with "name" and "version" labels:

```
# TYPE foo info
foo_info{entity="controller",name="pretty name",version="8.2.7"} 1.0
foo_info{entity="replica",name="prettier name",version="8.1.9"} 1.0
```

Metric labels and MetricPoint value labels MAY be in any order.

##### Summary

If present, the MetricPoint's Sum Value Sample MetricName MUST have the suffix `_sum`. If present, the MetricPoint's Count Value Sample MetricName MUST have the suffix `_count`. If present, the MetricPoint's Created Value Sample MetricName MUST have the suffix `_created`. If present, the MetricPoint's Quantile Values MUST specify the quantile measured using a label with a label name of "quantile" and with a label value of the quantile measured.

An example of a Metric with no labels and a MetricPoint with Sum, Count and Created values:

```
# TYPE foo summary
foo_count 17.0
foo_sum 324789.3
foo_created 1520430000.123
```

An example of a Metric with no labels and a MetricPoint with two quantiles:

```
# TYPE foo summary
foo{quantile="0.95"} 123.7
foo{quantile="0.99"} 150.0
```

Quantiles MAY be in any order.

##### Histogram

The MetricPoint's Bucket Values Sample MetricNames MUST have the suffix `_bucket`. If present, the MetricPoint's Sum Value Sample MetricName MUST have the suffix `_sum`. If present, the MetricPoint's Created Value Sample MetricName MUST have the suffix `_created`. If and only if a Sum Value is present in a MetricPoint, then the MetricPoint's +Inf Bucket value
MUST also appear in a Sample with a MetricName with the suffix `_count`. Buckets MUST be sorted in number increasing order of "le", and the value of the "le" label MUST follow the rules for Canonical Numbers.

An example of a Metric with no labels and a MetricPoint with Sum, Count, and Created values, and with 12 buckets. A wide and atypical but valid variety of "le" values is shown on purpose:

```
# TYPE foo histogram
foo_bucket{le="0.0"} 0
foo_bucket{le="1e-05"} 0
foo_bucket{le="0.0001"} 5
foo_bucket{le="0.1"} 8
foo_bucket{le="1.0"} 10
foo_bucket{le="10.0"} 11
foo_bucket{le="100000.0"} 11
foo_bucket{le="1e+06"} 15
foo_bucket{le="1e+23"} 16
foo_bucket{le="1.1e+23"} 17
foo_bucket{le="+Inf"} 17
foo_count 17
foo_sum 324789.3
foo_created 1520430000.123
```

###### Exemplars

Exemplars without Labels MUST represent an empty LabelSet as {}.

An example of Exemplars showcasing several valid cases: The "0.01" bucket has no Exemplar. The "0.1" bucket has an Exemplar with no Labels. The "1" bucket has an Exemplar with one Label. The "10" bucket has an Exemplar with a Label and a timestamp. In practice all buckets SHOULD have the same style of Exemplars.

```
# TYPE foo histogram
foo_bucket{le="0.01"} 0
foo_bucket{le="0.1"} 8 # {} 0.054
foo_bucket{le="1"} 11 # {trace_id="KOO5S4vxi0o"} 0.67
foo_bucket{le="10"} 17 # {trace_id="oHg5SJYRHA0"} 9.8 1520879607.789
foo_bucket{le="+Inf"} 17
foo_count 17
foo_sum 324789.3
foo_created 1520430000.123
```

##### GaugeHistogram

The MetricPoint's Bucket Values Sample MetricNames MUST have the suffix `_bucket`. If present, the MetricPoint's Sum Value Sample MetricName MUST have the suffix `_gsum`. If and only if a Sum Value is present in a MetricPoint, then the MetricPoint's +Inf Bucket value MUST also appear in a Sample with a MetricName with the suffix `_gcount`. Buckets MUST be sorted in number increasing order of "le", and the value of the "le" label MUST follow the rules for Canonical Numbers.
An example of a Metric with no labels, and one MetricPoint value with no Exemplars in the buckets:

```
# TYPE foo gaugehistogram
foo_bucket{le="0.01"} 20.0
foo_bucket{le="0.1"} 25.0
foo_bucket{le="1"} 34.0
foo_bucket{le="10"} 34.0
foo_bucket{le="+Inf"} 42.0
foo_gcount 42.0
foo_gsum 3289.3
```

##### Unknown

The Sample MetricName for the value of a MetricPoint for a MetricFamily of type Unknown MUST NOT have a suffix.

An example with a Metric with no labels and a MetricPoint with no timestamp:

```
# TYPE foo unknown
foo 42.23
```

### Protobuf format

#### Overall Structure

Protobuf messages MUST be encoded in binary and MUST have `application/openmetrics-protobuf; version=1.0.0` as their content type.

All payloads MUST be a single binary encoded MetricSet message, as defined by the OpenMetrics protobuf schema.

##### Version

The protobuf format MUST follow the proto3 version of the protocol buffer language.

##### Strings

All string fields MUST be UTF-8 encoded.

##### Timestamps

Timestamp representations in the OpenMetrics protobuf schema MUST follow the published google.protobuf.Timestamp [timestamp] message. The timestamp message MUST be in Unix epoch seconds as an int64 and a non-negative fraction of a second at nanosecond resolution as an int32 that counts forward from the seconds timestamp component. It MUST be within 0 to 999,999,999 inclusive.

#### Protobuf schema

The protobuf schema is currently available [here](https://github.com/prometheus/OpenMetrics/blob/3bb328ab04d26b25ac548d851619f90d15090e5d/proto/openmetrics_data_model.proto).

> NOTE: Prometheus and its ecosystem do not support the OpenMetrics protobuf schema; instead they use the similar `io.prometheus.client` [format](https://github.com/prometheus/client_model/blob/master/io/prometheus/client/metrics.proto). Discussions about the future of the protobuf schema in OpenMetrics 2.0 [are in progress](https://github.com/prometheus/OpenMetrics/issues/296).
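The timestamp rule above (int64 seconds plus an int32 nanos field that always counts forward, even for times before the epoch) can be sketched as follows; the function name is illustrative only:

```python
import math

def to_proto_timestamp(unix_time: float) -> tuple[int, int]:
    """Split a Unix timestamp into google.protobuf.Timestamp-style
    (seconds, nanos) fields. nanos counts forward from the seconds
    component and must stay within 0 to 999,999,999 inclusive."""
    seconds = math.floor(unix_time)
    nanos = round((unix_time - seconds) * 1e9)
    if nanos == 1_000_000_000:  # rounding spilled into the next second
        seconds, nanos = seconds + 1, 0
    return int(seconds), int(nanos)
```

Note how a pre-epoch time such as -1.25 becomes (-2, 750000000): seconds is floored downwards, and nanos remains non-negative.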
## Design Considerations

### Scope

OpenMetrics is intended to provide telemetry for online systems. It runs over protocols
which do not provide hard or soft real time guarantees, so it can not make any real time guarantees itself. Latency and jitter properties of OpenMetrics are as imprecise as the underlying network, operating systems, CPUs, and the like. It is sufficiently accurate for aggregations to be used as a basis for decision-making, but not to reflect individual events.

Systems of all sizes should be supported, from applications that receive a few requests an hour up to monitoring bandwidth usage on a 400Gb network port. Aggregation and analysis of transmitted telemetry should be possible over arbitrary time periods.

It is intended to transport snapshots of state at the time of data transmission at a regular cadence.

#### Out of scope

How ingestors discover which exposers exist, and vice-versa, is out of scope for and thus not defined in this standard.

### Extensions and Improvements

This first version of OpenMetrics is based upon the well-established de facto standard Prometheus text format 0.0.4, deliberately without adding major syntactic or semantic extensions, or optimisations, on top of it. For example, no attempt has been made to make the text representation of Histogram buckets more compact, relying instead on compression in the underlying stack to deal with their repetitive nature.

This is a deliberate choice, so that the standard can take advantage of the adoption and momentum of the existing user base. It ensures a relatively easy transition from the Prometheus text format 0.0.4, and that there is a basic standard which is easy to implement. This can be built upon in future versions of the standard. The intention is that future versions of the standard will always require support for this 1.0 version, both syntactically and semantically.

We want to allow monitoring systems to get usable information from an OpenMetrics exposition without undue burden.
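To illustrate how little machinery a consumer needs, a deliberately minimal sketch can treat an exposition as not much more than a set of samples. This is an illustration, not a conforming parser — it ignores metadata entirely and does not handle escaping or exemplars:

```python
def naive_samples(exposition: str) -> list[tuple[str, float]]:
    """Extract (series, value) pairs, skipping '#' metadata lines.

    Caveat: splitting on spaces breaks for label values that contain
    spaces; a real parser must follow the ABNF."""
    out = []
    for line in exposition.splitlines():
        if not line or line.startswith("#"):
            continue  # TYPE/UNIT/HELP/EOF lines and blanks
        series, value, *_timestamp = line.split(" ")
        out.append((series, float(value)))
    return out

text = '# TYPE foo gauge\nfoo{a="bb"} 17.0\nfoo{a="ccc"} 18.0\n# EOF\n'
samples = naive_samples(text)
```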
If one were to strip away all metadata and structure and just look at an OpenMetrics exposition as an unordered set of samples, it should still be usable on its own. As such, there are also no opaque binary types, such as sketches or t-digests, which could not be expressed as a mix of gauges and counters, as they would require custom parsing and handling.

This principle is applied consistently throughout the standard. For example, a MetricFamily's unit is duplicated in the name so that the unit is available to systems that don't understand the unit metadata. The "le" label is a normal label value, rather than getting its own special syntax, so that ingestors don't have to add special histogram handling code to ingest them. As a further example, there are no composite data types: there is no geolocation type for latitude/longitude, as this can be done with separate gauge metrics.

### Units and Base Units

For consistency across systems and to avoid confusion, units are largely based on SI base units. Base units include seconds, bytes, joules, grams, meters, ratios, volts, amperes, and celsius. Units should be provided where they are applicable.

For example, with all duration metrics in seconds, there is no risk of having to guess whether a given metric is in nanoseconds, microseconds, milliseconds, seconds, minutes, hours, days, or weeks, nor of having to deal with mixed units. By choosing unprefixed units, we avoid situations like ones in which kilomilliseconds were the result
of emergent behaviour of complex systems. As values can be floating point, sub-base-unit precision is built into the standard.

Similarly, mixing bits and bytes is confusing, so bytes are chosen as the base. While Kelvin is a better base unit in theory, in practice most existing hardware exposes Celsius. Kilograms are the SI base unit, however the kilo prefix is problematic, so grams are chosen as the base unit. While base units SHOULD be used in all possible cases, Kelvin is a well-established unit which MAY be used instead of Celsius for use cases such as color or black body temperatures where a comparison between a Celsius and a Kelvin metric is unlikely.

Ratios are the base unit, not percentages. Where possible, raw data in the form of gauges or counters for the given numerator and denominator should be exposed. This has better mathematical properties for analysis and aggregation in the ingestors.

Decibels are not a base unit as, firstly, deci is an SI prefix and, secondly, bels are logarithmic. To expose signal/energy/power ratios, exposing the ratio directly would be better, or better still the raw power/energy if possible. Floating point exponents are more than sufficient to cover even extreme scientific uses: from an electron volt (~1e-19 J) all the way up to the energy emitted by a supernova (~1e44 J) is 63 orders of magnitude, and a 64-bit floating point number can cover over 600 orders of magnitude.

If non-base units can not be avoided and conversion is not feasible, the actual unit should still be included in the metric name for clarity. For example, joule is the base unit for both energy and power, as watts can be expressed as a counter with a joule unit. In practice a given third-party system may only expose watts, so a gauge expressed in watts would be the only realistic choice in that case.

Not all MetricFamilies have units. For example, a count of HTTP requests wouldn't have a unit.
Technically the unit would be HTTP requests, but in that sense the entire MetricFamily name is the unit, and going to that extreme would not be useful. The possibility of having good axes on graphs in downstream systems for human consumption should always be kept in mind.

### Statelessness

The wire format defined by OpenMetrics is stateless across expositions. What information has been exposed before MUST have no impact on future expositions. Each exposition is a self-contained snapshot of the current state of the exposer. The same self-contained exposition MUST be provided to existing and new ingestors.

A core design choice is that exposers MUST NOT exclude a metric merely because it has had no recent changes or observations. An exposer must not make any assumptions about how often ingestors are consuming expositions.

### Exposition Across Time and Metric Evolution

Metrics are most useful when their evolution over time can be analysed, so accordingly expositions must make sense over time. Thus, it is not sufficient for one single exposition on its own to be useful and valid. Some changes to metric semantics can also break downstream users.

Parsers commonly optimize by caching previous results. Thus, changing the order in which labels are exposed across expositions SHOULD be avoided even though it is technically not breaking. This also tends to make writing unit tests
for exposition easier.

Metrics and samples SHOULD NOT appear and disappear from exposition to exposition; for example, a counter is only useful if it has history. In principle, a given Metric should be present in exposition from when the process starts until the process terminates. It is often not possible to know in advance what Metrics a MetricFamily will have over the lifetime of a given process (e.g. a label value of a latency histogram is an HTTP path, which is provided by an end user at runtime), but once a counter-like Metric is exposed it should continue to be exposed until the process terminates. That a counter is not getting increments doesn't invalidate its current value. There are cases where it may make sense to stop exposing a given Metric; see the section on Missing Data.

In general, changing a MetricFamily's type, or adding or removing a label from its Metrics, will be breaking to ingestors. A notable exception is that adding a label to the value of an Info MetricPoint is not breaking. This is so that you can add additional information to an existing Info MetricFamily where it makes sense, rather than being forced to create a brand new info metric with an additional label value. Ingestor systems should ensure that they are resilient to such additions.

Changing a MetricFamily's Help is not breaking. For values where it is possible, switching between floats and ints is not breaking. Adding a new state to a StateSet is not breaking. Adding unit metadata where it doesn't change the metric name is not breaking.

Histogram buckets SHOULD NOT change from exposition to exposition, as this is likely to both cause performance issues and break ingestors.
Similarly, all expositions from any consistent binary and environment of an application SHOULD have the same buckets for a given Histogram MetricFamily, so that they can be aggregated by all ingestors without ingestors having to implement histogram merging logic for heterogeneous buckets. An exception might be occasional manual changes to buckets, which are considered breaking but may be a valid tradeoff when performance characteristics change due to a new software release.

Even if changes are not technically breaking, they still carry a cost. For example, frequent changes may cause performance issues for ingestors: a Help string that varies from exposition to exposition may cause each Help value to be stored, and frequently switching between int and float values could prevent efficient compression.

### NaN

NaN is a number like any other in OpenMetrics, usually resulting from a division by zero such as for a summary quantile if there have been no observations recently. NaN does not have any special meaning in OpenMetrics, and in particular it MUST NOT be used as a marker for missing or otherwise bad data.

### Missing Data

There are valid cases when data stops being present. For example, a filesystem can be unmounted and thus its Gauge Metric for free disk space no longer exists. There is no special marker or signal for this situation; subsequent expositions simply do not include this Metric.

### Exposition Performance

Metrics are only useful if they can be collected in reasonable time frames. Metrics that take minutes to expose are not considered useful. As a rule of thumb,
exposition SHOULD take no more than a second. Metrics from legacy systems serialized through OpenMetrics may take longer. For this reason, no hard performance assumptions can be made.

Exposition SHOULD be of the most recent state. For example, a thread serving the exposition request SHOULD NOT rely on cached values, to the extent it is able to bypass any such caching.

### Concurrency

For high availability and ad-hoc access, a common approach is to have multiple ingestors. To support this, concurrent expositions MUST be supported. All best current practices for concurrent systems SHOULD be followed; common pitfalls include deadlocks, race conditions, and overly coarse-grained locking preventing expositions from progressing concurrently.

### Metric Naming and Namespaces

We aim for a balance between understandability, avoiding clashes, and succinctness in the naming of metrics and label names. Names are separated by underscores, so metric names end up in "snake_case".

To take an example, "http_request_seconds" is succinct but would clash between large numbers of applications, and it's also unclear exactly what this metric is measuring. For example, it might be before or after auth middleware in a complex system. Metric names should indicate what piece of code they come from. So a company called A Company Manufacturing Everything might prefix all metrics in their code with "acme_", and if they had an HTTP router library measuring latency it might have a metric such as "acme_http_router_request_seconds" with a Help string indicating that it is the overall latency.

It is not the aim to prevent all potential clashes across all applications, as that would require heavy-handed solutions such as a global registry of metric namespaces or very long namespaces based on DNS. Rather, the aim is to keep to a lightweight informal approach, so that for a given application it is very unlikely that there is a clash across its constituent libraries.
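The company → library → measurement → unit layering described above can be made mechanical; this tiny helper and its names are purely illustrative, not part of the spec:

```python
def metric_name(*parts: str) -> str:
    """Join snake_case name components with underscores, e.g.
    namespace prefix, library, what is measured, and the unit."""
    return "_".join(parts)

# The example from the text: company prefix, library, measurement, unit.
name = metric_name("acme", "http_router", "request", "seconds")
```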
Across a given deployment of a monitoring system as a whole, the aim is that clashes where the same metric name means different things are uncommon. For example, acme_http_router_request_seconds might end up in hundreds of different applications developed by A Company Manufacturing Everything, which is normal. If Another Corporation Making Entities also used the metric name acme_http_router_request_seconds in their HTTP router, that's also fine. If applications from both companies were being monitored by the same monitoring system, the clash is undesirable but acceptable, as no application is trying to expose both names and no one target is trying to (incorrectly) expose the same metric name twice. If an application wished to contain both companies' HTTP router libraries, that would be a problem, and one of the metric names would need to be changed somehow.

As a corollary, the more public a library is, the better namespaced its metric names should be to reduce the risk of such scenarios arising. acme_ is not a bad choice for internal use within a company, but these companies might for example choose the prefixes acmeverything_ or acorpme_ for code shared outside their company.

After namespacing by company or organisation, namespacing and naming should continue by library/subsystem/application, fractally as needed, such as the http_router library above. The goal is that if you are familiar with the overall structure of a codebase, you could make a good guess at where the instrumentation for a given
metric is given its metric name.

For a common, very well known existing piece of software, the name of the software itself may be sufficiently distinguishing. For example, bind_ is probably sufficient for the DNS software, even though isc_bind_ would be the more usual naming.

Metric names prefixed by scrape_ are used by ingestors to attach information related to individual expositions, so they should not be exposed by applications directly. Metrics that have already been consumed and passed through a general purpose monitoring system may include such metric names on subsequent expositions. If an exposer wishes to provide information about an individual exposition, a metric prefix such as myexposer_scrape_ may be used. A common example is a gauge myexposer_scrape_duration_seconds for how long that exposition took from the exposer's standpoint.

Within the Prometheus ecosystem a set of per-process metrics has emerged that is consistent across all implementations, prefixed with process_. For example, for open file ulimits the MetricFamilies process_open_fds and process_max_fds gauges provide both the current and maximum value. (These names are legacy; if such metrics were defined today they would more likely be called process_fds_open and process_fds_limit.) In general it is very challenging to get names with identical semantics like this, which is why different instrumentation should use different names.

Avoid redundancy in metric names. Avoid substrings like "metric", "timer", "stats", "counter", "total", "float64", and so on: by virtue of being a metric with a given type (and possibly unit) exposed via OpenMetrics, information like this is already implied and should not be included explicitly. You should not include the label names of a metric in the metric name for the same reason; in addition, subsequent aggregation of the metric by a monitoring system could make such information incorrect.
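An exposer-side scrape-duration gauge like the myexposer_scrape_duration_seconds example could be produced along these lines; the timing approach and the render_metrics callback are our own assumptions, not part of the spec:

```python
import time

def render_exposition(render_metrics) -> str:
    """Render the metrics body, then append a gauge reporting how long
    the exposition took from the exposer's standpoint."""
    start = time.monotonic()
    body = render_metrics()
    duration = time.monotonic() - start
    body += "# TYPE myexposer_scrape_duration_seconds gauge\n"
    body += "# UNIT myexposer_scrape_duration_seconds seconds\n"
    body += "myexposer_scrape_duration_seconds %.9f\n" % duration
    return body + "# EOF\n"
```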
Avoid including implementation details from other layers of your monitoring system in the metric names contained in your instrumentation. For example, a MetricFamily name should not contain the string "openmetrics" merely because it happens to be currently exposed via OpenMetrics somewhere, or "prometheus" merely because your current monitoring system is Prometheus.

### Label Namespacing

For label names no explicit namespacing by company or library is recommended; namespacing from the metric name is sufficient for this when considered against the length increase of the label name. However, some minimal care to avoid common clashes is recommended.

There are label names such as region, zone, cluster, availability_zone, az, datacenter, dc, owner, customer, stage, service, team, job, instance, environment, and env which are highly likely to clash with labels used to identify targets which a general purpose monitoring system may add. Try to avoid them; adding minimal namespacing may be appropriate in these cases. The label name "type" is highly generic and should be avoided. For example, for HTTP-related metrics "method" would be a better label name if you were distinguishing between GET, POST, and PUT requests.

While there is metadata about metric names such as HELP, TYPE, and UNIT, there is no metadata for label names, as it would bloat the format for little gain. Out-of-band documentation is one way exposers could present this to their ingestors.

### Metric Names versus Labels

There are situations in which both using multiple Metrics within a MetricFamily or multiple MetricFamilies seem to make sense. Summing or averaging a MetricFamily should be meaningful even if
it's not always useful. For example, mixing voltage and fan speed is not meaningful. As a reminder, OpenMetrics is built with the assumption that ingestors can process and perform aggregations on data.

Exposing a total sum alongside other metrics is wrong, as this would result in double-counting upon aggregation in downstream ingestors:

```
wrong_metric{label="a"} 1
wrong_metric{label="b"} 6
wrong_metric{label="total"} 7
```

Labels of a Metric should be kept to the minimum needed to ensure uniqueness, as every extra label is one more that users need to consider when determining what Labels to work with downstream. Labels which could be applied to many MetricFamilies are candidates for being moved into _info metrics, similar to database normalization. If virtually all users of a Metric could be expected to want the additional label, it may be a better trade-off to add it to all MetricFamilies. For example, if you had a MetricFamily relating to different SQL statements where uniqueness was provided by a label containing a hash of the full SQL statement, it would be okay to have another label with the first 500 characters of the SQL statement for human readability.

Experience has shown that downstream ingestors find it easier to work with separate total and failure MetricFamilies rather than using {result="success"} and {result="failure"} Labels within one MetricFamily. Also, it is usually better to expose separate read & write and send & receive MetricFamilies, as full duplex systems are common and downstream ingestors are more likely to care about those values separately than in aggregate.

All of this is not as easy as it may sound. It's an area where experience and engineering trade-offs by domain-specific experts in both exposition and the exposed system are required to find a good balance.

### Metric and Label Name Characters

OpenMetrics builds on the existing widely adopted Prometheus text exposition format and the ecosystem which formed around it.
Backwards compatibility is a core design goal. Expanding or contracting the set of characters that are supported by the Prometheus text format would work against that goal. Breaking backwards compatibility would have wider implications than just the wire format. In particular, the query languages created or adopted to work with data transmitted within the Prometheus ecosystem rely on these precise character sets. Label values support full UTF-8, so the format can represent multi-lingual metrics.

### Types of Metadata

Metadata can come from different sources. Over the years, two main sources have emerged. While they are often functionally the same, it helps in understanding to talk about their conceptual differences.

"Target metadata" is metadata commonly external to an exposer. Common examples would be data coming from service discovery, a CMDB, or similar, like information about a datacenter region, whether a service is part of a particular deployment, or production or testing. This can be achieved by either the exposer or the ingestor adding labels to all Metrics that capture this metadata. Doing this through the ingestor is preferred as it is more flexible and carries less overhead. On flexibility, the hardware maintenance team might care about which server rack a machine is located in, whereas the database team using that same machine might care that it contains replica number 2 of the production database. On overhead, hardcoding or configuring this information needs an additional distribution path.

"Exposer metadata" comes from within an exposer. Common examples would be software version, compiler version, or Git commit SHA.

#### Supporting Target Metadata in both Push-based and Pull-based Systems

In push-based consumption, it is typical for the exposer to provide the relevant target metadata to the ingestor. In pull-based consumption the push-based approach could be
taken, but more typically the ingestor already knows the metadata of the target a priori, such as from a machine database or service discovery system, and associates it with the metrics as it consumes the exposition. OpenMetrics is stateless and provides the same exposition to all ingestors, which is in conflict with the push-style approach. In addition, the push-style approach would break pull-style ingestors, as unwanted metadata would be exposed.

One approach would be for push-style ingestors to provide target metadata based on operator configuration out-of-band, for example as an HTTP header. While this would transport target metadata for push-style ingestors, and is not precluded by this standard, it has the disadvantage that even though pull-style ingestors should use their own target metadata, it is still often useful to have access to the metadata the exposer itself is aware of.

The preferred solution is to provide this target metadata as part of the exposition, but in a way that does not impact on the exposition as a whole. Info MetricFamilies are designed for this. An exposer may include an Info MetricFamily called "target" with a single Metric with no labels with the metadata. An example in the text format might be:

```
# TYPE target info
# HELP target Target metadata
target_info{env="prod",hostname="myhost",datacenter="sdc",region="europe",owner="frontend"} 1
```

When an exposer is providing this metric for this purpose it SHOULD be first in the exposition.
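One way a push-style ingestor might consume such a leading target_info metric is to copy its labels onto the other series it ingests. The following is a minimal sketch, not prescribed by the standard; the helper name `attach_target_metadata` and the tuple representation are hypothetical:

```python
# Sketch (hypothetical helper): an ingestor copying labels from a leading
# target_info metric onto the other series from the same exposition.
def attach_target_metadata(series):
    """series: list of (name, labels_dict, value) tuples, target_info first."""
    target_labels = {}
    out = []
    for name, labels, value in series:
        if name == "target_info":
            target_labels = labels  # remember the target metadata, don't re-emit it
            continue
        # A series' own labels win over target metadata on conflict.
        out.append((name, {**target_labels, **labels}, value))
    return out

scrape = [
    ("target_info", {"env": "prod", "region": "europe"}, 1),
    ("http_requests_total", {"path": "/"}, 42),
]
print(attach_target_metadata(scrape))
# [('http_requests_total', {'env': 'prod', 'region': 'europe', 'path': '/'}, 42)]
```

Because target_info SHOULD come first, an ingestor processing the exposition as a stream has the target labels available before any other sample arrives.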
This is for efficiency, so that ingestors relying on it for target metadata don't have to buffer up the rest of the exposition before applying business logic based on its content.

Exposers MUST NOT add target metadata labels to all Metrics from an exposition, unless explicitly configured for a specific ingestor. Exposers MUST NOT prefix MetricFamily names or otherwise vary MetricFamily names based on target metadata. Generally, the same Label should not appear on every Metric of an exposition, but there are rare cases where this can be the result of emergent behaviour. Similarly, all MetricFamily names from an exposer may happen to share a prefix in very small expositions. For example, an application written in the Go language by A Company Manufacturing Everything would likely include metrics with prefixes of acme_, go_, process_, and metric prefixes from any 3rd party libraries in use.

Exposers can expose exposer metadata as Info MetricFamilies.

The above discussion is in the context of individual exposers. An exposition from a general purpose monitoring system may contain metrics from many individual targets, and thus may expose multiple target info Metrics. The metrics may already have had target metadata added to them as labels as part of ingestion. The metric names MUST NOT be varied based on target metadata. For example, it would be incorrect for all metrics to end up being prefixed with staging_ even if they all originated from targets in a staging environment.

### Client Calculations and Derived Metrics

Exposers should leave any math or calculation up to ingestors. A notable exception is the Summary quantile, which is unfortunately required for backwards compatibility. Exposition should be of raw values which are useful over arbitrary time periods. As an example, you should not expose a gauge with the average rate of increase of a counter over the last 5 minutes. Letting the ingestor calculate the increase over the data points they have consumed across
expositions has better mathematical properties and is more resilient to scrape failures. Another example is the average event size of a histogram/summary.

Exposing the average rate of increase of a counter since an application started or since a Metric was created has the problems from the earlier example, and it also prevents aggregation. Standard deviation also falls into this category. Exposing a sum of squares as a counter would be the correct approach. It was not included in this standard as a Histogram value because 64-bit floating point precision is not sufficient for this to work in practice. Due to the squaring, only half the 53-bit mantissa would be available in terms of precision. As an example, a histogram observing 10k events per second would lose precision within 2 hours. Using 64-bit integers would be no better: due to the loss of the floating decimal point, a nanosecond resolution integer typically tracking events of a second in length would overflow after 19 observations. This design decision can be revisited when 128-bit floating point numbers become common.

Another example is a request failure ratio: avoid exposing it, and expose separate counters for failed requests and total requests instead.

### Number Types

For a counter that was incremented a million times per second, it would take over a century to begin to lose precision with a float64, as it has a 53-bit mantissa. Yet a 100 Gbps network interface's octet throughput precision could begin to be lost with a float64 within around 20 hours.
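The 53-bit mantissa limit can be demonstrated directly: every integer up to 2^53 is exactly representable in a float64, but beyond that point adjacent integers collapse onto the same value. A short sketch of the arithmetic behind the "over a century" claim:

```python
# Sketch: the 53-bit mantissa limit of float64 in practice.
LIMIT = 2 ** 53  # 9,007,199,254,740,992

# Every integer up to 2**53 is exactly representable...
assert float(LIMIT - 1) == LIMIT - 1

# ...but beyond it, adjacent integers collapse onto the same float:
assert float(LIMIT) == float(LIMIT + 1)

# A counter incremented a million times per second only reaches 2**53
# after roughly 285 years -- comfortably "over a century":
years = LIMIT / 1_000_000 / (3600 * 24 * 365)
print(round(years, 1))
```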
While losing 1KB of precision over the course of years for a 100 Gbps network interface is unlikely to be a problem in practice, int64s are an option for integral data with such a high throughput. Summary quantiles must be float64, as they are estimates and thus fundamentally inaccurate.

### Exposing Timestamps

One of the core assumptions of OpenMetrics is that exposers expose the most up-to-date snapshot of what they're exposing. While there are limited use cases for attaching timestamps to exposed data, these are very uncommon. Data which had timestamps previously attached, in particular data which has been ingested into a general purpose monitoring system, may carry timestamps. Live or raw data should not carry timestamps. It is valid to expose the same MetricPoint value with the same timestamp across expositions; however, it is invalid to do so if the underlying metric is now missing.

Time synchronization is a hard problem and data should be internally consistent in each system. As such, ingestors should be able to attach the current timestamp from their perspective to data, rather than based on the system time of the exposer device. With timestamped metrics it is not generally possible to detect the time when a Metric went missing across expositions. However, with non-timestamped metrics the ingestor can use its own timestamp from the exposition where the Metric is no longer present.

All of this is to say that, in general, MetricPoint timestamps should not be exposed, as it should be up to the ingestor to apply their own timestamps to samples they ingest.

#### Tracking When Metrics Last Changed

Presume you had a counter my_counter which was initialized, and then later incremented by 1 at time 123. This would be a correct way to expose it in the
text format:

```
# HELP my_counter Good increment example
# TYPE my_counter counter
my_counter_total 1
```

As per the parent section, ingestors should be free to attach their own timestamps, so this would be incorrect:

```
# HELP my_counter Bad increment example
# TYPE my_counter counter
my_counter_total 1 123
```

In case the specific time of the last change of a counter matters, this would be the correct way:

```
# HELP my_counter Good increment example
# TYPE my_counter counter
my_counter_total 1
# HELP my_counter_last_increment_timestamp_seconds When my_counter was last incremented
# TYPE my_counter_last_increment_timestamp_seconds gauge
# UNIT my_counter_last_increment_timestamp_seconds seconds
my_counter_last_increment_timestamp_seconds 123
```

By putting the timestamp of last change into its own Gauge as a value, ingestors are free to attach their own timestamp to both Metrics.

Experience has shown that exposing absolute timestamps (epoch is considered absolute here) is more robust than time elapsed, seconds since, or similar. In either case, they would be gauges. For example:

```
# TYPE my_boot_time_seconds gauge
# HELP my_boot_time_seconds Boot time of the machine
# UNIT my_boot_time_seconds seconds
my_boot_time_seconds 1256060124
```

is better than:

```
# TYPE my_time_since_boot_seconds gauge
# HELP my_time_since_boot_seconds Time elapsed since machine booted
# UNIT my_time_since_boot_seconds seconds
my_time_since_boot_seconds 123
```

Conversely, there are no best practice restrictions on exemplar timestamps.
Keep in mind that, due to race conditions or time not being perfectly synced across devices, an exemplar timestamp may appear to be slightly in the future relative to an ingestor's system clock or other metrics from the same exposition. Similarly, it is possible that a "_created" for a MetricPoint could appear to be slightly after an exemplar or sample timestamp for that same MetricPoint.

Keep in mind that there are monitoring systems in common use which support everything from nanosecond to second resolution, so having two MetricPoints that have the same timestamp when truncated to second resolution may cause an apparent duplicate in the ingestor. In this case the MetricPoint with the earliest timestamp MUST be used.

### Thresholds

Exposing desired bounds for a system can make sense, but proper care needs to be taken.

For values which are universally true, it can make sense to emit Gauge metrics for such thresholds. For example, a data center HVAC system knows the current measurements, the setpoints, and the alert setpoints. It has a globally valid and correct view of the desired system state.

As a counter example, some thresholds can change with scale, deployment model, or over time. A certain amount of CPU usage may be acceptable in one setting and undesirable in another. Aggregation of values can further change acceptable values. In such a system, exposing bounds could be counter-productive.

For example, the maximum size of a queue may be exposed alongside the number of items currently in the queue like:

```
# HELP acme_notifications_queue_capacity The capacity of the notifications queue.
# TYPE acme_notifications_queue_capacity gauge
acme_notifications_queue_capacity 10000
# HELP acme_notifications_queue_length The number of notifications in the queue.
# TYPE acme_notifications_queue_length gauge
acme_notifications_queue_length 42
```

### Size Limits

This standard does not prescribe any particular limits on the number of samples exposed by a single exposition, the number of labels that may be present, the number of states a stateset may have, the number of labels in an info value, or metric name/label name/label value/help character limits. Specific limits
run the risk of preventing reasonable use cases; for example, while a given exposition may have an appropriate number of labels, after passing through a general purpose monitoring system a few target labels may have been added that would push it over the limit. Specific limits on numbers such as these would also not capture where the real costs are for general purpose monitoring systems. These guidelines are thus both to aid exposers and ingestors in understanding what is reasonable.

On the other hand, an exposition which is too large in some dimension could cause significant performance problems compared to the benefit of the metrics exposed. Thus some guidelines on the size of any single exposition would be useful. Ingestors may choose to impose limits themselves, in particular to prevent attacks or outages. Still, ingestors need to consider reasonable use cases and try not to disproportionately impact them. If any single value/metric/exposition exceeds such limits then the whole exposition must be rejected.

In general there are three things which impact the performance of a general purpose monitoring system ingesting time series data: the number of unique time series, the number of samples over time in those series, and the number of unique strings such as metric names, label names, label values, and HELP. Ingestors can control how often they ingest, so that aspect does not need further consideration. The number of unique time series is roughly equivalent to the number of non-comment lines in the text format.
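That equivalence makes a rough series count easy to estimate from the text format itself. A minimal sketch (the metric names are hypothetical examples):

```python
# Sketch: estimating the number of unique time series in a text-format
# exposition by counting non-comment, non-empty lines.
exposition = """\
# HELP acme_http_requests_total HTTP requests handled
# TYPE acme_http_requests_total counter
acme_http_requests_total{path="/"} 100
acme_http_requests_total{path="/metrics"} 7
# EOF
"""

series_count = sum(
    1
    for line in exposition.splitlines()
    if line and not line.startswith("#")  # "# HELP", "# TYPE", "# EOF" are comments
)
print(series_count)  # 2
```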
As of 2020, 10 million time series in total is considered a large amount and is commonly the order of magnitude of the upper bound of any single-instance ingestor. Any single exposition should not go above 10k time series without due diligence. One common consideration is horizontal scaling: what happens if you scale your instance count by 1-2 orders of magnitude? Having a thousand top-of-rack switches in a single deployment would have been hard to imagine 30 years ago. If a target was a singleton (e.g. exposing metrics relating to an entire cluster) then several hundred thousand time series may be reasonable.

It is not the number of unique MetricFamilies or the cardinality of individual labels/buckets/statesets that matters, it is the total order of magnitude of the time series. 1,000 gauges with one Metric each are as costly as a single gauge with 1,000 Metrics.

If all targets of a particular type are exposing the same set of time series, then each additional target's strings pose no incremental cost to most reasonably modern monitoring systems. If however each target has unique strings, there is such a cost. As an extreme example, a single 10k character metric name used by many targets is on its own very unlikely to be a problem in practice. To the contrary, a thousand targets each exposing a unique 36 character UUID is over three times as expensive as that single 10k character metric name in terms of strings to be stored, assuming modern approaches. In addition, if these strings change over time, older strings will still need to be stored for at least some time, incurring extra cost. Assuming the 10 million time series from the last paragraph, 100MB of unique strings per hour might indicate that the use case is more
like event logging than metric time series.

There is a hard 128 UTF-8 character limit on exemplar length, to prevent misuse of the feature for tracing span data and other event logging.

## Security

Implementors MAY choose to offer authentication, authorization, and accounting; if they so choose, this SHOULD be handled outside of OpenMetrics.

All exposer implementations SHOULD be able to secure their HTTP traffic with TLS 1.2 or later. If an exposer implementation does not support encryption, operators SHOULD use reverse proxies, firewalling, and/or ACLs where feasible.

Metric exposition should be independent of production services exposed to end users; as such, having a /metrics endpoint on ports like TCP/80, TCP/443, TCP/8080, and TCP/8443 is generally discouraged for publicly exposed services using OpenMetrics.

## IANA

While currently most implementations of the Prometheus exposition format are using non-IANA-registered ports from an informal registry at {{PrometheusPorts}}, OpenMetrics can be found on a well-defined port. The port assigned by IANA for clients exposing data is <9099 requested for historical consistency>.

If more than one metric endpoint needs to be reachable at a common IP address and port, operators might consider using a reverse proxy that communicates with exposers over localhost addresses. To ease multiplexing, endpoints SHOULD carry their own name in their path, e.g. `/node_exporter/metrics`. Expositions SHOULD NOT be combined into one exposition, for the reasons covered under "Supporting target metadata in both push-based and pull-based systems" and to allow for independent ingestion without a single point of failure.
OpenMetrics would like to register two MIME types, `application/openmetrics-text` and `application/openmetrics-proto`.
- Version: 1.1
- Status: Draft
- Date: TBD
- Authors: Richard Hartmann, Ben Kochie, Brian Brazil, Rob Skillington

Created in 2012, Prometheus has been the default for cloud-native observability since 2015. A central part of Prometheus' design is its text metric exposition format, called the Prometheus exposition format 0.0.4, stable since 2014. In this format, special care has been taken to make it easy to generate, to ingest, and to understand by humans. As of 2020, there are more than 700 publicly listed exporters, an unknown number of unlisted exporters, and thousands of native library integrations using this format. Dozens of ingestors from various projects and companies support consuming it.

With OpenMetrics, we are cleaning up and tightening the specification with the express purpose of bringing it into IETF. We are documenting a working standard with wide and organic adoption while introducing minimal, largely backwards-compatible, and well-considered changes. As of 2020, dozens of exporters, integrations, and ingestors use and preferentially negotiate OpenMetrics already. Given the wide adoption and significant coordination requirements in the ecosystem, sweeping changes to either the Prometheus exposition format 0.0.4 or OpenMetrics 1.0 are considered out of scope.

> NOTE: OpenMetrics 2.0 development is in progress. Read [here](https://github.com/prometheus/OpenMetrics/issues/276) on how to join the Prometheus OM 2.0 work group.

## Overview

Metrics are a specific kind of telemetry data. They represent a snapshot of the current state for a set of data. They are distinct from logs or events, which focus on records or information about individual events. OpenMetrics is primarily a wire format, independent of any particular transport for that format. The format is expected to be consumed on a regular basis and to be meaningful over successive expositions.
Implementers MUST expose metrics in the OpenMetrics text format in response to a simple HTTP GET request to a documented URL for a given process or device. This endpoint SHOULD be called "/metrics". Implementers MAY also expose OpenMetrics formatted metrics in other ways, such as by regularly pushing metric sets to an operator-configured endpoint over HTTP.

### Metrics and Time Series

This standard expresses all system states as numerical values; counts, current values, enumerations, and boolean states being common examples. Contrary to metrics, singular events occur at a specific time. Metrics tend to aggregate data temporally. While this can lose information, the reduction in overhead is an engineering trade-off commonly chosen in many modern monitoring systems. Time series are a record of changing information over time. While time series can support arbitrary strings or binary data, only numeric data is in scope for this RFC. Common examples of metric time series would be network interface counters, device temperatures, BGP connection states, and alert states.

## Data Model

This section MUST be read together with the ABNF section. In case of disagreements between the two, the ABNF's restrictions MUST take precedence. This reduces repetition as the text wire format MUST be supported.

### Data Types

#### Values

Metric values in OpenMetrics MUST be either floating points or integers. Note that ingestors of the format MAY only support float64. The non-real values NaN, +Inf and -Inf MUST be supported. NaN MUST NOT be considered a missing value, but it MAY be used to signal a division by zero.

##### Booleans

Boolean values MUST follow `1==true`, `0==false`.

#### Timestamps

Timestamps MUST be Unix Epoch in seconds. Negative timestamps MAY be used.

#### Strings

Strings MUST only consist of valid UTF-8 characters and MAY be zero length. NULL (ASCII 0x0) MUST be supported.

#### Label

Labels are key-value pairs consisting of strings.
Label names beginning with underscores are RESERVED and MUST
NOT be used unless specified by this standard. Label names MUST follow the restrictions in the ABNF section. Empty label values SHOULD be treated as if the label was not present.

#### LabelSet

A LabelSet MUST consist of Labels and MAY be empty. Label names MUST be unique within a LabelSet.

#### MetricPoint

Each MetricPoint consists of a set of values, depending on the MetricFamily type.

#### Exemplars

Exemplars are references to data outside of the MetricSet. A common use case is IDs of program traces. Exemplars MUST consist of a LabelSet and a value, and MAY have a timestamp. They MAY each be different from the MetricPoints' LabelSet and timestamp. The combined length of the label names and values of an Exemplar's LabelSet MUST NOT exceed 128 UTF-8 character code points. Other characters in the text rendering of an exemplar such as `",=` are not included in this limit for implementation simplicity and for consistency between the text and proto formats. Ingestors MAY discard exemplars.

#### Metric

Metrics are defined by a unique LabelSet within a MetricFamily. Metrics MUST contain a list of one or more MetricPoints. Metrics with the same name for a given MetricFamily SHOULD have the same set of label names in their LabelSet. MetricPoints SHOULD NOT have explicit timestamps. If more than one MetricPoint is exposed for a Metric, then its MetricPoints MUST have monotonically increasing timestamps.

#### MetricFamily

A MetricFamily MAY have zero or more Metrics. A MetricFamily MUST have a name, HELP, TYPE, and UNIT metadata. Every Metric within a MetricFamily MUST have a unique LabelSet.
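The 128 code point budget for an Exemplar's LabelSet counts only label names and values, not the surrounding punctuation. A minimal sketch of that check (the helper name is hypothetical, not from the standard):

```python
# Sketch (hypothetical helper): validating the exemplar LabelSet length limit.
# Only label names and values count toward the 128 code point budget;
# the {",=} punctuation of the text rendering does not.
def exemplar_labelset_ok(labels: dict) -> bool:
    # Python's len() counts code points, matching the limit's definition.
    length = sum(len(name) + len(value) for name, value in labels.items())
    return length <= 128

assert exemplar_labelset_ok({"trace_id": "a" * 64})       # 8 + 64 = 72, within budget
assert not exemplar_labelset_ok({"trace_id": "a" * 128})  # 8 + 128 = 136, too long
```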
##### Name

MetricFamily names are a string and MUST be unique within a MetricSet. Names SHOULD be in snake_case. Metric names MUST follow the restrictions in the ABNF section. Colons in MetricFamily names are RESERVED to signal that the MetricFamily is the result of a calculation or aggregation of a general purpose monitoring system. MetricFamily names beginning with underscores are RESERVED and MUST NOT be used unless specified by this standard.

###### Suffixes

The name of a MetricFamily MUST NOT result in a potential clash for sample metric names as per the ABNF with another MetricFamily in the Text Format within a MetricSet. An example would be a gauge called "foo_created", as a counter called "foo" could create a "foo_created" in the text format. Exposers SHOULD avoid names that could be confused with the suffixes that text format sample metric names use.

Suffixes for the respective types are:

* Counter: `_total`, `_created`
* Summary: `_count`, `_sum`, `_created`, (empty)
* Histogram: `_count`, `_sum`, `_bucket`, `_created`
* GaugeHistogram: `_gcount`, `_gsum`, `_bucket`
* Info: `_info`
* Gauge: (empty)
* StateSet: (empty)
* Unknown: (empty)

##### Type

Type specifies the MetricFamily type. Valid values are "unknown", "gauge", "counter", "stateset", "info", "histogram", "gaugehistogram", and "summary".

##### Unit

Unit specifies MetricFamily units. If non-empty, it SHOULD be a suffix of the MetricFamily name separated by an underscore. Be aware that further generation rules might make it an infix in the text format. Be aware that exposing metrics without the unit being a suffix of the MetricFamily name directly to end-users may reduce the usability due to confusion about what the metric's unit is.

##### Help

Help is a string and SHOULD be non-empty. It is used to give
a brief description of the MetricFamily for human consumption, and SHOULD be short enough to be used as a tooltip.

##### MetricSet

A MetricSet is the top level object exposed by OpenMetrics. It MUST consist of MetricFamilies and MAY be empty. Each MetricFamily name MUST be unique. The same label name and value SHOULD NOT appear on every Metric within a MetricSet. There is no specific ordering of MetricFamilies required within a MetricSet. An exposer MAY make an exposition easier to read for humans, for example sort alphabetically if the performance trade-off makes sense. If present, an Info MetricFamily called "target" per the "Supporting target metadata in both push-based and pull-based systems" section below SHOULD be first.

### Metric Types

#### Gauge

Gauges are current measurements, such as bytes of memory currently used or the number of items in a queue. For gauges, the absolute value is what is of interest to a user. A MetricPoint in a Metric with the type gauge MUST have a single value. Gauges MAY increase, decrease, or stay constant over time. Even if they only ever go in one direction, they might still be gauges and not counters. The size of a log file would usually only increase, a resource might decrease, and the limit of a queue size may be constant. A gauge MAY be used to encode an enum where the enum has many states and changes over time; it is the most efficient but least user-friendly encoding.

#### Counter

Counters measure discrete events. Common examples are the number of HTTP requests received, CPU seconds spent, or bytes sent. For counters, how quickly they are increasing over time is what is of interest to a user.
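How an ingestor turns raw counter samples into a rate can be sketched in a few lines. This is a simplification (the helper name and the reset handling shown are illustrative, not mandated by this standard):

```python
# Sketch (hypothetical helper): deriving a per-second rate from two successive
# counter samples, accounting for a possible reset to 0 between scrapes.
def counter_rate(prev_value, prev_ts, curr_value, curr_ts):
    if curr_value >= prev_value:
        increase = curr_value - prev_value
    else:
        # Counter reset between scrapes: count at least the new value.
        increase = curr_value
    return increase / (curr_ts - prev_ts)

print(counter_rate(100, 0, 160, 60))  # 1.0 events/second
print(counter_rate(100, 0, 30, 60))   # 0.5 -- a reset happened, 30 new events
```

This is why exposers should expose the raw, monotonically non-decreasing total rather than a precomputed rate: the ingestor can derive rates over any window it chooses.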
A MetricPoint in a Metric with the type Counter MUST have one value called Total. A Total is non-NaN and MUST be monotonically non-decreasing over time, starting from 0.

A MetricPoint in a Metric with the type Counter SHOULD have a Timestamp value called Created. This can help ingestors discern between new metrics and long-running ones it did not see before.

A MetricPoint in a Metric's Counter's Total MAY reset to 0. If present, the corresponding Created time MUST also be set to the timestamp of the reset.

A MetricPoint in a Metric's Counter's Total MAY have an exemplar.

#### StateSet

StateSets represent a series of related boolean values, also called a bitset. If ENUMs need to be encoded this MAY be done via StateSet.

A point of a StateSet metric MAY contain multiple states and MUST contain one boolean per State. States have names which are Strings.

A StateSet Metric's LabelSet MUST NOT have a label name which is the same as the name of its MetricFamily.

If encoded as a StateSet, ENUMs MUST have exactly one Boolean which is true within a MetricPoint. This is suitable where the enum value changes over time, and the number of States isn't much more than a handful.

MetricFamilies of type StateSets MUST have an empty Unit string.

#### Info

Info metrics are used to expose textual information which SHOULD NOT change during process lifetime. Common examples are an application's version, revision control commit, and the version of a compiler.

A MetricPoint of an Info Metric contains a LabelSet. An Info MetricPoint's LabelSet MUST NOT have
a label name which is the same as the name of a label of the LabelSet of its Metric.

Info MAY be used to encode ENUMs whose values do not change over time, such as the type of a network interface.

MetricFamilies of type Info MUST have an empty Unit string.

#### Histogram

Histograms measure distributions of discrete events. Common examples are the latency of HTTP requests, function runtimes, or I/O request sizes.

A Histogram MetricPoint MUST contain at least one bucket, and SHOULD contain Sum and Created values. Every bucket MUST have a threshold and a value.

Histogram MetricPoints MUST have one bucket with an +Inf threshold. Buckets MUST be cumulative. As an example, for a metric representing request latency in seconds, its values for buckets with thresholds 1, 2, 3, and +Inf MUST follow value\_1 <= value\_2 <= value\_3 <= value\_+Inf. If ten requests took 1 second each, the values of the 1, 2, 3, and +Inf buckets MUST equal 10.

The +Inf bucket counts all requests. If present, the Sum value MUST equal the Sum of all the measured event values. Bucket thresholds within a MetricPoint MUST be unique.

Semantically, Sum and bucket values are counters so MUST NOT be NaN or negative. Negative threshold buckets MAY be used, but then the Histogram MetricPoint MUST NOT contain a sum value as it would no longer be a counter semantically. Bucket thresholds MUST NOT equal NaN. Count and bucket values MUST be integers.

A Histogram MetricPoint SHOULD have a Timestamp value called Created. This can help ingestors discern between new metrics and long-running ones it did not see before.

A Histogram's Metric's LabelSet MUST NOT have a "le" label name.
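The cumulative-bucket rule above can be illustrated with a small non-normative Python sketch (the function name and shape are illustrative, not part of the specification):

```python
def cumulative_buckets(observations, thresholds):
    """Cumulative bucket counts: each bucket counts every
    observation less than or equal to its threshold, and the
    +Inf bucket counts all observations."""
    counts = {}
    for le in thresholds:
        counts[le] = sum(1 for v in observations if v <= le)
    counts[float("inf")] = len(observations)
    return counts

# Ten 1-second requests: the 1, 2, 3 and +Inf buckets all equal 10,
# matching the worked example in the text above.
buckets = cumulative_buckets([1.0] * 10, [1.0, 2.0, 3.0])
print(buckets[1.0], buckets[float("inf")])  # 10 10
```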
Bucket values MAY have exemplars. Buckets are cumulative to allow monitoring systems to drop any non-+Inf bucket for performance/anti-denial-of-service reasons in a way that loses granularity but is still a valid Histogram.

Each bucket covers the values less than or equal to it, and the value of the exemplar MUST be within this range. Exemplars SHOULD be put into the bucket with the highest value. A bucket MUST NOT have more than one exemplar.

#### GaugeHistogram

GaugeHistograms measure current distributions. Common examples are how long items have been waiting in a queue, or the size of the requests in a queue.

A GaugeHistogram MetricPoint MUST have one bucket with an +Inf threshold, and SHOULD contain a Gsum value. Every bucket MUST have a threshold and a value.

The buckets for a GaugeHistogram follow all the same rules as for a Histogram.

The bucket and Gsum of a GaugeHistogram are conceptually gauges, however bucket values MUST NOT be negative or NaN. If negative threshold buckets are present, then sum MAY be negative. Gsum MUST NOT be NaN. Bucket values MUST be integers.

A GaugeHistogram's Metric's LabelSet MUST NOT have a "le" label name.

Bucket values can have exemplars. Each bucket covers the values less than or equal to it, and the value of the exemplar MUST be within this range. Exemplars SHOULD be put into the bucket with the highest value. A bucket MUST NOT have more than one exemplar.

#### Summary

Summaries also measure distributions of discrete events and MAY be used when Histograms are too expensive and/or an average event size is sufficient. They MAY also be used
for backwards compatibility, because some existing instrumentation libraries expose precomputed quantiles and do not support Histograms. Precomputed quantiles SHOULD NOT be used, because quantiles are not aggregatable and the user often cannot deduce what timeframe they cover.

A Summary MetricPoint MAY consist of a Count, Sum, Created, and a set of quantiles.

Semantically, Count and Sum values are counters so MUST NOT be NaN or negative. Count MUST be an integer.

A MetricPoint in a Metric with the type Summary which contains Count or Sum values SHOULD have a Timestamp value called Created. This can help ingestors discern between new metrics and long-running ones it did not see before. Created MUST NOT relate to the collection period of quantile values.

Quantiles are a map from a quantile to a value. An example is a quantile 0.95 with value 0.2 in a metric called myapp\_http\_request\_duration\_seconds, which means that the 95th percentile latency is 200ms over an unknown timeframe. If there are no events in the relevant timeframe, the value for a quantile MUST be NaN.

A Quantile's Metric's LabelSet MUST NOT have a "quantile" label name. Quantiles MUST be between 0 and 1 inclusive. Quantile values MUST NOT be negative. Quantile values SHOULD represent recent values. Commonly this would be over the last 5-10 minutes.

#### Unknown

Unknown SHOULD NOT be used. Unknown MAY be used when it is impossible to determine the types of individual metrics from 3rd party systems.

A point in a metric with the unknown type MUST have a single value.

# Data transmission & wire formats

The text wire format MUST be supported and is the default.
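Non-normative illustration of the text format being the default: a pull-based exposer choosing a content type might look like the following Python sketch. It is deliberately simplified, ignoring Accept-header q-values, and assumes protobuf is chosen only when explicitly requested:

```python
# Content types as given in this specification.
TEXT = "application/openmetrics-text; version=1.0.0; charset=utf-8"
PROTO = "application/openmetrics-protobuf; version=1.0.0"

def negotiate(accept_header):
    """Pick a content type for a pull-based exposition.

    Simplified sketch: the text format is the default, and the
    protobuf format is used only after explicit negotiation."""
    accepted = [part.strip() for part in accept_header.split(",")]
    if any(a.startswith("application/openmetrics-protobuf") for a in accepted):
        return PROTO
    return TEXT

print(negotiate("*/*") == TEXT)  # True
```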
The protobuf wire format MAY be supported and MUST ONLY be used after negotiation.

The OpenMetrics formats are Regular Chomsky Grammars, making writing quick and small parsers possible. The text format compresses well, and protobuf is already binary and efficiently encoded.

Partial or invalid expositions MUST be considered erroneous in their entirety.

### Protocol Negotiation

All ingestor implementations MUST be able to ingest data secured with TLS 1.2 or later. All exposers SHOULD be able to emit data secured with TLS 1.2 or later. Ingestor implementations SHOULD be able to ingest data from HTTP without TLS. All implementations SHOULD use TLS to transmit data.

Negotiation of what version of the OpenMetrics format to use is out-of-band. For example, for pull-based exposition over HTTP, standard HTTP content type negotiation is used, and MUST default to the oldest version of the standard (i.e. 1.0.0) if no newer version is requested. Push-based negotiation is inherently more complex, as the exposer typically initiates the connection. Producers MUST use the oldest version of the standard (i.e. 1.0.0) unless requested otherwise by the ingestor.

### Text format

#### ABNF

ABNF as per RFC 5234. "exposition" is the top level token of the ABNF.

```abnf
exposition = metricset HASH SP eof [ LF ]

metricset = *metricfamily

metricfamily = *metric-descriptor *metric

metric-descriptor = HASH SP type SP metricname SP metric-type LF
metric-descriptor =/ HASH SP help SP metricname SP escaped-string LF
metric-descriptor =/ HASH SP unit SP metricname SP *metricname-char LF

metric = *sample

metric-type = counter / gauge / histogram / gaugehistogram / stateset
metric-type =/ info / summary / unknown

sample = metricname [labels] SP number [SP timestamp] [exemplar] LF

exemplar = SP
           HASH SP labels SP number [SP timestamp]

labels = "{" [label *(COMMA label)] "}"
label = label-name EQ DQUOTE escaped-string DQUOTE

number = realnumber
; Case insensitive
number =/ [SIGN] ("inf" / "infinity")
number =/ "nan"

timestamp = realnumber

; Not 100% sure this captures all float corner cases.
; Leading 0s explicitly okay
realnumber = [SIGN] 1*DIGIT
realnumber =/ [SIGN] 1*DIGIT ["." *DIGIT] [ "e" [SIGN] 1*DIGIT ]
realnumber =/ [SIGN] *DIGIT "." 1*DIGIT [ "e" [SIGN] 1*DIGIT ]

; RFC 5234 is case insensitive.
; Uppercase
eof = %d69.79.70
type = %d84.89.80.69
help = %d72.69.76.80
unit = %d85.78.73.84

; Lowercase
counter = %d99.111.117.110.116.101.114
gauge = %d103.97.117.103.101
histogram = %d104.105.115.116.111.103.114.97.109
gaugehistogram = gauge histogram
stateset = %d115.116.97.116.101.115.101.116
info = %d105.110.102.111
summary = %d115.117.109.109.97.114.121
unknown = %d117.110.107.110.111.119.110

BS = "\"
EQ = "="
COMMA = ","
HASH = "#"
SIGN = "-" / "+"

metricname = metricname-initial-char 0*metricname-char
metricname-char = metricname-initial-char / DIGIT
metricname-initial-char = ALPHA / "_" / ":"

label-name = label-name-initial-char *label-name-char
label-name-char = label-name-initial-char / DIGIT
label-name-initial-char = ALPHA / "_"

escaped-string = *escaped-char

escaped-char = normal-char
escaped-char =/ BS ("n" / DQUOTE / BS)
escaped-char =/ BS normal-char

; Any unicode character, except newline, double quote, and backslash
normal-char = %x00-09 / %x0B-21 / %x23-5B / %x5D-D7FF / %xE000-10FFFF
```

#### Overall Structure

UTF-8 MUST be used. Byte order markers (BOMs) MUST NOT be used.
As an important reminder for implementers, byte 0 is valid UTF-8 while, for example, byte 255 is not.

The content type MUST be:

```
application/openmetrics-text; version=1.0.0; charset=utf-8
```

Line endings MUST be signalled with line feed (\n) and MUST NOT contain carriage returns (\r). Expositions MUST end with EOF and SHOULD end with `EOF\n`.

An example of a complete exposition:

```openmetrics
# TYPE acme_http_router_request_seconds summary
# UNIT acme_http_router_request_seconds seconds
# HELP acme_http_router_request_seconds Latency through all of ACME's HTTP request router.
acme_http_router_request_seconds_sum{path="/api/v1",method="GET"} 9036.32
acme_http_router_request_seconds_count{path="/api/v1",method="GET"} 807283.0
acme_http_router_request_seconds_created{path="/api/v1",method="GET"} 1605281325.0
acme_http_router_request_seconds_sum{path="/api/v2",method="POST"} 479.3
acme_http_router_request_seconds_count{path="/api/v2",method="POST"} 34.0
acme_http_router_request_seconds_created{path="/api/v2",method="POST"} 1605281325.0
# TYPE go_goroutines gauge
# HELP go_goroutines Number of goroutines that currently exist.
go_goroutines 69
# TYPE process_cpu_seconds counter
# UNIT process_cpu_seconds seconds
# HELP process_cpu_seconds Total user and system CPU time spent in seconds.
process_cpu_seconds_total 4.20072246e+06
# EOF
```

##### Escaping

Where the ABNF notes escaping, the following escaping MUST be applied:

- Line feed, `\n` (0x0A) -> literally `\n` (Bytecode 0x5c 0x6e)
- Double quotes -> `\"` (Bytecode 0x5c 0x22)
- Backslash -> `\\` (Bytecode 0x5c 0x5c)

A double backslash SHOULD be used to represent a backslash character. A single backslash SHOULD NOT be used for undefined escape sequences. As an example, `\\a` is equivalent and preferable to `\a`.

##### Numbers

Integer numbers MUST NOT have a decimal point. Examples are `23`, `0042`, and `1341298465647914`.
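The framing rules above (UTF-8 without a byte order marker, LF line endings, a terminating EOF line) can be checked with a non-normative Python sketch like this; it is a framing check only, not a parser:

```python
def check_framing(exposition: bytes) -> None:
    """Basic framing checks: valid UTF-8, no BOM, no carriage
    returns, and a terminating '# EOF' line. Raises on failure."""
    text = exposition.decode("utf-8")  # raises UnicodeDecodeError if invalid
    if text.startswith("\ufeff"):
        raise ValueError("byte order markers MUST NOT be used")
    if "\r" in text:
        raise ValueError("carriage returns MUST NOT appear")
    lines = text.rstrip("\n").split("\n")
    if lines[-1] != "# EOF":
        raise ValueError("expositions MUST end with EOF")

check_framing(b"# TYPE foo gauge\nfoo 17.0\n# EOF\n")  # passes silently
```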
Floating point numbers MUST be represented either with a decimal point or using scientific notation. Examples are `8903.123421` and `1.89e-7`. Floating point numbers MUST fit within the range of a 64-bit floating point value as defined by IEEE 754, but MAY require so many bits in the mantissa that precision is lost. This MAY be used to encode nanosecond resolution timestamps.

Arbitrary integer and floating point rendering of numbers MUST NOT be used for "quantile" and "le" label values, as per the section "Canonical Numbers". They MAY be used anywhere else numbers are used.

###### Considerations: Canonical Numbers

Numbers in the "le" label values of
histograms and "quantile" label values of summary metrics are special in that they're label values, and label values are intended to be opaque. As end users will likely directly interact with these string values, and as many monitoring systems lack the ability to deal with them as first-class numbers, it would be beneficial if a given number always had the exact same text representation.

Consistency is highly desirable, but real world implementations of languages and their runtimes make mandating this impractical.

The most important common quantiles are 0.5, 0.95, 0.9, 0.99, 0.999, and bucket values representing values from a millisecond up to 10.0 seconds, because those cover cases like latency SLAs and Apdex for typical web services. Powers of ten are covered to try to ensure that the switch between fixed point and exponential rendering is consistent, as this varies across runtimes.

The target rendering is equivalent to the default Go rendering of float64 values (i.e. %g), with a .0 appended in case there is no decimal point or exponent to make clear that they are floats.

Exposers MUST produce output for positive infinity as +Inf.

Exposers SHOULD produce output for the values 0.0 up to 10.0 in 0.001 increments in line with the following examples:

0.0 0.001 0.002 0.01 0.1 0.9 0.95 0.99 0.999 1.0 1.7 10.0

Exposers SHOULD produce output for the values 1e-10 up to 1e+10 in powers of ten in line with the following examples:

1e-10 1e-09 1e-05 0.0001 0.1 1.0 100000.0 1e+06 1e+10

Parsers MUST NOT reject inputs which are outside of the canonical values merely because they are not consistent with the canonical values.
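Non-normative sketch: Python's `%g` formatting happens to match Go's for the canonical values listed above, so the target rendering can be approximated as follows. Note that `%g` defaults to six significant digits, so arbitrary non-canonical values would need `%.17g` for full precision:

```python
def canonical(value):
    """Approximate the target rendering: Go's default %g
    formatting of a float64, with ".0" appended when there is
    neither a decimal point nor an exponent."""
    if value == float("inf"):
        return "+Inf"
    s = "%g" % value
    if "." not in s and "e" not in s:
        s += ".0"
    return s

# A few of the canonical examples from the text above:
for v in (0.0, 0.001, 0.95, 1.0, 1e-05, 100000.0, 1e+06, 1e+10):
    print(canonical(v))
# 0.0  0.001  0.95  1.0  1e-05  100000.0  1e+06  1e+10
```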
For example, 1.1e-4 must not be rejected, even though it is not the consistent rendering of 0.00011.

Exposers SHOULD follow these patterns for non-canonical numbers, and the intention is that, by adjusting the rendering algorithm to be consistent for these values, the vast majority of other values will also have consistent rendering. Exposers using only a few particular le/quantile values could also hardcode them.

In languages such as C, where a minimal floating point rendering algorithm such as Grisu3 is not readily available, exposers MAY use a different rendering.

A warning to implementers in C and other languages that share its printf implementation: The standard precision of %f, %e and %g is only six significant digits. 17 significant digits are required for full precision, e.g. `printf("%.17g", d)`.

##### Timestamps

Timestamps SHOULD NOT use exponential float rendering if nanosecond precision is needed, as rendering of a float64 does not have sufficient precision, e.g. `1604676851.123456789`.

#### MetricFamily

There MUST NOT be an explicit separator between MetricFamilies. The next MetricFamily MUST be signalled with either metadata or a new sample metric name which cannot be part of the previous MetricFamily.

MetricFamilies MUST NOT be interleaved.

##### MetricFamily metadata

There are four pieces of metadata: The MetricFamily name, TYPE, UNIT and HELP. An example of the metadata for a counter Metric called foo is:

```
# TYPE foo counter
```

If no TYPE is exposed, the MetricFamily MUST be of type Unknown.

If a unit is specified it MUST be provided in a UNIT metadata line. In addition, an underscore and the unit SHOULD be the suffix of the MetricFamily name.

A valid example for a foo\_seconds metric with a unit of "seconds":

```
# TYPE foo_seconds counter
# UNIT foo_seconds seconds
```

A valid, but discouraged, example where the unit is not a suffix on the name:

```
# TYPE foo counter
# UNIT foo seconds
```

It is also valid to have:

```
# TYPE foo_seconds counter
```

If the unit is known, it SHOULD be provided.

The value of a UNIT or HELP line MAY be empty. This MUST be treated as if no metadata line for the MetricFamily existed.

```
# TYPE foo_seconds counter
# UNIT foo_seconds seconds
# HELP foo_seconds Some text and \n some \" escaping
```

There MUST NOT be more than one of each type of metadata line for a MetricFamily. The ordering SHOULD be TYPE, UNIT, HELP.

Aside from this metadata and the EOF line at the end of the message, you MUST NOT expose lines beginning with a #.

##### Metric

Metrics MUST NOT be interleaved. See the example in "Text format -> MetricPoint".

##### Labels

A sample without labels or a timestamp and with the value 0 MUST be rendered either like:

```
bar_seconds_count 0
```

or like:

```
bar_seconds_count{} 0
```

Label values MAY be any valid UTF-8 value, so escaping MUST be applied as per the ABNF. A valid example with two labels:

```
bar_seconds_count{a="x",b="escaping\" example \n "} 0
```

The rendering of values for a MetricPoint can include additional labels (e.g. the "le" label for a Histogram type), which MUST be rendered in the same way as a Metric's own LabelSet.

#### MetricPoint

MetricPoints MUST NOT be interleaved.
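The label rendering and escaping rules above can be illustrated with a non-normative Python sketch (function names are illustrative):

```python
def escape_label_value(value):
    """Apply the three escapes the text format requires in label
    values: backslash, double quote, and line feed. The backslash
    must be escaped first so the other escapes are not mangled."""
    return (value.replace("\\", "\\\\")
                 .replace('"', '\\"')
                 .replace("\n", "\\n"))

def render_sample(name, labels, value):
    """Minimal sample line renderer (no timestamp or exemplar)."""
    if labels:
        body = ",".join('%s="%s"' % (k, escape_label_value(v))
                        for k, v in labels.items())
        return "%s{%s} %s" % (name, body, value)
    return "%s %s" % (name, value)

print(render_sample("bar_seconds_count",
                    {"a": "x", "b": 'escaping" example \n '}, 0))
# bar_seconds_count{a="x",b="escaping\" example \n "} 0
```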
A correct example where there were multiple MetricPoints and Samples within a MetricFamily would be:

```
# TYPE foo_seconds summary
# UNIT foo_seconds seconds
foo_seconds_count{a="bb"} 0 123
foo_seconds_sum{a="bb"} 0 123
foo_seconds_count{a="bb"} 0 456
foo_seconds_sum{a="bb"} 0 456
foo_seconds_count{a="ccc"} 0 123
foo_seconds_sum{a="ccc"} 0 123
foo_seconds_count{a="ccc"} 0 456
foo_seconds_sum{a="ccc"} 0 456
```

An incorrect example where Metrics are interleaved:

```
# TYPE foo_seconds summary
# UNIT foo_seconds seconds
foo_seconds_count{a="bb"} 0 123
foo_seconds_count{a="ccc"} 0 123
foo_seconds_count{a="bb"} 0 456
foo_seconds_count{a="ccc"} 0 456
```

An incorrect example where MetricPoints are interleaved:

```
# TYPE foo_seconds summary
# UNIT foo_seconds seconds
foo_seconds_count{a="bb"} 0 123
foo_seconds_count{a="bb"} 0 456
foo_seconds_sum{a="bb"} 0 123
foo_seconds_sum{a="bb"} 0 456
```

#### Metric types

##### Gauge

The Sample MetricName for the value of a MetricPoint for a MetricFamily of type Gauge MUST NOT have a suffix.
An example MetricFamily with a Metric with no labels and a MetricPoint with no timestamp:

```openmetrics-add-eof
# TYPE foo gauge
foo 17.0
```

An example of a MetricFamily with two Metrics with a label and MetricPoints with no timestamp:

```openmetrics-add-eof
# TYPE foo gauge
foo{a="bb"} 17.0
foo{a="ccc"} 17.0
```

An example of a MetricFamily with no Metrics:

```openmetrics-add-eof
# TYPE foo gauge
```

An example with a Metric with a label and a MetricPoint with a timestamp:

```openmetrics-add-eof
# TYPE foo gauge
foo{a="b"} 17.0 1520879607.789
```

An example with a Metric with no labels and a MetricPoint with a timestamp:

```openmetrics-add-eof
# TYPE foo gauge
foo 17.0 1520879607.789
```

An example with a Metric with no labels and two MetricPoints with timestamps:

```openmetrics-add-eof
# TYPE foo gauge
foo 17.0 123
foo 18.0 456
```

##### Counter

The MetricPoint's Total Value Sample MetricName SHOULD have the suffix `_total`. If present, the MetricPoint's Created Value Sample MetricName MUST have the suffix `_created`.

Be aware that
exposing metrics without `_total` being a suffix of the MetricFamily name directly to end-users may reduce the usability due to confusion about what the metric's type is.

An example with a Metric with no labels, and a MetricPoint with no timestamp and no created:

```openmetrics-add-eof
# TYPE foo counter
foo_total 17.0
```

An example with a Metric with no labels, and a MetricPoint with a timestamp and no created:

```openmetrics-add-eof
# TYPE foo counter
foo_total 17.0 1520879607.789
```

An example with a Metric with no labels, and a MetricPoint with no timestamp and a created:

```openmetrics-add-eof
# TYPE foo counter
foo_total 17.0
foo_created 1520430000.123
```

An example with a Metric with no labels, and a MetricPoint with a timestamp and a created:

```openmetrics-add-eof
# TYPE foo counter
foo_total 17.0 1520879607.789
foo_created 1520430000.123 1520879607.789
```

An example with a Metric with no labels, and a MetricPoint without the `_total` suffix and with a timestamp and a created:

```openmetrics-add-eof
# TYPE foo counter
foo 17.0 1520879607.789
foo_created 1520430000.123 1520879607.789
```

Exemplars MAY be attached to the MetricPoint's Total sample.

##### StateSet

The Sample MetricName for the value of a MetricPoint for a MetricFamily of type StateSet MUST NOT have a suffix.

StateSets MUST have one sample per State in the MetricPoint. Each State's sample MUST have a label with the MetricFamily name as the label name and the State name as the label value. The State sample's value MUST be 1 if the State is true and MUST be 0 if the State is false.
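The StateSet rendering rules above can be illustrated with a non-normative Python sketch (the function name is illustrative):

```python
def render_stateset(name, states, labels=None):
    """One sample per State: the label name is the MetricFamily
    name, the label value is the State name, and the sample value
    is 1 if the State is true, otherwise 0."""
    lines = []
    base = labels or {}
    for state, enabled in states.items():
        parts = ['%s="%s"' % (k, v) for k, v in base.items()]
        parts.append('%s="%s"' % (name, state))
        lines.append("%s{%s} %d" % (name, ",".join(parts),
                                    1 if enabled else 0))
    return lines

for line in render_stateset("foo", {"a": False, "bb": True, "ccc": False}):
    print(line)
# foo{foo="a"} 0
# foo{foo="bb"} 1
# foo{foo="ccc"} 0
```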
An example with the states "a", "bb", and "ccc", in which only the state "bb" is enabled and the metric name is foo:

```openmetrics-add-eof
# TYPE foo stateset
foo{foo="a"} 0
foo{foo="bb"} 1
foo{foo="ccc"} 0
```

An example of an "entity" label on the Metric:

```openmetrics-add-eof
# TYPE foo stateset
foo{entity="controller",foo="a"} 1.0
foo{entity="controller",foo="bb"} 0.0
foo{entity="controller",foo="ccc"} 0.0
foo{entity="replica",foo="a"} 1.0
foo{entity="replica",foo="bb"} 0.0
foo{entity="replica",foo="ccc"} 1.0
```

##### Info

The Sample MetricName for the value of a MetricPoint for a MetricFamily of type Info MUST have the suffix `_info`. The Sample value MUST always be 1.

An example of a Metric with no labels, and one MetricPoint value with "name" and "version" labels:

```openmetrics-add-eof
# TYPE foo info
foo_info{name="pretty name",version="8.2.7"} 1
```

An example of a Metric with label "entity" and one MetricPoint value with "name" and "version" labels:

```openmetrics-add-eof
# TYPE foo info
foo_info{entity="controller",name="pretty name",version="8.2.7"} 1.0
foo_info{entity="replica",name="prettier name",version="8.1.9"} 1.0
```

Metric labels and MetricPoint value labels MAY be in any order.

##### Summary

If present, the MetricPoint's Sum Value Sample MetricName MUST have the suffix `_sum`. If present, the MetricPoint's Count Value Sample MetricName MUST have the suffix `_count`. If present, the MetricPoint's Created Value Sample MetricName MUST have the suffix `_created`.

If present, the MetricPoint's Quantile Values MUST specify the quantile measured using a label with a label name of "quantile" and with a label value of the quantile measured.
An example of a Metric with no labels and a MetricPoint with Sum, Count and Created values:

```openmetrics-add-eof
# TYPE foo summary
foo_count 17.0
foo_sum 324789.3
foo_created 1520430000.123
```

An example of a Metric with no labels and a MetricPoint with two quantiles:

```openmetrics-add-eof
# TYPE foo summary
foo{quantile="0.95"} 123.7
foo{quantile="0.99"} 150.0
```

Quantiles MAY be in any order.

##### Histogram

The MetricPoint's Bucket Values Sample MetricNames MUST have the
suffix `_bucket`. If present, the MetricPoint's Sum Value Sample MetricName MUST have the suffix `_sum`. If present, the MetricPoint's Created Value Sample MetricName MUST have the suffix `_created`. If and only if a Sum Value is present in a MetricPoint, then the MetricPoint's +Inf Bucket value MUST also appear in a Sample with a MetricName with the suffix `_count`.

Buckets MUST be sorted in increasing numerical order of "le", and the value of the "le" label MUST follow the rules for Canonical Numbers.

An example of a Metric with no labels and a MetricPoint with Sum, Count, and Created values, and with 12 buckets. A wide and atypical but valid variety of "le" values is shown on purpose:

```openmetrics-add-eof
# TYPE foo histogram
foo_bucket{le="0.0"} 0
foo_bucket{le="1e-05"} 0
foo_bucket{le="0.0001"} 5
foo_bucket{le="0.1"} 8
foo_bucket{le="1.0"} 10
foo_bucket{le="10.0"} 11
foo_bucket{le="100000.0"} 11
foo_bucket{le="1e+06"} 15
foo_bucket{le="1e+23"} 16
foo_bucket{le="1.1e+23"} 17
foo_bucket{le="+Inf"} 17
foo_count 17
foo_sum 324789.3
foo_created 1520430000.123
```

###### Exemplars

Exemplars without Labels MUST represent an empty LabelSet as {}.

An example of Exemplars showcasing several valid cases: The "0.01" bucket has no Exemplar. The 0.1 bucket has an Exemplar with no Labels. The 1 bucket has an Exemplar with one Label. The 10 bucket has an Exemplar with a Label and a timestamp. In practice all buckets SHOULD have the same style of Exemplars.
```openmetrics-add-eof
# TYPE foo histogram
foo_bucket{le="0.01"} 0
foo_bucket{le="0.1"} 8 # {} 0.054
foo_bucket{le="1"} 11 # {trace_id="KOO5S4vxi0o"} 0.67
foo_bucket{le="10"} 17 # {trace_id="oHg5SJYRHA0"} 9.8 1520879607.789
foo_bucket{le="+Inf"} 17
foo_count 17
foo_sum 324789.3
foo_created 1520430000.123
```

##### GaugeHistogram

The MetricPoint's Bucket Values Sample MetricNames MUST have the suffix `_bucket`. If present, the MetricPoint's Sum Value Sample MetricName MUST have the suffix `_gsum`. If and only if a Sum Value is present in a MetricPoint, then the MetricPoint's +Inf Bucket value MUST also appear in a Sample with a MetricName with the suffix `_gcount`.

Buckets MUST be sorted in increasing numerical order of "le", and the value of the "le" label MUST follow the rules for Canonical Numbers.

An example of a Metric with no labels, and one MetricPoint value with no Exemplars in the buckets:

```openmetrics-add-eof
# TYPE foo gaugehistogram
foo_bucket{le="0.01"} 20.0
foo_bucket{le="0.1"} 25.0
foo_bucket{le="1"} 34.0
foo_bucket{le="10"} 34.0
foo_bucket{le="+Inf"} 42.0
foo_gcount 42.0
foo_gsum 3289.3
```

##### Unknown

The Sample MetricName for the value of a MetricPoint for a MetricFamily of type Unknown MUST NOT have a suffix.

An example with a Metric with no labels and a MetricPoint with no timestamp:

```openmetrics-add-eof
# TYPE foo unknown
foo 42.23
```

### Protobuf format

#### Overall Structure

Protobuf messages MUST be encoded in binary and MUST have `application/openmetrics-protobuf; version=1.0.0` as their content type.

All payloads MUST be a single binary encoded MetricSet message, as defined by the OpenMetrics protobuf schema.

##### Version

The protobuf format MUST follow the proto3 version of the protocol buffer language.

##### Strings

All string fields MUST be UTF-8 encoded.
##### Timestamps

Timestamp representations in the OpenMetrics protobuf schema MUST follow the published google.protobuf.Timestamp [timestamp] message. The timestamp message MUST be in Unix epoch seconds as an int64 and a non-negative fraction of a second at nanosecond resolution as an int32 that counts forward from the seconds timestamp component. The fraction MUST be within 0 to 999,999,999 inclusive.

#### Protobuf schema

The protobuf schema is currently available [here](https://github.com/prometheus/OpenMetrics/blob/3bb328ab04d26b25ac548d851619f90d15090e5d/proto/openmetrics_data_model.proto).

> NOTE: Prometheus
and its ecosystem do not support the OpenMetrics protobuf schema; instead they use the similar `io.prometheus.client` [format](https://github.com/prometheus/client_model/blob/master/io/prometheus/client/metrics.proto). Discussions about the future of the protobuf schema in OpenMetrics 2.0 [are in progress](https://github.com/prometheus/OpenMetrics/issues/296).

## Design Considerations

### Scope

OpenMetrics is intended to provide telemetry for online systems. It runs over protocols which do not provide hard or soft real time guarantees, so it cannot make any real time guarantees itself. Latency and jitter properties of OpenMetrics are as imprecise as the underlying network, operating systems, CPUs, and the like. It is sufficiently accurate for aggregations to be used as a basis for decision-making, but not to reflect individual events.

Systems of all sizes should be supported, from applications that receive a few requests an hour up to monitoring bandwidth usage on a 400Gb network port. Aggregation and analysis of transmitted telemetry should be possible over arbitrary time periods.

It is intended to transport snapshots of state at the time of data transmission at a regular cadence.

#### Out of scope

How ingestors discover which exposers exist, and vice versa, is out of scope for, and thus not defined in, this standard.
### Extensions and Improvements

This first version of OpenMetrics is based upon the well-established and de facto standard Prometheus text format 0.0.4, deliberately without adding major syntactic or semantic extensions, or optimisations, on top of it. For example, no attempt has been made to make the text representation of Histogram buckets more compact, relying instead on compression in the underlying stack to deal with their repetitive nature. This is a deliberate choice, so that the standard can take advantage of the adoption and momentum of the existing user base. This ensures a relatively easy transition from the Prometheus text format 0.0.4. It also ensures that there is a basic standard which is easy to implement. This can be built upon in future versions of the standard. The intention is that future versions of the standard will always require support for this 1.0 version, both syntactically and semantically.

We want to allow monitoring systems to get usable information from an OpenMetrics exposition without undue burden. If one were to strip away all metadata and structure and just look at an OpenMetrics exposition as an unordered set of samples, it should be usable on its own. As such, there are also no opaque binary types, such as sketches or t-digests, which could not be expressed as a mix of gauges and counters, as they would require custom parsing and handling.

This principle is applied consistently throughout the standard. For example, a MetricFamily's unit is duplicated in the name so that the unit is available for systems that don't understand the unit metadata. The "le" label is a normal label value, rather than getting its own special syntax, so that ingestors don't have to add special histogram handling code to ingest them. As a further example, there are no composite data types: there is no geolocation type for latitude/longitude, as this can be done with separate gauge metrics.
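The "no composite types" principle above can be illustrated by exposing a location as two independent gauge samples rather than one opaque geolocation value. A minimal sketch in the text format; the metric names are hypothetical, not part of this standard:

```python
def render_gauge(name: str, value: float) -> str:
    """Render one unlabelled gauge MetricFamily in the text format:
    a TYPE metadata line followed by a single sample line."""
    return f"# TYPE {name} gauge\n{name} {value}\n"

# Instead of a composite geolocation type, expose two plain gauges
# that any ingestor can store and aggregate without custom parsing.
exposition = (
    render_gauge("acme_site_latitude_degrees", 52.52)
    + render_gauge("acme_site_longitude_degrees", 13.405)
    + "# EOF\n"  # an OpenMetrics text exposition ends with an EOF marker
)
```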
https://github.com/prometheus/docs/blob/main//docs/specs/om/open_metrics_spec_1_1.md
### Units and Base Units

For consistency across systems and to avoid confusion, units are largely based on SI base units. Base units include seconds, bytes, joules, grams, meters, ratios, volts, amperes, and celsius. Units should be provided where they are applicable. For example, with all duration metrics in seconds, there is no risk of having to guess whether a given metric is in nanoseconds, microseconds, milliseconds, seconds, minutes, hours, days, or weeks, nor of having to deal with mixed units. By choosing unprefixed units, we avoid situations like ones in which kilomilliseconds were the result of emergent behaviour of complex systems. As values can be floating point, sub-base-unit precision is built into the standard.

Similarly, mixing bits and bytes is confusing, so bytes are chosen as the base. While Kelvin is a better base unit in theory, in practice most existing hardware exposes Celsius. Kilograms are the SI base unit, however the kilo prefix is problematic, so grams are chosen as the base unit. While base units SHOULD be used in all possible cases, Kelvin is a well-established unit which MAY be used instead of Celsius for use cases such as color or black body temperatures, where a comparison between a Celsius and a Kelvin metric is unlikely.

Ratios are the base unit, not percentages. Where possible, raw data in the form of gauges or counters for the given numerator and denominator should be exposed. This has better mathematical properties for analysis and aggregation in the ingestors.

Decibels are not a base unit as, firstly, deci is an SI prefix and, secondly, bels are logarithmic. To expose signal/energy/power ratios, exposing the ratio directly would be better, or better still the raw power/energy if possible. Floating point exponents are more than sufficient to cover even extreme scientific uses.
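The preference for unprefixed base units can be sketched as a normalisation step applied before exposition. The conversion table below is illustrative, not exhaustive, and the function name is hypothetical:

```python
# Factors to convert common non-base units into OpenMetrics base units.
TO_BASE_UNIT = {
    "milliseconds": ("seconds", 1e-3),
    "microseconds": ("seconds", 1e-6),
    "percent": ("ratio", 1e-2),
    "bits": ("bytes", 1 / 8),
    "kilograms": ("grams", 1e3),
}

def normalise(value: float, unit: str) -> tuple[float, str]:
    """Return (value, unit) converted to the base unit where known;
    unknown units pass through unchanged."""
    base, factor = TO_BASE_UNIT.get(unit, (unit, 1.0))
    return value * factor, base

# 250 ms becomes 0.25 s: floating point values keep
# sub-base-unit precision, so nothing is lost in the conversion.
```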
An electron volt (~1e-19 J) all the way up to the energy emitted by a supernova (~1e44 J) is 63 orders of magnitude, and a 64-bit floating point number can cover over 2000 orders of magnitude.

If non-base units cannot be avoided and conversion is not feasible, the actual unit should still be included in the metric name for clarity. For example, joule is the base unit for both energy and power, as watts can be expressed as a counter with a joule unit. In practice a given 3rd party system may only expose watts, so a gauge expressed in watts would be the only realistic choice in that case.

Not all MetricFamilies have units. For example, a count of HTTP requests wouldn't have a unit. Technically the unit would be HTTP requests, but in that sense the entire MetricFamily name is the unit. Going to that extreme would not be useful. The possibility of having good axes on graphs in downstream systems for human consumption should always be kept in mind.

### Statelessness

The wire format defined by OpenMetrics is stateless across expositions. What information has been exposed before MUST have no impact on future expositions. Each exposition is a self-contained snapshot of the current state of the exposer. The same self-contained exposition MUST be provided to existing and new ingestors.

A core design choice is that exposers MUST NOT exclude a metric merely because it has had no recent changes or observations. An exposer must not make any assumptions about how often ingestors are consuming expositions.

### Exposition Across Time and Metric Evolution

Metrics are most useful when their evolution over time can be analysed, so accordingly expositions must make sense over time. Thus, it is not sufficient for one single exposition on its own to be useful and valid.
Some changes to metric semantics can also break downstream users. Parsers commonly optimize by caching previous results, so changing the order in which labels are exposed across expositions SHOULD be avoided even though it is technically not breaking. This also tends to make writing unit tests for exposition easier.

Metrics and samples SHOULD NOT appear and disappear from exposition to exposition; for example, a counter is only useful if it has history. In principle, a given Metric should be present in the exposition from when the process starts until the process terminates. It is often not possible to know in advance what Metrics a MetricFamily will have over the lifetime of a given process (e.g. a label value of a latency histogram is an HTTP path, which is provided by an end user at runtime), but once a counter-like Metric is exposed it should continue to be exposed until the process terminates. That a counter is not getting increments doesn't invalidate that it still has its current value. There are cases where it may make sense to stop exposing a given Metric; see the section on Missing Data.

In general, changing a MetricFamily's type, or adding or removing a label from its Metrics, will be breaking to ingestors. A notable exception is that adding a label to the value of an Info MetricPoint is not breaking. This is so that you can add additional information to an existing Info MetricFamily where it makes sense, rather than being forced to create a brand new info metric with an additional label value. Ingestor systems should ensure that they are resilient to such additions. Changing a MetricFamily's Help is not breaking.
For values where it is possible, switching between floats and ints is not breaking. Adding a new state to a stateset is not breaking. Adding unit metadata where it doesn't change the metric name is not breaking.

Histogram buckets SHOULD NOT change from exposition to exposition, as this is likely to both cause performance issues and break ingestors. Similarly, all expositions from any consistent binary and environment of an application SHOULD have the same buckets for a given Histogram MetricFamily, so that they can be aggregated by all ingestors without ingestors having to implement histogram merging logic for heterogeneous buckets. An exception might be occasional manual changes to buckets, which are considered breaking, but may be a valid tradeoff when performance characteristics change due to a new software release.

Even if changes are not technically breaking, they still carry a cost. For example, frequent changes may cause performance issues for ingestors. A Help string that varies from exposition to exposition may cause each Help value to be stored. Frequently switching between int and float values could prevent efficient compression.

### NaN

NaN is a number like any other in OpenMetrics, usually resulting from a division by zero, such as for a summary quantile if there have been no observations recently. NaN does not have any special meaning in OpenMetrics, and in particular MUST NOT be used as a marker for missing or otherwise bad data.

### Missing Data

There are valid cases when data stops being present. For example, a filesystem can be unmounted and thus its Gauge Metric for free disk space no longer exists. There is no
special marker or signal for this situation. Subsequent expositions simply do not include this Metric.

### Exposition Performance

Metrics are only useful if they can be collected in reasonable time frames. Metrics that take minutes to expose are not considered useful. As a rule of thumb, exposition SHOULD take no more than a second. Metrics from legacy systems serialized through OpenMetrics may take longer; for this reason, no hard performance assumptions can be made.

Exposition SHOULD be of the most recent state. For example, a thread serving the exposition request SHOULD NOT rely on cached values, to the extent it is able to bypass any such caching.

### Concurrency

For high availability and ad-hoc access, a common approach is to have multiple ingestors. To support this, concurrent expositions MUST be supported. All best current practices for concurrent systems SHOULD be followed; common pitfalls include deadlocks, race conditions, and overly coarse-grained locking preventing expositions from progressing concurrently.

### Metric Naming and Namespaces

We aim for a balance between understandability, avoiding clashes, and succinctness in the naming of metrics and label names. Names are separated with underscores, so metric names end up being in "snake_case". To take an example, "http_request_seconds" is succinct but would clash between large numbers of applications, and it's also unclear exactly what this metric is measuring. For example, it might be measured before or after auth middleware in a complex system. Metric names should indicate what piece of code they come from.
So a company called A Company Manufacturing Everything might prefix all metrics in their code with "acme_", and if they had an HTTP router library measuring latency it might have a metric such as "acme_http_router_request_seconds" with a Help string indicating that it is the overall latency.

It is not the aim to prevent all potential clashes across all applications, as that would require heavy-handed solutions such as a global registry of metric namespaces or very long namespaces based on DNS. Rather, the aim is to keep to a lightweight informal approach, so that for a given application it is very unlikely that there is a clash across its constituent libraries. Across a given deployment of a monitoring system as a whole, the aim is that clashes where the same metric name means different things are uncommon.

For example, acme_http_router_request_seconds might end up in hundreds of different applications developed by A Company Manufacturing Everything, which is normal. If Another Corporation Making Entities also used the metric name acme_http_router_request_seconds in their HTTP router, that's also fine. If applications from both companies were being monitored by the same monitoring system, the clash is undesirable but acceptable, as no application is trying to expose both names and no one target is trying to (incorrectly) expose the same metric name twice. If an application wished to contain both companies' HTTP router libraries, that would be a problem, and one of the metric names would need to be changed somehow.

As a corollary, the more public a library is, the better namespaced its metric names should be to reduce the risk of such scenarios arising. acme_ is not a bad choice for internal use within a company, but these companies might for example choose the prefixes acmeverything_ or acorpme_ for code shared outside their company. After namespacing by company
or organisation, namespacing and naming should continue by library/subsystem/application, fractally as needed, such as the http_router library above. The goal is that if you are familiar with the overall structure of a codebase, you could make a good guess at where the instrumentation for a given metric is, given its metric name. For a common, very well known existing piece of software, the name of the software itself may be sufficiently distinguishing. For example, bind_ is probably sufficient for the DNS software, even though isc_bind_ would be the more usual naming.

Metric names prefixed by scrape_ are used by ingestors to attach information related to individual expositions, so they should not be exposed by applications directly. Metrics that have already been consumed and passed through a general purpose monitoring system may include such metric names on subsequent expositions. If an exposer wishes to provide information about an individual exposition, a metric prefix such as myexposer_scrape_ may be used. A common example is a gauge myexposer_scrape_duration_seconds for how long that exposition took from the exposer's standpoint.

Within the Prometheus ecosystem a set of per-process metrics has emerged that are consistent across all implementations, prefixed with process_. For example, for open file ulimits the MetricFamilies process_open_fds and process_max_fds provide both the current and maximum value as gauges. (These names are legacy; if such metrics were defined today they would more likely be called process_fds_open and process_fds_limit.)
In general it is very challenging to get names with identical semantics like this, which is why different instrumentation should use different names.

Avoid redundancy in metric names. Avoid substrings like "metric", "timer", "stats", "counter", "total", "float64", and so on: by virtue of being a metric with a given type (and possibly unit) exposed via OpenMetrics, information like this is already implied and should not be included explicitly. You should not include label names of a metric in the metric name for the same reasons; in addition, subsequent aggregation of the metric by a monitoring system could make such information incorrect.

Avoid including implementation details from other layers of your monitoring system in the metric names contained in your instrumentation. For example, a MetricFamily name should not contain the string "openmetrics" merely because it happens to be currently exposed via OpenMetrics somewhere, or "prometheus" merely because your current monitoring system is Prometheus.

### Label Namespacing

For label names no explicit namespacing by company or library is recommended; namespacing from the metric name is sufficient for this when considered against the length increase of the label name. However, some minimal care to avoid common clashes is recommended. There are label names such as region, zone, cluster, availability_zone, az, datacenter, dc, owner, customer, stage, service, team, job, instance, environment, and env which are highly likely to clash with labels used to identify targets which a general purpose monitoring system may add. Try to avoid them; adding minimal namespacing may be appropriate in these cases. The label name "type" is highly generic and should be avoided. For example, for HTTP-related metrics "method" would be a better label name if you were distinguishing between GET, POST, and PUT requests.

While there is metadata about metric names such as HELP, TYPE and UNIT, there is no metadata for label names.
This is because it would bloat the format for little
gain. Out-of-band documentation is one way for exposers to present this to their ingestors.

### Metric Names versus Labels

There are situations in which both using multiple Metrics within a MetricFamily and using multiple MetricFamilies seem to make sense. Summing or averaging a MetricFamily should be meaningful, even if it's not always useful. For example, mixing voltage and fan speed is not meaningful.

As a reminder, OpenMetrics is built with the assumption that ingestors can process and perform aggregations on data. Exposing a total sum alongside other metrics is wrong, as this would result in double-counting upon aggregation in downstream ingestors:

```
wrong_metric{label="a"} 1
wrong_metric{label="b"} 6
wrong_metric{label="total"} 7
```

Labels of a Metric should be kept to the minimum needed to ensure uniqueness, as every extra label is one more that users need to consider when determining what Labels to work with downstream. Labels which could be applied to many MetricFamilies are candidates for being moved into _info metrics, similar to database normalization. If virtually all users of a Metric could be expected to want the additional label, it may be a better trade-off to add it to all MetricFamilies. For example, if you had a MetricFamily relating to different SQL statements where uniqueness was provided by a label containing a hash of the full SQL statement, it would be okay to have another label with the first 500 characters of the SQL statement for human readability.

Experience has shown that downstream ingestors find it easier to work with separate total and failure MetricFamilies rather than using {result="success"} and {result="failure"} Labels within one MetricFamily.
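The guidance above — separate total and failure MetricFamilies, and no pre-aggregated "total" sample mixed in with labelled children — can be sketched as follows. The metric names and values are illustrative only:

```python
# Exposer side: two independent counters. No aggregate sample is
# mixed in with labelled children, so downstream sums stay correct.
requests_total = 120.0          # all requests seen
request_failures_total = 6.0    # failed requests only

exposition = (
    "# TYPE acme_http_requests counter\n"
    f"acme_http_requests_total {requests_total}\n"
    "# TYPE acme_http_request_failures counter\n"
    f"acme_http_request_failures_total {request_failures_total}\n"
)

# Ingestor side: derived values such as the failure ratio are
# computed downstream, never exposed by the application itself.
failure_ratio = request_failures_total / requests_total
```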
Also, it is usually better to expose separate read & write and send & receive MetricFamilies, as full duplex systems are common and downstream ingestors are more likely to care about those values separately than in aggregate.

All of this is not as easy as it may sound. It's an area where experience and engineering trade-offs by domain-specific experts in both exposition and the exposed system are required to find a good balance.

### Metric and Label Name Characters

OpenMetrics builds on the existing widely adopted Prometheus text exposition format and the ecosystem which formed around it. Backwards compatibility is a core design goal. Expanding or contracting the set of characters that are supported by the Prometheus text format would work against that goal. Breaking backwards compatibility would have wider implications than just the wire format. In particular, the query languages created or adopted to work with data transmitted within the Prometheus ecosystem rely on these precise character sets. Label values support full UTF-8, so the format can represent multi-lingual metrics.

### Types of Metadata

Metadata can come from different sources. Over the years, two main sources have emerged. While they are often functionally the same, it helps in understanding to talk about their conceptual differences.

"Target metadata" is metadata commonly external to an exposer. Common examples would be data coming from service discovery, a CMDB, or similar, like information about a datacenter region, whether a service is part of a particular deployment, or production versus testing. This can be achieved by either the exposer or the ingestor adding labels to all Metrics that capture this metadata. Doing this through the ingestor is preferred, as it is more flexible and carries less overhead. On flexibility, the hardware maintenance team might care about
which server rack a machine is located in, whereas the database team using that same machine might care that it contains replica number 2 of the production database. On overhead, hardcoding or configuring this information needs an additional distribution path.

"Exposer metadata" is metadata coming from within an exposer. Common examples would be software version, compiler version, or Git commit SHA.

#### Supporting Target Metadata in both Push-based and Pull-based Systems

In push-based consumption, it is typical for the exposer to provide the relevant target metadata to the ingestor. In pull-based consumption the push-based approach could be taken, but more typically the ingestor already knows the metadata of the target a priori, such as from a machine database or service discovery system, and associates it with the metrics as it consumes the exposition.

OpenMetrics is stateless and provides the same exposition to all ingestors, which is in conflict with the push-style approach. In addition, the push-style approach would break pull-style ingestors, as unwanted metadata would be exposed. One approach would be for push-style ingestors to provide target metadata based on operator configuration out-of-band, for example as an HTTP header. While this would transport target metadata for push-style ingestors, and is not precluded by this standard, it has the disadvantage that even though pull-style ingestors should use their own target metadata, it is still often useful to have access to the metadata the exposer itself is aware of. The preferred solution is to provide this target metadata as part of the exposition, but in a way that does not impact the exposition as a whole.
Info MetricFamilies are designed for this. An exposer may include an Info MetricFamily called "target" with a single Metric with no labels carrying the metadata. An example in the text format might be:

```
# TYPE target info
# HELP target Target metadata
target_info{env="prod",hostname="myhost",datacenter="sdc",region="europe",owner="frontend"} 1
```

When an exposer is providing this metric for this purpose it SHOULD be first in the exposition. This is for efficiency, so that ingestors relying on it for target metadata don't have to buffer up the rest of the exposition before applying business logic based on its content.

Exposers MUST NOT add target metadata labels to all Metrics of an exposition, unless explicitly configured to do so for a specific ingestor. Exposers MUST NOT prefix MetricFamily names or otherwise vary MetricFamily names based on target metadata. Generally, the same Label should not appear on every Metric of an exposition, but there are rare cases where this can be the result of emergent behaviour. Similarly, all MetricFamily names from an exposer may happen to share a prefix in very small expositions. For example, an application written in the Go language by A Company Manufacturing Everything would likely include metrics with prefixes of acme_, go_, process_, and metric prefixes from any 3rd party libraries in use.

Exposers can expose exposer metadata as Info MetricFamilies.

The above discussion is in the context of individual exposers. An exposition from a general purpose monitoring system may contain metrics from many individual targets, and thus may expose multiple target info Metrics. The metrics may already have had target metadata added to them as labels as part of ingestion. The metric names MUST NOT be varied based on target metadata. For example, it would be incorrect for all metrics to end
up being prefixed with staging_ even if they all originated from targets in a staging environment.

### Client Calculations and Derived Metrics

Exposers should leave any math or calculation up to ingestors. A notable exception is the Summary quantile, which is unfortunately required for backwards compatibility. Exposition should be of raw values which are useful over arbitrary time periods. As an example, you should not expose a gauge with the average rate of increase of a counter over the last 5 minutes. Letting the ingestor calculate the increase over the data points they have consumed across expositions has better mathematical properties and is more resilient to scrape failures. Another example is the average event size of a histogram/summary. Exposing the average rate of increase of a counter since an application started, or since a Metric was created, has the problems from the earlier example and it also prevents aggregation.

Standard deviation also falls into this category. Exposing a sum of squares as a counter would be the correct approach. It was not included in this standard as a Histogram value because 64-bit floating point precision is not sufficient for this to work in practice. Due to the squaring, only half the 53-bit mantissa would be available in terms of precision. As an example, a histogram observing 10k events per second would lose precision within 2 hours. Using 64-bit integers would be no better due to the loss of the floating decimal point: a nanosecond-resolution integer typically tracking events of a second in length would overflow after 19 observations. This design decision can be revisited when 128-bit floating point numbers become common.
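The precision arguments above rest on float64's 53-bit significand: integers are exact up to 2^53, after which increments of 1 can be silently rounded away. A quick demonstration:

```python
# Every integer up to 2**53 is exactly representable as a float64,
# so incrementing by 1.0 just below the limit still changes the value...
exact_limit = 2.0 ** 53
assert (exact_limit - 1.0) + 1.0 == exact_limit

# ...but at 2**53 the gap between adjacent floats becomes 2.0, so a
# further increment of 1.0 is rounded away entirely.
lost = exact_limit + 1.0 == exact_limit  # the +1 is lost

# At one million increments per second, reaching 2**53 takes
# 2**53 / 1e6 seconds, i.e. roughly 285 years ("over a century").
years_to_reach = 2 ** 53 / 1e6 / (365.25 * 24 * 3600)
```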
Another example is to avoid exposing a request failure ratio; expose separate counters for failed requests and total requests instead.

### Number Types

For a counter that was incremented a million times per second, it would take over a century to begin to lose precision with a float64, as it has a 53-bit mantissa. Yet a 100 Gbps network interface's octet throughput precision could begin to be lost with a float64 within around 20 hours. While losing 1KB of precision over the course of years for a 100Gbps network interface is unlikely to be a problem in practice, int64s are an option for integral data with such a high throughput. Summary quantiles must be float64, as they are estimates and thus fundamentally inaccurate.

### Exposing Timestamps

One of the core assumptions of OpenMetrics is that exposers expose the most up-to-date snapshot of what they're exposing. While there are limited use cases for attaching timestamps to exposed data, these are very uncommon. Data which had timestamps previously attached, in particular data which has been ingested into a general purpose monitoring system, may carry timestamps. Live or raw data should not carry timestamps. It is valid to expose the same MetricPoint value with the same timestamp across expositions; however, it is invalid to do so if the underlying metric is now missing.

Time synchronization is a hard problem and data should be internally consistent in each system. As such, ingestors should be able to attach the current timestamp from their perspective to data, rather than relying on the system time of the exposer device. With timestamped metrics it is not generally possible to detect the
time when a Metric went missing across expositions. However, with non-timestamped metrics the ingestor can use its own timestamp from the exposition where the Metric is no longer present. All of this is to say that, in general, MetricPoint timestamps should not be exposed, as it should be up to the ingestor to apply their own timestamps to samples they ingest.

#### Tracking When Metrics Last Changed

Presume you had a counter my_counter which was initialized, and then later incremented by 1 at time 123. This would be a correct way to expose it in the text format:

```
# HELP my_counter Good increment example
# TYPE my_counter counter
my_counter_total 1
```

As per the parent section, ingestors should be free to attach their own timestamps, so this would be incorrect:

```
# HELP my_counter Bad increment example
# TYPE my_counter counter
my_counter_total 1 123
```

In case the specific time of the last change of a counter matters, this would be the correct way:

```
# HELP my_counter Good increment example
# TYPE my_counter counter
my_counter_total 1
# HELP my_counter_last_increment_timestamp_seconds When my_counter was last incremented
# TYPE my_counter_last_increment_timestamp_seconds gauge
# UNIT my_counter_last_increment_timestamp_seconds seconds
my_counter_last_increment_timestamp_seconds 123
```

By putting the timestamp of last change into its own Gauge as a value, ingestors are free to attach their own timestamp to both Metrics. Experience has shown that exposing absolute timestamps (epoch is considered absolute here) is more robust than time elapsed, seconds since, or similar. In either case, they would be gauges.
For example:

```
# TYPE my_boot_time_seconds gauge
# HELP my_boot_time_seconds Boot time of the machine
# UNIT my_boot_time_seconds seconds
my_boot_time_seconds 1256060124
```

Is better than:

```
# TYPE my_time_since_boot_seconds gauge
# HELP my_time_since_boot_seconds Time elapsed since machine booted
# UNIT my_time_since_boot_seconds seconds
my_time_since_boot_seconds 123
```

Conversely, there are no best practice restrictions on exemplar timestamps. Keep in mind that due to race conditions or time not being perfectly synced across devices, an exemplar timestamp may appear to be slightly in the future relative to an ingestor's system clock or other metrics from the same exposition. Similarly, it is possible that a "_created" for a MetricPoint could appear to be slightly after an exemplar or sample timestamp for that same MetricPoint. Keep in mind that there are monitoring systems in common use which support everything from nanosecond to second resolution, so having two MetricPoints that have the same timestamp when truncated to second resolution may cause an apparent duplicate in the ingestor. In this case the MetricPoint with the earliest timestamp MUST be used.

### Thresholds

Exposing desired bounds for a system can make sense, but proper care needs to be taken. For values which are universally true, it can make sense to emit Gauge metrics for such thresholds. For example, a data center HVAC system knows the current measurements, the setpoints, and the alert setpoints. It has a globally valid and correct view of the desired system state. As a counter example, some thresholds can change with scale, deployment model, or over time. A certain amount of CPU usage may be acceptable in one setting and undesirable in another. Aggregation of values can further change acceptable values. In such a system, exposing bounds could be counter-productive. For example the maximum size of a queue may be exposed alongside
https://github.com/prometheus/docs/blob/main//docs/specs/om/open_metrics_spec_1_1.md
main
prometheus
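The last-change pattern above can be generated mechanically. A minimal stdlib sketch that renders a counter together with its companion last-increment gauge, leaving sample timestamps off so ingestors can attach their own (function name and signature are illustrative, not part of the spec):

```python
def render_counter_with_last_change(name, help_text, total, last_change_ts):
    """Render an OpenMetrics-style counter plus a companion gauge
    recording when the counter last changed. Sample timestamps are
    deliberately omitted, per the guidance above."""
    ts_name = f"{name}_last_increment_timestamp_seconds"
    lines = [
        f"# HELP {name} {help_text}",
        f"# TYPE {name} counter",
        f"{name}_total {total}",
        f"# HELP {ts_name} When {name} was last incremented",
        f"# TYPE {ts_name} gauge",
        f"# UNIT {ts_name} seconds",
        f"{ts_name} {last_change_ts}",
    ]
    return "\n".join(lines) + "\n"
```

Rendering `render_counter_with_last_change("my_counter", "Good increment example", 1, 123)` reproduces the exposition shown above.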
deployment model, or over time. A certain amount of CPU usage may be acceptable in one setting and undesirable in another. Aggregation of values can further change acceptable values. In such a system, exposing bounds could be counter-productive. For example the maximum size of a queue may be exposed alongside the number of items currently in the queue like:

```
# HELP acme_notifications_queue_capacity The capacity of the notifications queue.
# TYPE acme_notifications_queue_capacity gauge
acme_notifications_queue_capacity 10000
# HELP acme_notifications_queue_length The number of notifications in the queue.
# TYPE acme_notifications_queue_length gauge
acme_notifications_queue_length 42
```

### Size Limits

This standard does not prescribe any particular limits on the number of samples exposed by a single exposition, the number of labels that may be present, the number of states a stateset may have, the number of labels in an info value, or metric name/label name/label value/help character limits. Specific limits run the risk of preventing reasonable use cases; for example, while a given exposition may have an appropriate number of labels, after passing through a general purpose monitoring system a few target labels may have been added that would push it over the limit. Specific limits on numbers such as these would also not capture where the real costs are for general purpose monitoring systems. These guidelines are thus both to aid exposers and ingestors in understanding what is reasonable.

On the other hand, an exposition which is too large in some dimension could cause significant performance problems compared to the benefit of the metrics exposed. Thus some guidelines on the size of any single exposition would be useful. Ingestors may choose to impose limits themselves, in particular to prevent attacks or outages. Still, ingestors need to consider reasonable use cases and try not to disproportionately impact them.
If any single value/metric/exposition exceeds such limits then the whole exposition must be rejected.

In general there are three things which impact the performance of a general purpose monitoring system ingesting time series data: the number of unique time series, the number of samples over time in those series, and the number of unique strings such as metric names, label names, label values, and HELP. Ingestors can control how often they ingest, so that aspect does not need further consideration.

The number of unique time series is roughly equivalent to the number of non-comment lines in the text format. As of 2020, 10 million time series in total is considered a large amount and is commonly the order of magnitude of the upper bound of any single-instance ingestor. Any single exposition should not go above 10k time series without due diligence. One common consideration is horizontal scaling: What happens if you scale your instance count by 1-2 orders of magnitude? Having a thousand top-of-rack switches in a single deployment would have been hard to imagine 30 years ago. If a target was a singleton (e.g. exposing metrics relating to an entire cluster) then several hundred thousand time series may be reasonable. It is not the number of unique MetricFamilies or the cardinality of individual labels/buckets/statesets that matters, it is the total order of magnitude of the time series. 1,000 gauges with one Metric each are as costly as a single gauge with 1,000 Metrics.

If all targets of a particular type are exposing the same set of time series, then each additional target's strings poses no incremental cost to most reasonably modern monitoring systems. If however each target has unique strings, there is such a cost. As an extreme example, a single 10k character metric name used by many targets is on its own very
set of time series, then each additional target's strings poses no incremental cost to most reasonably modern monitoring systems. If however each target has unique strings, there is such a cost. As an extreme example, a single 10k character metric name used by many targets is on its own very unlikely to be a problem in practice. To the contrary, a thousand targets each exposing a unique 36 character UUID is over three times as expensive as that single 10k character metric name in terms of strings to be stored, assuming modern approaches. In addition, if these strings change over time, older strings will still need to be stored for at least some time, incurring extra cost. Assuming the 10 million time series from the last paragraph, 100MB of unique strings per hour might indicate that the use case is more like event logging, not metric time series.

There is a hard 128 UTF-8 character limit on exemplar length, to prevent misuse of the feature for tracing span data and other event logging.

## Security

Implementors MAY choose to offer authentication, authorization, and accounting; if they so choose, this SHOULD be handled outside of OpenMetrics. All exposer implementations SHOULD be able to secure their HTTP traffic with TLS 1.2 or later. If an exposer implementation does not support encryption, operators SHOULD use reverse proxies, firewalling, and/or ACLs where feasible.

Metric exposition should be independent of production services exposed to end users; as such, having a /metrics endpoint on ports like TCP/80, TCP/443, TCP/8080, and TCP/8443 is generally discouraged for publicly exposed services using OpenMetrics.

## IANA

While currently most implementations of the Prometheus exposition format are using non-IANA-registered ports from an informal registry at {{PrometheusPorts}}, OpenMetrics can be found on a well-defined port. The port assigned by IANA for clients exposing data is <9099 requested for historical consistency>.
If more than one metric endpoint needs to be reachable at a common IP address and port, operators might consider using a reverse proxy that communicates with exposers over localhost addresses. To ease multiplexing, endpoints SHOULD carry their own name in their path, i.e. `/node_exporter/metrics`. Expositions SHOULD NOT be combined into one exposition, for the reasons covered under "Supporting target metadata in both push-based and pull-based systems" and to allow for independent ingestion without a single point of failure.

OpenMetrics would like to register two MIME types, `application/openmetrics-text` and `application/openmetrics-proto`.
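The sizing guidance above notes that the number of unique time series is roughly the number of non-comment lines in the text format. A rough stdlib sketch of that estimate (the helper name is illustrative, and this ignores edge cases such as multi-line escaping):

```python
def estimate_series_count(exposition: str) -> int:
    """Approximate the number of unique time series in a text-format
    exposition as the count of non-blank, non-comment lines. HELP, TYPE,
    UNIT and the EOF marker all start with '#', so they are excluded."""
    return sum(
        1
        for line in exposition.splitlines()
        if line.strip() and not line.startswith("#")
    )

sample = """\
# HELP acme_notifications_queue_length The number of notifications in the queue.
# TYPE acme_notifications_queue_length gauge
acme_notifications_queue_length 42
# EOF
"""
```

Applying the guideline, an exposition where this estimate exceeds roughly 10k series warrants the due diligence described above.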
- Version: 1.0
- Status: Published
- Date: April 2023

This document is intended to define and standardise the API, wire format, protocol and semantics of the existing, widely and organically adopted protocol, and not to propose anything new. The remote write specification is intended to document the standard for how Prometheus and Prometheus remote-write-compatible agents send data to a Prometheus or Prometheus remote-write compatible receiver.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).

> NOTE: This specification has a 2.0 version available, see [here](./remote_write_spec_2_0.md).

## Introduction

### Background

The remote write protocol is designed to make it possible to reliably propagate samples in real-time from a sender to a receiver, without loss.

The Remote-Write protocol is designed to be stateless; there is strictly no inter-message communication. As such the protocol is not considered "streaming". To achieve a streaming effect multiple messages should be sent over the same connection using e.g. HTTP/1.1 or HTTP/2. "Fancy" technologies such as gRPC were considered, but at the time were not widely adopted, and it was challenging to expose gRPC services to the internet behind load balancers such as an AWS EC2 ELB.

The remote write protocol contains opportunities for batching, e.g. sending multiple samples for different series in a single request. It is not expected that multiple samples for the same series will be commonly sent in the same request, although there is support for this in the protocol.

The remote write protocol is not intended for use by applications to push metrics to Prometheus remote-write-compatible receivers.
It is intended that a Prometheus remote-write-compatible sender scrapes instrumented applications or exporters and sends remote write messages to a server.

A test suite can be found at https://github.com/prometheus/compliance/tree/main/remotewrite/sender.

### Glossary

For the purposes of this document the following definitions MUST be followed:

- a "Sender" is something that sends Prometheus Remote Write data.
- a "Receiver" is something that receives Prometheus Remote Write data.
- a "Sample" is a pair of (timestamp, value).
- a "Label" is a pair of (key, value).
- a "Series" is a list of samples, identified by a unique set of labels.

## Definitions

### Protocol

The Remote Write Protocol MUST consist of RPCs with the following signature:

```
func Send(WriteRequest)

message WriteRequest {
  repeated TimeSeries timeseries = 1;
  // Cortex uses this field to determine the source of the write request.
  // We reserve it to avoid any compatibility issues.
  reserved 2;
  // Prometheus uses this field to send metadata, but this is
  // omitted from v1 of the spec as it is experimental.
  reserved 3;
}

message TimeSeries {
  repeated Label labels   = 1;
  repeated Sample samples = 2;
}

message Label {
  string name  = 1;
  string value = 2;
}

message Sample {
  double value    = 1;
  int64 timestamp = 2;
}
```

Remote write Senders MUST encode the Write Request in the body of a HTTP POST request and send it to the Receivers via HTTP at a provided URL path. The Receiver MAY specify any HTTP URL path to receive metrics.

Timestamps MUST be int64 counted as milliseconds since the Unix epoch. Values MUST be float64.

The following headers MUST be sent with the HTTP request:

- `Content-Encoding: snappy`
- `Content-Type: application/x-protobuf`
- `User-Agent: <name & version of the sender>`
- `X-Prometheus-Remote-Write-Version: 0.1.0`

Clients MAY allow users to send custom HTTP headers; they MUST NOT allow users to configure them in such a way as to send reserved headers. For more info see https://github.com/prometheus/prometheus/pull/8416.
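To make the schema concrete, the sketch below hand-encodes a minimal `WriteRequest` (one series, two labels, one sample) using the proto3 wire format with only the standard library. In practice a sender would use code generated from the schema, and the body would additionally need Snappy block compression before sending (omitted here, since that requires a third-party library); all helper names are illustrative.

```python
import struct

def varint(n: int) -> bytes:
    """Encode a non-negative integer as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def field(num: int, wire: int, payload: bytes) -> bytes:
    # Field key is (field_number << 3) | wire_type.
    return varint((num << 3) | wire) + payload

def length_delimited(num: int, data: bytes) -> bytes:
    # Wire type 2: length-delimited (strings, embedded messages).
    return field(num, 2, varint(len(data)) + data)

def encode_label(name: str, value: str) -> bytes:
    return (length_delimited(1, name.encode()) +
            length_delimited(2, value.encode()))

def encode_sample(value: float, ts_ms: int) -> bytes:
    # Field 1: double (wire type 1, little-endian fixed64);
    # field 2: int64 timestamp in milliseconds (wire type 0, varint).
    return field(1, 1, struct.pack("<d", value)) + field(2, 0, varint(ts_ms))

def encode_write_request(labels, samples) -> bytes:
    timeseries = b""
    for name, value in labels:
        timeseries += length_delimited(1, encode_label(name, value))
    for value, ts_ms in samples:
        timeseries += length_delimited(2, encode_sample(value, ts_ms))
    # WriteRequest.timeseries is field 1 (repeated TimeSeries).
    return length_delimited(1, timeseries)

body = encode_write_request(
    [("__name__", "http_requests_total"), ("job", "api")],
    [(1027.0, 1600000000000)],
)
```

A real sender would then Snappy-compress `body` and POST it with the headers listed above.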
https://github.com/prometheus/docs/blob/main//docs/specs/prw/remote_write_spec.md
the HTTP request:

- `Content-Encoding: snappy`
- `Content-Type: application/x-protobuf`
- `User-Agent: <name & version of the sender>`
- `X-Prometheus-Remote-Write-Version: 0.1.0`

Clients MAY allow users to send custom HTTP headers; they MUST NOT allow users to configure them in such a way as to send reserved headers. For more info see https://github.com/prometheus/prometheus/pull/8416.

The remote write request in the body of the HTTP POST MUST be compressed with [Google’s Snappy](https://github.com/google/snappy). The block format MUST be used - the framed format MUST NOT be used.

The remote write request MUST be encoded using Google Protobuf 3, and MUST use the schema defined above. Note [the Prometheus implementation](https://github.com/prometheus/prometheus/blob/v2.24.0/prompb/remote.proto) uses [gogoproto optimisations](https://github.com/gogo/protobuf) - for receivers written in languages other than Golang the gogoproto types MAY be substituted for line-level equivalents.

The response body from the remote write receiver SHOULD be empty; clients MUST ignore the response body. The response body is RESERVED for future use.

### Backward and forward compatibility

The protocol follows [semantic versioning 2.0](https://semver.org/): any 1.x compatible receivers MUST be able to read any 1.x compatible sender and so on. Breaking/backwards incompatible changes will result in a 2.x version of the spec.

The proto format itself is forward / backward compatible, in some respects:

- Removing fields from the proto will mean a major version bump.
- Adding (optional) fields will be a minor version bump.

Negotiation:

- Senders MUST send the version number in a header.
- Receivers MAY return the highest version number they support in a response header ("X-Prometheus-Remote-Write-Version").
- Senders who wish to send in a format >1.x MUST start by sending an empty 1.x, and see if the response says the receiver supports something else. The Sender MAY use any supported version.
If there is no version header in the response, senders MUST assume 1.x compatibility only.

### Labels

The complete set of labels MUST be sent with each sample. What's more, the label set associated with samples:

- SHOULD contain a `__name__` label.
- MUST NOT contain repeated label names.
- MUST have label names sorted in lexicographical order.
- MUST NOT contain any empty label names or values.

Senders MUST only send valid metric names, label names, and label values:

- Metric names MUST adhere to the regex `[a-zA-Z_:]([a-zA-Z0-9_:])*`.
- Label names MUST adhere to the regex `[a-zA-Z_]([a-zA-Z0-9_])*`.
- Label values MAY be any sequence of UTF-8 characters.

Receivers MAY impose limits on the number and length of labels, but this will be receiver-specific and is out of scope for this document.

Label names beginning with "__" are RESERVED for system usage and SHOULD NOT be used, see [Prometheus Data Model](https://prometheus.io/docs/concepts/data_model/).

Remote write Receivers MAY ingest valid samples within a write request that otherwise contains invalid samples. Receivers MUST return a HTTP 400 status code ("Bad Request") for write requests that contain any invalid samples. Receivers SHOULD provide a human readable error message in the response body. Senders MUST NOT try and interpret the error message, and SHOULD log it as is.

### Ordering

Prometheus Remote Write compatible senders MUST send samples for any given series in timestamp order. Prometheus Remote Write compatible Senders MAY send multiple requests for different series in parallel.

### Retries & Backoff

Prometheus Remote Write compatible senders MUST retry write requests on HTTP 5xx responses and MUST use a backoff algorithm to prevent overwhelming the server. They MUST NOT retry write requests on HTTP 2xx and 4xx responses other than 429. They MAY retry on HTTP 429 responses, which could result in senders "falling behind" if the server cannot keep up.
This is done to ensure data is not lost when there are server side
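The label validity rules above are mechanically checkable. A sketch using the spec's regexes (helper name is illustrative; note the spec only says names beginning with "__" SHOULD NOT be used, so rejecting them here, apart from `__name__`, is stricter than required):

```python
import re

METRIC_NAME_RE = re.compile(r"[a-zA-Z_:]([a-zA-Z0-9_:])*")
LABEL_NAME_RE = re.compile(r"[a-zA-Z_]([a-zA-Z0-9_])*")

def validate_series_labels(labels) -> bool:
    """Check one series' label set: sorted, unique, non-empty names and
    values, valid characters, and no reserved '__' names other than
    __name__ (which carries the metric name)."""
    names = [name for name, _ in labels]
    if names != sorted(names) or len(names) != len(set(names)):
        return False
    for name, value in labels:
        if not name or not value:
            return False
        if not LABEL_NAME_RE.fullmatch(name):
            return False
        if name == "__name__":
            if not METRIC_NAME_RE.fullmatch(value):
                return False
        elif name.startswith("__"):
            return False  # reserved for system usage; rejected for simplicity
    return True
```

For example, `[("__name__", "http_requests_total"), ("job", "api")]` passes, while an unsorted or empty-valued label set does not.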
server. They MUST NOT retry write requests on HTTP 2xx and 4xx responses other than 429. They MAY retry on HTTP 429 responses, which could result in senders "falling behind" if the server cannot keep up. This is done to ensure data is not lost when there are server side errors, and progress is made when there are client side errors.

Prometheus Remote Write compatible receivers MUST respond with a HTTP 2xx status code when the write is successful. They MUST respond with HTTP status code 5xx when the write fails and SHOULD be retried. They MUST respond with HTTP status code 4xx when the request is invalid, will never be able to succeed and should not be retried.

### Stale Markers

Prometheus Remote Write compatible senders MUST send stale markers when a time series will no longer be appended to.

Stale markers MUST be signalled by the special NaN value 0x7ff0000000000002. This value MUST NOT be used otherwise.

Typically the sender can detect when a time series will no longer be appended to using the following techniques:

1. Detecting, using service discovery, that the target exposing the series has gone away
1. Noticing the target is no longer exposing the time series between successive scrapes
1. Failing to scrape the target that originally exposed a time series
1. Tracking configuration and evaluation for recording and alerting rules

## Out of Scope

This document does not intend to explain all the features required for a fully Prometheus-compatible monitoring system. In particular, the following areas are out of scope for the first version of the spec:

**The "up" metric** The definition and semantics of the "up" metric are beyond the scope of the remote write protocol and should be documented separately.

**HTTP Path** The path for the HTTP handler can be anything - and MUST be provided by the sender. Generally we expect the whole URL to be specified in config.
**Persistence** It is recommended that Prometheus Remote Write compatible senders should persistently buffer sample data in the event of outages in the receiver.

**Authentication & Encryption** As remote write uses HTTP, we consider authentication & encryption to be a transport-layer problem. Senders and receivers should support all the usual suspects (Basic auth, TLS etc.) and are free to add potentially custom authentication options. Support for custom authentication in the Prometheus remote write sender and eventual agent should not be assumed, but we will endeavour to support common and widely used auth protocols, where feasible.

**Remote Read** This is a separate interface that has already seen some iteration, and is less widely used.

**Sharding** The current sharding scheme in Prometheus for remote write parallelisation is very much an implementation detail, and isn’t part of the spec. When senders do implement parallelisation they MUST preserve per-series sample ordering.

**Backfill** The specification does not place a limit on how old series can be pushed, however server/implementation specific constraints may exist.

**Limits** Limits on the number and length of labels, batch sizes etc. are beyond the scope of this document, however it is expected that implementations will impose reasonable limits.

**Push-based Prometheus** Applications pushing metrics to Prometheus Remote Write compatible receivers was not a design goal of this system, and should be explored in a separate doc.

**Labels** Every series MAY include a "job" and/or "instance" label, as these are typically added by service discovery in the Sender. These are not mandatory.

## Future Plans

This section contains speculative plans that are not considered part of the protocol specification, but are mentioned here for completeness.

**Transactionality** Prometheus aims at being "transactional" - i.e. to never expose
"instance" label, as these are typically added by service discovery in the Sender. These are not mandatory. ## Future Plans This section contains speculative plans that are not considered part of protocol specification, but are mentioned here for completeness. \*\*Transactionality\*\* Prometheus aims at being "transactional" - i.e. to never expose a partially scraped target to a query. We intend to do the same with remote write - for instance, in the future we would like to "align" remote write with scrapes, perhaps such that all the samples, metadata and exemplars for a single scrape are sent in a single remote write request. This is yet to be designed. \*\*Metadata\*\* and Exemplars In line with above, we also send metadata (type information, help text) and exemplars along with the scraped samples. We plan to package this up in a single remote write request - future versions of the spec may insist on this. Prometheus currently has experimental support for sending metadata and exemplars. \*\*Optimizations\*\* We would like to investigate various optimizations to reduce message size by eliminating repetition of label names and values. ## Related ### Compatible Senders and Receivers The spec is intended to describe how the following components interact (as of April 2023): - [Prometheus](https://github.com/prometheus/prometheus/tree/master/storage/remote) (as both a "sender" and a "receiver") - [Avalanche](https://github.com/prometheus-community/avalanche) (as a "sender") - A Load Testing Tool Prometheus Metrics. 
- [Cortex](https://github.com/cortexproject/cortex/blob/master/pkg/util/push/push.go#L20) (as a "receiver")
- [Elastic Agent](https://docs.elastic.co/integrations/prometheus#prometheus-server-remote-write) (as a "receiver")
- [Grafana Agent](https://github.com/grafana/agent) (as both a "sender" and a "receiver")
- [GreptimeDB](https://github.com/greptimeTeam/greptimedb) (as a ["receiver"](https://docs.greptime.com/user-guide/ingest-data/for-observerbility/prometheus))
- InfluxData’s Telegraf agent ([as a sender](https://github.com/influxdata/telegraf/tree/master/plugins/serializers/prometheusremotewrite), and [as a receiver](https://github.com/influxdata/telegraf/pull/8967))
- [M3](https://m3db.io/docs/integrations/prometheus/#prometheus-configuration) (as a "receiver")
- [Mimir](https://github.com/grafana/mimir) (as a "receiver")
- [Oodle](https://docs.oodle.ai/integrations/metrics/prometheus/) (as a "receiver")
- [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector-releases/) (as a ["sender"](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/prometheusremotewriteexporter#readme) and eventually as a "receiver")
- [Thanos](https://thanos.io/tip/components/receive.md/) (as a "receiver")
- Vector (as a ["sender"](https://vector.dev/docs/reference/configuration/sinks/prometheus_remote_write/) and a ["receiver"](https://vector.dev/docs/reference/configuration/sources/prometheus_remote_write/))
- [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics) (as a ["receiver"](https://docs.victoriametrics.com/#prometheus-setup))

### FAQ

**Why did you not use gRPC?** Funnily enough we initially used gRPC, but switched to Protos atop HTTP as in 2016 it was hard to get them past ELBs: https://github.com/prometheus/prometheus/issues/1982

**Why not streaming protobuf messages?** If you use persistent HTTP/1.1 connections, they are pretty close to streaming… Of course headers have to be
re-sent, but yes that is less expensive than a new TCP set up.

**Why do we send samples in order?** The in-order constraint comes from the encoding we use for time series data in Prometheus, the implementation of which is append only. It is possible to remove this constraint, for instance by buffering samples and reordering them before encoding. We can investigate this in future versions of the protocol.

**How can we parallelise requests with the in-order constraint?** Samples must be in-order _for a given series_. Remote write requests can be sent in parallel as long as they are for different series. In Prometheus, we shard the samples by their labels into separate queues, and then writes happen sequentially in each queue. This guarantees samples for the same series are delivered in order, but samples for different series are sent in parallel - and potentially "out of order" between different series.

We believe this is necessary as, even if the receiver could support out-of-order samples, we can't have agents sending out of order as they would never be able to send to Prometheus, Cortex and Thanos. We’re doing this to ensure the integrity of the ecosystem and to prevent confusing/forking the community into "prometheus-agents-that-can-write-to-prometheus" and those that can’t.
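The per-series sharding described above can be sketched as hashing each series' sorted label set to a fixed queue, so samples for one series always land in the same queue and stay in order, while different series proceed in parallel (queue count and hashing choice are illustrative, not how Prometheus itself shards):

```python
import hashlib
from collections import defaultdict

NUM_SHARDS = 4

def shard_for(labels) -> int:
    """Map a series (its label set) to a stable shard index. Sorting the
    labels makes the key independent of label order; hashlib gives a
    hash that is stable across runs (builtin hash() is randomized)."""
    key = "\x00".join(f"{k}\x01{v}" for k, v in sorted(labels))
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

queues = defaultdict(list)

def enqueue(labels, sample):
    # Samples for the same series always take the same queue, preserving
    # per-series order; each queue is then drained sequentially.
    queues[shard_for(labels)].append((tuple(sorted(labels)), sample))
```

Draining each queue sequentially then guarantees in-timestamp-order delivery per series, as the FAQ answer requires.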
* Version: 2.0-rc.4
* Status: **Experimental**
* Date: May 2024

The Remote-Write specification, in general, is intended to document the standard for how Prometheus and Prometheus Remote-Write compatible senders send data to Prometheus or Prometheus Remote-Write compatible receivers.

This document is intended to define a second version of the [Prometheus Remote-Write](/docs/specs/remote_write_spec) API with minor changes to protocol and semantics. This second version adds a new Protobuf Message with new features enabling more use cases and wider adoption on top of performance and cost savings. The second version also deprecates the previous Protobuf Message from a [1.0 Remote-Write specification](/docs/specs/remote_write_spec/#protocol) and adds mandatory [`X-Prometheus-Remote-Write-*-Written` HTTP response headers](#required-written-response-headers) for reliability purposes. Finally, this spec outlines how to implement backwards-compatible senders and receivers (even under a single endpoint) using existing basic content negotiation request headers. More advanced, automatic content negotiation mechanisms might come in a future minor version if needed. For the rationales behind the 2.0 specification, see [the formal proposal](https://github.com/prometheus/proposals/pull/35).

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).

> NOTE: This is a release candidate for the Remote-Write 2.0 specification. This means that this specification is currently in an experimental state--no major changes are expected, but we reserve the right to break the compatibility if it's necessary, based on the early adopters' feedback. The potential feedback, questions and suggestions should be added as comments to the [PR with the open proposal](https://github.com/prometheus/proposals/pull/35).
## Introduction

### Background

The Remote-Write protocol is designed to make it possible to reliably propagate samples in real-time from a sender to a receiver, without loss.

The Remote-Write protocol is designed to be stateless; there is strictly no inter-message communication. As such the protocol is not considered "streaming". To achieve a streaming effect multiple messages should be sent over the same connection using e.g. HTTP/1.1 or HTTP/2. "Fancy" technologies such as gRPC were considered, but at the time were not widely adopted, and it was challenging to expose gRPC services to the internet behind load balancers such as an AWS EC2 ELB.

The Remote-Write protocol contains opportunities for batching, e.g. sending multiple samples for different series in a single request. It is not expected that multiple samples for the same series will be commonly sent in the same request, although there is support for this in the Protobuf Message.

Compliance tests can be found at:

* sender: https://github.com/prometheus/compliance/tree/main/remotewrite/sender
* receiver: https://github.com/prometheus/compliance/tree/main/remotewrite/receiver

### Glossary

In this document, the following definitions are followed:

* `Remote-Write` is the name of this Prometheus protocol.
* a `Protocol` is a communication specification that enables the client and server to transfer metrics.
* a `Protobuf Message` (or Proto Message) refers to the [content type](https://www.rfc-editor.org/rfc/rfc9110.html#name-content-type) definition of the data structure for this Protocol. Since the specification uses [Google Protocol Buffers ("protobuf")](https://protobuf.dev/) exclusively, the schema is defined in a ["proto" file](https://protobuf.dev/programming-guides/proto3/) and represented by a single Protobuf ["message"](https://protobuf.dev/programming-guides/proto3/#simple).
* a `Wire Format` is the format of the data as it travels on the wire (i.e. in a network).
  In the case of Remote-Write, this is always the compressed binary protobuf format.
* a `Sender` is something that sends Remote-Write data.
* a `Receiver` is something that receives (writes) Remote-Write data. The meaning of `Written` is up to the Receiver, e.g. usually it means storing received data in a database, but also just validating, splitting or enhancing it.
* `Written` refers to data the `Receiver` has received and is accepting. Whether or not it has ingested this data to persistent storage, written it to a WAL, etc. is up to
https://github.com/prometheus/docs/blob/main//docs/specs/prw/remote_write_spec_2_0.md
the `Receiver`. The only distinction is that the `Receiver` has accepted this data rather than explicitly rejecting it with an error response.
* a `Sample` is a triplet of (start timestamp, timestamp, value).
* a `Histogram` is a triplet of (start timestamp, timestamp, [histogram value](https://github.com/prometheus/docs/blob/b9657b5f5b264b81add39f6db2f1df36faf03efe/content/docs/concepts/native_histograms.md)).
* a `Label` is a pair of (key, value).
* a `Series` is a list of samples (or histograms), identified by a unique set of labels.

## Definitions

### Protocol

The Remote-Write Protocol MUST consist of RPCs with the request body serialized using Google Protocol Buffers and then compressed.

The protobuf serialization MUST use either of the following Protobuf Messages:

* The `prometheus.WriteRequest` introduced in [the Remote-Write 1.0 specification](./remote_write_spec.md#protocol). As of 2.0, this message is deprecated. It SHOULD be used only for compatibility reasons. Senders and Receivers MAY NOT support the `prometheus.WriteRequest`.
* The `io.prometheus.write.v2.Request` introduced in this specification and defined [below](#protobuf-message). Senders and Receivers SHOULD use this message when possible. Senders and Receivers MUST support the `io.prometheus.write.v2.Request`.

The Protobuf Message MUST use the binary Wire Format. It MUST then be compressed with [Google's Snappy](https://github.com/google/snappy).
Snappy's [block format](https://github.com/google/snappy/blob/2c94e11145f0b7b184b831577c93e5a41c4c0346/format_description.txt) MUST be used -- [the framed format](https://github.com/google/snappy/blob/2c94e11145f0b7b184b831577c93e5a41c4c0346/framing_format.txt) MUST NOT be used.

Senders MUST send a serialized and compressed Protobuf Message in the body of an HTTP POST request and send it to the Receiver via HTTP at the provided URL path. Receivers MAY specify any HTTP URL path to receive metrics.

Senders MUST send the following reserved headers with the HTTP request:

- `Content-Encoding`
- `Content-Type`
- `X-Prometheus-Remote-Write-Version`
- `User-Agent`

Senders MAY allow users to add custom HTTP headers; they MUST NOT allow users to configure them in such a way as to send reserved headers.

#### Content-Encoding

```
Content-Encoding: <compression>
```

The content encoding request header MUST follow [RFC 9110](https://www.rfc-editor.org/rfc/rfc9110.html#name-content-encoding). Senders MUST use the `snappy` value. Receivers MUST support `snappy` compression. New, optional compression algorithms might come in 2.x or beyond.

#### Content-Type

```
Content-Type: application/x-protobuf
Content-Type: application/x-protobuf;proto=<fully qualified name>
```

The content type request header MUST follow [RFC 9110](https://www.rfc-editor.org/rfc/rfc9110.html#name-content-type). Senders MUST use `application/x-protobuf` as the only media type. Senders MAY add a `;proto=` parameter to the header's value to indicate the fully qualified name of the Protobuf Message that was used, from the two mentioned above.
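As a non-normative illustration, the reserved request headers described above can be assembled as follows. This is a sketch only; `build_headers` is a hypothetical helper, and the example `User-Agent` value is an assumption, not mandated by the spec.

```python
def build_headers(proto_msg: str = "io.prometheus.write.v2.Request",
                  spec_version: str = "2.0.0",
                  user_agent: str = "example-sender/0.1.0") -> dict:
    """Return the four reserved request headers for a Remote-Write HTTP POST.

    For 1.x Receivers, pass proto_msg="prometheus.WriteRequest" and
    spec_version="0.1.0" for backward compatibility.
    """
    content_type = "application/x-protobuf"
    if proto_msg != "prometheus.WriteRequest":
        # The ;proto= parameter names the fully qualified Protobuf Message.
        content_type += f";proto={proto_msg}"
    return {
        "Content-Encoding": "snappy",  # snappy block format is the only compression
        "Content-Type": content_type,
        "X-Prometheus-Remote-Write-Version": spec_version,
        "User-Agent": user_agent,
    }
```

The body sent alongside these headers would be the binary Wire Format of the chosen Protobuf Message, compressed with Snappy's block format.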
As a result, Senders MUST send any of the three supported header values:

For the deprecated message introduced in PRW 1.0, identified by `prometheus.WriteRequest`:

* `Content-Type: application/x-protobuf`
* `Content-Type: application/x-protobuf;proto=prometheus.WriteRequest`

For the message introduced in PRW 2.0, identified by `io.prometheus.write.v2.Request`:

* `Content-Type: application/x-protobuf;proto=io.prometheus.write.v2.Request`

When talking to 1.x Receivers, Senders SHOULD use `Content-Type: application/x-protobuf` for backward compatibility. Otherwise, Senders SHOULD use `Content-Type: application/x-protobuf;proto=io.prometheus.write.v2.Request`. More Protobuf Messages might come in 2.x or beyond.

Receivers MUST use the content type header to identify the Protobuf Message schema to use. Accidental wrong schema choices may result in non-deterministic behaviour (e.g. corruptions).

> NOTE: Thanks to reserved fields in [`io.prometheus.write.v2.Request`](#protobuf-message), a Receiver's accidental use of the wrong schema with `prometheus.WriteRequest` will result in an empty message. This is generally for convenience, to avoid surprising errors, but don't rely on it -- future Protobuf Messages might not have this feature.

#### X-Prometheus-Remote-Write-Version

```
X-Prometheus-Remote-Write-Version: <remote write spec major and minor version>
```

When talking to 1.x Receivers, Senders MUST use `X-Prometheus-Remote-Write-Version: 0.1.0` for backward compatibility. Otherwise, Senders SHOULD use the newest Remote-Write version they are compatible with, e.g. `X-Prometheus-Remote-Write-Version: 2.0.0`.

#### User-Agent

```
User-Agent: <name & version of the sender>
```

Senders MUST include a user agent
header that SHOULD follow [the RFC 9110 User-Agent header format](https://www.rfc-editor.org/rfc/rfc9110.html#name-user-agent).

### Response

Receivers that have written all data successfully MUST return a [success 2xx HTTP status code](https://www.rfc-editor.org/rfc/rfc9110.html#name-successful-2xx). In such a successful case, the response body from the Receiver SHOULD be empty and the status code SHOULD be [204 HTTP No Content](https://www.rfc-editor.org/rfc/rfc9110.html#name-204-no-content); Senders MUST ignore the response body. The response body is RESERVED for future use.

Receivers MUST NOT return a 2xx HTTP status code if any of the pieces of sent data known to the Receiver (e.g. Samples, Histograms, Exemplars) were NOT written successfully (both [partial write](#partial-write) and full write rejection). In such a case, the Receiver MUST provide a human-readable error message in the response body. The Receiver's error SHOULD contain information about the number of samples being rejected and for what reasons. Senders MUST NOT try to interpret the error message and SHOULD log it as-is.

The following subsections specify Sender and Receiver semantics around headers and different write error cases.

#### Required `Written` Response Headers

Upon a successful content negotiation, Receivers process (write) the received batch of data.
Once completed (with success or failure), for each important piece of data (currently Samples, Histograms and Exemplars) Receivers MUST send a dedicated HTTP `X-Prometheus-Remote-Write-*-Written` response header with the precise number of successfully written elements.

Each header value MUST be a single 64-bit integer. The header names MUST be as follows:

```
X-Prometheus-Remote-Write-Samples-Written
X-Prometheus-Remote-Write-Histograms-Written
X-Prometheus-Remote-Write-Exemplars-Written
```

Upon receiving a 2xx or a 4xx status code, Senders CAN assume that any missing `X-Prometheus-Remote-Write-*-Written` response header means no element from this category (e.g. Sample) was written by the Receiver (count of `0`). Senders MUST NOT assume the same when using the deprecated `prometheus.WriteRequest` Protobuf Message, due to the risk of hitting a 1.0 Receiver without this feature.

Senders MAY use those headers to confirm which parts of the data were successfully written by the Receiver. Common use cases:

* Better handling of the [Partial Write](#partial-write) failure situations: Senders MAY use those headers for more accurate client instrumentation and error handling.
* Detecting broken 1.0 Receiver implementations: Senders SHOULD assume a [415 HTTP Unsupported Media Type](https://www.rfc-editor.org/rfc/rfc9110.html#name-415-unsupported-media-type) status code when sending the data using an `io.prometheus.write.v2.Request` request and receiving a 2xx HTTP status code, but none of the `X-Prometheus-Remote-Write-*-Written` response headers from the Receiver. This is a common issue for 1.0 Receivers that do not check the `Content-Type` request header; accidental decoding of the `io.prometheus.write.v2.Request` payload with the `prometheus.WriteRequest` schema results in an empty result and no decoding errors.
* Detecting other broken implementations or issues: Senders MAY use those headers to detect broken Sender and Receiver implementations or other problems.
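A Sender-side sketch of interpreting these response headers follows. This is non-normative; the function name and return shape are illustrative, and the sketch assumes canonical header capitalization.

```python
def interpret_written_headers(status: int, headers: dict, used_v2: bool) -> dict:
    """Interpret X-Prometheus-Remote-Write-*-Written response headers.

    Returns per-category written counts, plus a flag for the
    "broken 1.0 Receiver" case: a 2xx response to a v2 request with no
    *-Written headers at all should be treated like a 415 response.
    """
    categories = ("Samples", "Histograms", "Exemplars")
    names = {c: f"X-Prometheus-Remote-Write-{c}-Written" for c in categories}
    present = {c: n for c, n in names.items() if n in headers}

    # 2xx to a v2 request but no *-Written headers at all: likely a 1.0
    # Receiver that silently decoded the payload with the wrong schema.
    broken_v1_receiver = used_v2 and 200 <= status < 300 and not present

    written = {}
    for c in categories:
        if c in present:
            written[c] = int(headers[present[c]])
        elif (200 <= status < 300 or 400 <= status < 500) and used_v2 and not broken_v1_receiver:
            # On 2xx/4xx with the v2 message, a missing header means count of 0.
            written[c] = 0
        else:
            written[c] = None  # unknown (e.g. v1 message, or broken receiver)
    return {"written": written, "treat_as_unsupported": broken_v1_receiver}
```

A Sender could use `treat_as_unsupported` to trigger the same fallback it would use for an actual 415 response.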
Senders MUST NOT assume which Remote-Write specification version the Receiver implements from the remote write response headers.

More (optional) headers might come in the future, e.g. when more entities or fields are added and worth confirming.

#### Partial Write

Senders SHOULD use Remote-Write to send samples for multiple series in a single request. As a result, Receivers MAY write valid samples within a write request that also contains some invalid or otherwise unwritten samples, which represents a partial write case. In such a case, the Receiver MUST return a non-2xx status code following the [Invalid Samples](#invalid-samples) and [Retry on Partial Writes](#retries-on-partial-writes) sections.

#### Unsupported Request Content

Receivers MUST return a [415 HTTP Unsupported Media Type](https://www.rfc-editor.org/rfc/rfc9110.html#name-415-unsupported-media-type) status code if they
don't support a given content type or encoding provided by Senders.

Senders SHOULD expect a [400 HTTP Bad Request](https://www.rfc-editor.org/rfc/rfc9110.html#name-400-bad-request) for the above reasons from 1.x Receivers, for backwards compatibility.

#### Invalid Samples

Receivers MAY NOT support certain metric types or samples (e.g. a Receiver might reject a sample without a metadata type specified or without a start timestamp, while another Receiver might accept such a sample). It's up to the Receiver which samples are invalid. Receivers MUST return a [400 HTTP Bad Request](https://www.rfc-editor.org/rfc/rfc9110.html#name-400-bad-request) status code for write requests that contain any invalid samples, unless a [partial retriable write](#retries-on-partial-writes) occurs.

Senders MUST NOT retry on 4xx HTTP status codes (other than [429](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429)), which MUST be used by Receivers to indicate that the write operation will never succeed and should not be retried. Senders MAY retry on the 415 HTTP status code with a different content type or encoding to see if the Receiver supports it.

### Retries & Backoff

Receivers MAY return a [429 HTTP Too Many Requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429) status code to indicate an overloaded server situation. Receivers MAY return [the Retry-After](https://www.rfc-editor.org/rfc/rfc9110.html#name-retry-after) header to indicate the time for the next write attempt.
Receivers MAY return a 5xx HTTP status code to represent internal server errors.

Senders MAY retry on a 429 HTTP status code. Senders MUST retry write requests on 5xx HTTP status codes. Senders MUST use a backoff algorithm to prevent overwhelming the server. Senders MAY handle [the Retry-After response header](https://www.rfc-editor.org/rfc/rfc9110.html#name-retry-after) to estimate the next retry time.

The difference between 429 and 5xx handling is due to the potential situation of a Sender "falling behind" when the Receiver cannot keep up with the request volume, or the Receiver choosing to rate limit the Sender to protect its availability. As a result, Senders have the option to NOT retry on 429, which allows progress to be made when there are Sender side errors (e.g. too much traffic), while data is not lost when there are Receiver side errors (5xx).

#### Retries on Partial Writes

Receivers MAY return a 5xx HTTP or 429 HTTP status code on partial write or [partial invalid sample cases](#partial-write) when they expect Senders to retry the whole request. In that case, the Receiver MUST support idempotency, as Senders MAY retry with the same request.

### Backward and Forward Compatibility

The protocol follows [semantic versioning 2.0](https://semver.org/): any 2.x compatible Receiver MUST be able to read any 2.x compatible Sender, and vice versa. Breaking or backwards incompatible changes will result in a 3.x version of the spec.

The Protobuf Messages (in Wire Format) themselves are forward / backward compatible, in some respects:

* Removing fields from the Protobuf Message requires a major version bump.
* Adding (optional) fields can be done in a minor version bump.

In other words, this means that future minor versions of 2.x MAY add new optional fields to `io.prometheus.write.v2.Request`, new compressions, Protobuf Messages and negotiation mechanisms, as long as they are backwards compatible (e.g. optional to both Receiver and Sender).
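The retry semantics above (retry on 5xx, optionally on 429, never on other 4xx, honouring `Retry-After`) can be condensed into a small, non-normative decision helper. Names are illustrative, and the exponential backoff policy is one possible choice, not mandated by the spec.

```python
from typing import Optional


def should_retry(status: int, retry_on_429: bool = True) -> bool:
    """Decide whether a Sender should retry a Remote-Write request.

    - 5xx: MUST retry (with backoff).
    - 429: MAY retry; some Senders prefer dropping data to keep up.
    - other 4xx: MUST NOT retry. (415 MAY be retried, but only with a
      different content type or encoding, which is out of scope here.)
    """
    if 500 <= status < 600:
        return True
    if status == 429:
        return retry_on_429
    return False


def next_backoff(attempt: int, retry_after: Optional[float] = None,
                 base: float = 1.0, cap: float = 60.0) -> float:
    """Seconds to wait before retry attempt N, honouring a Retry-After hint."""
    if retry_after is not None:
        return retry_after
    return min(cap, base * (2 ** attempt))
```

A Sender configured with `retry_on_429=False` trades data loss under rate limiting for the ability to keep making progress.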
#### 2.x vs 1.x Compatibility

The 2.x protocol is breaking compatibility with 1.x by introducing a new, mandatory `io.prometheus.write.v2.Request` Protobuf Message and deprecating the `prometheus.WriteRequest`.

2.x Senders MAY support 1.x Receivers by allowing users to configure what content type Senders should use. 2.x Senders also MAY automatically fall back to different content types, if the Receiver returns
a 415 HTTP status code.

## Protobuf Message

### `io.prometheus.write.v2.Request`

The `io.prometheus.write.v2.Request` references the new Protobuf Message that's meant to replace and deprecate the Remote-Write 1.0's `prometheus.WriteRequest` message.

The full schema and source of truth is in the Prometheus repository in [`prompb/io/prometheus/write/v2/types.proto`](https://github.com/prometheus/prometheus/blob/main/prompb/io/prometheus/write/v2/types.proto). The `gogo` dependency and options CAN be ignored ([they will be removed eventually](https://github.com/prometheus/prometheus/issues/11908)). They are not part of the specification, as they don't impact the serialized format.

The simplified version of the new `io.prometheus.write.v2.Request` is presented below.

```
message Request {
  reserved 1 to 3;

  // symbols contains a de-duplicated array of string elements used for various
  // items in a Request message, like labels and metadata items. For the sender's convenience
  // around empty values for optional fields like unit_ref, symbols array MUST start with
  // empty string.
  //
  // To decode each of the symbolized strings, referenced by the "ref(s)" suffix, you
  // need to look up the actual string by index from the symbols array. The order of
  // strings is up to the sender. The receiver should not assume any particular encoding.
  repeated string symbols = 4;
  // timeseries represents an array of distinct series with 0 or more samples.
  repeated TimeSeries timeseries = 5;
}

// TimeSeries represents a single series.
message TimeSeries {
  reserved 6;

  // labels_refs is a list of label name-value pair references, encoded
  // as indices to the Request.symbols array. This list's length is always
  // a multiple of two, and the underlying labels should be sorted lexicographically.
  //
  // Note that there might be multiple TimeSeries objects in the same
  // Requests with the same labels e.g. for different exemplars, metadata
  // or start timestamp.
  repeated uint32 labels_refs = 1;

  // Timeseries messages can either specify samples or (native) histogram samples
  // (histogram field), but not both. For a typical sender (real-time metric
  // streaming), in healthy cases, there will be only one sample or histogram.
  //
  // Samples and histograms are sorted by timestamp (older first).
  repeated Sample samples = 2;
  repeated Histogram histograms = 3;

  // exemplars represents an optional set of exemplars attached to this series' samples.
  repeated Exemplar exemplars = 4;

  // metadata represents the metadata associated with the given series' samples.
  Metadata metadata = 5;
}

// Exemplar is additional information attached to some series' samples.
// It is typically used to attach an example trace or request ID associated with
// the metric changes.
message Exemplar {
  // labels_refs is an optional list of label name-value pair references, encoded
  // as indices to the Request.symbols array. This list's length is always
  // a multiple of 2, and the underlying labels should be sorted lexicographically.
  // If the exemplar references a trace it should use the `trace_id` label name, as a best practice.
  repeated uint32 labels_refs = 1;
  // value represents an exact example value. This can be useful when the exemplar
  // is attached to a histogram, which only gives an estimated value through buckets.
  double value = 2;
  // timestamp represents the timestamp of the exemplar in ms.
  //
  // For Go, see github.com/prometheus/prometheus/model/timestamp/timestamp.go
  // for conversion from/to time.Time to Prometheus timestamp.
  int64 timestamp = 3;
}

// Sample represents series sample.
message Sample {
  // value of the sample.
  double value = 1;

  // timestamp represents timestamp of the sample in ms.
  //
  // For Go, see github.com/prometheus/prometheus/model/timestamp/timestamp.go
  // for conversion from/to time.Time to Prometheus timestamp.
  int64 timestamp = 2;

  // start_timestamp represents an optional start timestamp for the sample,
  // in ms format. This information is typically used for counter, histogram (cumulative)
  // or delta type metrics.
  //
  // For cumulative metrics, the start timestamp represents the time when the
  // counter started counting (sometimes referred to as created timestamp), which
  // can increase the accuracy of certain processing and query semantics (e.g. rates).
  //
  // Note:
  // * That some receivers might require start timestamps for certain metric
  //   types; rejecting such samples within the Request as a result.
  // * start timestamp is the same as the "created timestamp" name Prometheus used in the past.
  //
  // For Go, see github.com/prometheus/prometheus/model/timestamp/timestamp.go
  // for conversion from/to time.Time to Prometheus timestamp.
  //
  // Note that the "optional" keyword is omitted due to efficiency and consistency.
  // Zero value means value not set. If you need to use exactly zero value for
  // the timestamp, use 1 millisecond before or after.
  int64 start_timestamp = 3;
}

// Metadata represents the metadata associated with the given series' samples.
message Metadata {
  enum MetricType {
    METRIC_TYPE_UNSPECIFIED    = 0;
    METRIC_TYPE_COUNTER        = 1;
    METRIC_TYPE_GAUGE          = 2;
    METRIC_TYPE_HISTOGRAM      = 3;
    METRIC_TYPE_GAUGEHISTOGRAM = 4;
    METRIC_TYPE_SUMMARY        = 5;
    METRIC_TYPE_INFO           = 6;
    METRIC_TYPE_STATESET       = 7;
  }
  MetricType type = 1;

  // help_ref is a reference to the Request.symbols array representing help
  // text for the metric. Help is optional, reference should point to an empty string in
  // such a case.
  uint32 help_ref = 3;

  // unit_ref is a reference to the Request.symbols array representing a unit
  // for the metric. Unit is optional, reference should point to an empty string in
  // such a case.
  uint32 unit_ref = 4;
}

// A native histogram message, supporting
// * sparse exponential bucketing, custom bucketing.
// * float or integer histograms.
//
// See the full spec: https://prometheus.io/docs/specs/native_histograms/
message Histogram { ... }
```

All timestamps MUST be int64 counted as milliseconds since the Unix epoch. Sample values MUST be float64.

For every `TimeSeries` message:

* `labels_refs` MUST be provided.
* At least one element in `samples` or in `histograms` MUST be provided. A `TimeSeries` MUST NOT include both `samples` and `histograms`. For series which (rarely) would mix float and histogram samples, a separate `TimeSeries` message MUST be used.
* `metadata` sub-fields SHOULD be provided. Receivers MAY reject series with unspecified `Metadata.type`.
* Exemplars SHOULD be provided if they exist for a series.

The following subsections define some schema elements in detail.

#### Symbols

The `io.prometheus.write.v2.Request` Protobuf Message is designed to [intern all strings](https://en.wikipedia.org/wiki/String_interning) for the proven additional compression and memory efficiency gains on top of the standard compressions.

The `symbols` table MUST be provided, and it MUST contain deduplicated strings used in series, exemplar labels, and metadata strings.
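A non-normative sketch of this interning scheme follows; the `SymbolTable` helper is illustrative, not part of the spec. The table starts with an empty string (for unset optional refs such as `Metadata.unit_ref`), and labels become flat lists of indices into it.

```python
class SymbolTable:
    """Build a deduplicated symbols array for io.prometheus.write.v2.Request."""

    def __init__(self):
        # Index 0 MUST be the empty string, used for unset optional refs
        # such as Metadata.unit_ref and Metadata.help_ref.
        self.symbols = [""]
        self._index = {"": 0}

    def ref(self, s: str) -> int:
        """Return the index of s, interning it on first use."""
        if s not in self._index:
            self._index[s] = len(self.symbols)
            self.symbols.append(s)
        return self._index[s]

    def labels_refs(self, labels: dict) -> list:
        """Encode a label set as a flat [name_ref, value_ref, ...] list,
        with label names sorted lexicographically."""
        refs = []
        for name in sorted(labels):
            refs.extend((self.ref(name), self.ref(labels[name])))
        return refs
```

The resulting `labels_refs` list always has an even length, matching the schema comment that it is "always a multiple of two" with names sorted lexicographically.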
The first element of the `symbols` table MUST be an empty string, which is used to represent empty or unspecified values, such as when `Metadata.unit_ref` or `Metadata.help_ref` are not provided. References MUST point to an existing index in the `symbols` string array.

#### Series Labels

The complete set of labels MUST be sent with each `Sample` or `Histogram` sample. Additionally, the label set associated with samples:

* SHOULD contain a `__name__` label.
* MUST NOT
contain repeated label names.
* MUST have label names sorted in lexicographical order.
* MUST NOT contain any empty label names or values.

Metric names, label names, and label values MUST be any sequence of UTF-8 characters. Metric names SHOULD adhere to the regex `[a-zA-Z_:]([a-zA-Z0-9_:])*`. Label names SHOULD adhere to the regex `[a-zA-Z_]([a-zA-Z0-9_])*`. Names that do not adhere to the above might be harder to use for PromQL users (see [the UTF-8 proposal for more details](https://github.com/prometheus/proposals/blob/main/proposals/2023-08-21-utf8.md)).

Label names beginning with "__" are RESERVED for system usage and SHOULD NOT be used, see [Prometheus Data Model](https://prometheus.io/docs/concepts/data_model/).

Receivers also MAY impose limits on the number and length of labels, but this is receiver-specific and is out of the scope of this document.

#### Samples and Histogram Samples

Senders MUST send `samples` (or `histograms`) for any given `TimeSeries` in timestamp order. Senders MAY send multiple requests for different series in parallel.

A Sample's or Histogram's `start_timestamp` SHOULD be provided for types that follow counter semantics (e.g. counters and counter histograms). Receivers MAY reject series without `start_timestamp` being set. Given its optionality, the 0 value MUST be treated by Receivers as an unset value. To represent the unlikely 0 Unix timestamp in milliseconds, a "1" or "-1" value MUST be used.

Senders SHOULD send stale markers when a time series will no longer be appended to.
Senders MUST send stale markers if the discontinuation of a time series is possible to detect, for example:

* For series that were pulled (scraped), unless an explicit timestamp was used.
* For series that result from a recording rule evaluation.

Generally, not sending stale markers for series that are discontinued can lead to [non-trivial query time alignment issues](https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness) on the Receiver.

Stale markers MUST be signalled by the special NaN value `0x7ff0000000000002`. This value MUST NOT be used otherwise.

Typically, Senders can detect when a time series will no longer be appended using the following techniques:

1. Detecting, using service discovery, that the target exposing the series has gone away.
1. Noticing the target is no longer exposing the time series between successive scrapes.
1. Failing to scrape the target that originally exposed a time series.
1. Tracking configuration and evaluation for recording and alerting rules.
1. Tracking discontinuation of metrics for a non-scrape source of metrics (e.g. in k6, when the benchmark has finished, it could emit a stale marker for each per-benchmark series).

#### Metadata

Metadata SHOULD follow the official Prometheus guidelines for [Type](https://prometheus.io/docs/instrumenting/writing_exporters/#types) and [Help](https://prometheus.io/docs/instrumenting/writing_exporters/#help-strings).

Metadata MAY follow the official OpenMetrics guidelines for [Unit](https://github.com/prometheus/OpenMetrics/blob/v1.0.0/specification/OpenMetrics.md#unit).

#### Exemplars

Each exemplar, if attached to a `TimeSeries`:

* MUST contain a value.
* MAY contain labels, e.g. referencing a trace or request ID. If the exemplar references a trace, it SHOULD use the `trace_id` label name, as a best practice.
* MUST contain a timestamp.
While exemplar timestamps are optional in the Prometheus/OpenMetrics exposition formats, the assumption is that a timestamp is assigned at scrape time, in the same way a timestamp is assigned to the scrape sample. Receivers require exemplar timestamps to reliably handle (e.g. deduplicate) incoming exemplars.

## Out of Scope

The same as in [1.0](./remote_write_spec.md#out-of-scope).

## Future Plans

This section contains speculative plans that are not considered part of the protocol specification yet, but are mentioned here for completeness. Note that the 2.0 specification completed [2 of 3 future plans from 1.0](./remote_write_spec.md#future-plans).

*
**Transactionality** There is still no transactionality defined for the 2.0 specification, mostly because it makes a scalable Sender implementation difficult. The Prometheus Sender aims at being "transactional", i.e. to never expose a partially scraped target to a query. We intend to do the same with Remote-Write -- for instance, in the future we would like to "align" Remote-Write with scrapes, perhaps such that all the samples, metadata and exemplars for a single scrape are sent in a single Remote-Write request.

  However, the Remote-Write 2.0 specification solves an important transactionality problem for [the classic histogram buckets](https://docs.google.com/document/d/1mpcSWH1B82q-BtJza-eJ8xMLlKt6EJ9oFGH325vtY1Q/edit#heading=h.ueg7q07wymku). This is done thanks to native histograms supporting custom bucketing, which the `io.prometheus.write.v2.Request` wire format makes possible. Senders might translate all classic histograms to native histograms this way, but it is out of scope for this specification to mandate this. However, for this reason, Receivers MAY ignore certain metric types (e.g. classic histograms).

* **Alternative wire formats**. The OpenTelemetry community has shown the validity of Apache Arrow (and potentially other columnar formats) for over-wire data transfer with their OTLP protocol. We would like to do experiments to confirm the compatibility of a similar format with Prometheus' data model and include benchmarks of any resource usage changes.
  We would potentially maintain both a protobuf and a columnar format long term for compatibility reasons, and use our content negotiation to add different Protobuf Messages for this purpose.

* **Global symbols**. Pre-defined string dictionary for interning. The protocol could pre-define a static dictionary of ref->symbol that includes strings that are considered common, e.g. "namespace", "le", "job", "seconds", "bytes", etc. Senders could refer to these without the need to include them in the request's symbols table. This dictionary could incrementally grow with minor version releases of this protocol.

## Related

### FAQ

**Why did you not use gRPC?** Because the 1.0 protocol does not use gRPC, breaking that would increase friction in adoption. See the 1.0 [reasoning](./remote_write_spec.md#faq).

**Why not stream protobuf messages?** If you use persistent HTTP/1.1 connections, they are pretty close to streaming. Of course, headers have to be re-sent, but that is less expensive than a new TCP setup.

**Why do we send samples in order?** The in-order constraint comes from the encoding we use for time series data in Prometheus, the implementation of which is optimized for append-only workloads. However, this requirement is also shared across many other databases and vendors in the ecosystem. In fact, [Prometheus with the OOO feature enabled](https://youtu.be/qYsycK3nTSQ?t=1321) allows out-of-order writes, but with a performance penalty; it is thus reserved for rare events. To sum up, Receivers may support out-of-order writes, though it is not permitted by the specification. In future (e.g. 2.x) spec versions, we could extend the content type to negotiate out-of-order writes, if needed.

**How can we parallelise requests with the in-order constraint?** Samples must be in-order _for a given series_. However, even if a Receiver does not support out-of-order writes, Remote-Write requests can be sent in parallel, as long as they are for different series.
Prometheus shards the samples by their labels into separate queues, and then writes happen sequentially in each queue. This guarantees samples for the same series are delivered in order, but samples for different series are sent in parallel - and potentially "out of order" between different series.
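The sharding behaviour described above can be sketched as follows (an illustrative sketch, not Prometheus's actual implementation; all names are hypothetical): hash the full label set of a series to pick a queue, so samples for one series always land in the same queue and keep their order, while different series spread across queues that can be flushed in parallel.

```python
from hashlib import blake2b

NUM_SHARDS = 4

def shard_for(labels: dict[str, str]) -> int:
    """Map a series (identified by its full label set) to a queue index.

    The same label set always hashes to the same shard, so samples for
    one series stay in a single queue and retain their order.
    """
    key = ",".join(f"{k}={v}" for k, v in sorted(labels.items()))
    digest = blake2b(key.encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big") % NUM_SHARDS

# One queue per shard; each queue would be drained sequentially by its
# own sender loop, while separate queues are drained in parallel.
queues: list[list[tuple[dict, float]]] = [[] for _ in range(NUM_SHARDS)]

for labels, value in [
    ({"__name__": "http_requests_total", "job": "api"}, 1027.0),
    ({"__name__": "http_requests_total", "job": "web"}, 523.0),
    ({"__name__": "http_requests_total", "job": "api"}, 1031.0),
]:
    queues[shard_for(labels)].append((labels, value))
```

Because the shard index is a pure function of the label set, per-series ordering is preserved without any coordination between queues.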
**What are the differences between Remote-Write 2.0 and OpenTelemetry's OTLP protocol?** [OpenTelemetry OTLP](https://github.com/open-telemetry/opentelemetry-proto/blob/a05597bff803d3d9405fcdd1e1fb1f42bed4eb7a/docs/specification.md) is a protocol for transporting telemetry data (such as metrics, logs, traces and profiles) between telemetry sources, intermediate nodes and telemetry backends. The recommended transport involves gRPC with protobuf, but HTTP with protobuf or JSON is also described. It was designed from scratch with the intent to support a variety of different observability signals, data types and extra information. For [metrics](https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/metrics/v1/metrics.proto) that means additional non-identifying labels, flags, temporal aggregation types, resource or scoped metrics, schema URLs and more. OTLP also requires [the semantic conventions](https://opentelemetry.io/docs/concepts/semantic-conventions/) to be used.

Remote-Write was designed for simplicity, efficiency and organic growth. The first version was officially released in 2023, when already [dozens of battle-tested adopters in the CNCF ecosystem](./remote_write_spec.md#compatible-senders-and-receivers) had been using this protocol for years. Remote-Write 2.0 iterates on the previous protocol by adding a few new elements (metadata, exemplars, start timestamp and native histograms) and string interning. Remote-Write 2.0 is always stateless, focuses only on metrics and is opinionated; as such it is scoped down to the elements that the Prometheus community considers enough to have a robust metric solution.
The intention is to ensure that Remote-Write is a stable protocol that is cheaper and simpler to adopt and use than the alternatives in the observability ecosystem.
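The string interning mentioned above can be illustrated with a minimal sketch (simplified and hypothetical; this is not the actual `io.prometheus.write.v2.Request` protobuf layout): label names and values are stored once in a shared symbols table, and each series refers to them by index, so repeated strings are transmitted only once per request.

```python
def intern(symbols: list[str], index: dict[str, int], s: str) -> int:
    """Return the symbol-table index for s, appending it on first use."""
    if s not in index:
        index[s] = len(symbols)
        symbols.append(s)
    return index[s]

symbols: list[str] = []
index: dict[str, int] = {}

series = [
    {"__name__": "http_requests_total", "job": "api", "instance": "a"},
    {"__name__": "http_requests_total", "job": "api", "instance": "b"},
]

# Each series becomes a flat list of refs into the shared symbols table,
# so strings like "job" or "api" appear in the payload exactly once.
encoded = [
    [intern(symbols, index, s) for kv in labels.items() for s in kv]
    for labels in series
]
```

A static pre-defined dictionary (the "global symbols" idea) would simply pre-populate `symbols` and `index` with common strings before encoding begins.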
Source: https://github.com/prometheus/docs/blob/main//docs/specs/prw/remote_write_spec_2_0.md
## What is Prometheus?

[Prometheus](https://github.com/prometheus) is an open-source systems monitoring and alerting toolkit originally built at [SoundCloud](http://soundcloud.com). Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user [community](/community). It is now a standalone open source project and maintained independently of any company. To emphasize this, and to clarify the project's governance structure, Prometheus joined the [Cloud Native Computing Foundation](https://cncf.io/) in 2016 as the second hosted project, after [Kubernetes](http://kubernetes.io/).

Prometheus collects and stores its metrics as time series data, i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.

For more elaborate overviews of Prometheus, see the resources linked from the [media](/docs/introduction/media/) section.

### Features

Prometheus's main features are:

* a multi-dimensional [data model](/docs/concepts/data_model/) with time series data identified by metric name and key/value pairs
* PromQL, a [flexible query language](/docs/prometheus/latest/querying/basics/) to leverage this dimensionality
* no reliance on distributed storage; single server nodes are autonomous
* time series collection happens via a pull model over HTTP
* [pushing time series](/docs/instrumenting/pushing/) is supported via an intermediary gateway
* targets are discovered via service discovery or static configuration
* multiple modes of graphing and dashboarding support

### What are metrics?

In layperson terms, metrics are numerical measurements. The term time series refers to the recording of changes over time. What users want to measure differs from application to application. For a web server, it could be request times; for a database, it could be the number of active connections or active queries, and so on.
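Concretely, a target exposes such measurements over HTTP in the Prometheus text exposition format, which the server then scrapes on a schedule. An illustrative snippet (the metric values are made up):

```
# HELP http_requests_total The total number of handled HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027
http_requests_total{method="post",code="400"} 3
```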
Metrics play an important role in understanding why your application is working in a certain way. Let's assume you are running a web application and discover that it is slow. To learn what is happening with your application, you will need some information. For example, when the number of requests is high, the application may become slow. If you have the request count metric, you can determine the cause and increase the number of servers to handle the load.

### Components

The Prometheus ecosystem consists of multiple components, many of which are optional:

* the main [Prometheus server](https://github.com/prometheus/prometheus) which scrapes and stores time series data
* [client libraries](/docs/instrumenting/clientlibs/) for instrumenting application code
* a [push gateway](https://github.com/prometheus/pushgateway) for supporting short-lived jobs
* special-purpose [exporters](/docs/instrumenting/exporters/) for services like HAProxy, StatsD, Graphite, etc.
* an [alertmanager](https://github.com/prometheus/alertmanager) to handle alerts
* various support tools

Most Prometheus components are written in [Go](https://golang.org/), making them easy to build and deploy as static binaries.

### Architecture

This diagram illustrates the architecture of Prometheus and some of its ecosystem components:

![Prometheus architecture](/assets/docs/architecture.svg)

Prometheus scrapes metrics from instrumented jobs, either directly or via an intermediary push gateway for short-lived jobs. It stores all scraped samples locally and runs rules over this data to either aggregate and record new time series from existing data or generate alerts. [Grafana](https://grafana.com/) or other API consumers can be used to visualize the collected data.

## When does it fit?

Prometheus works well for recording any purely numeric time series. It fits both machine-centric monitoring as well as monitoring of highly dynamic service-oriented architectures.
In a world of microservices, its support for multi-dimensional data collection and querying is a particular strength.

Prometheus is designed for reliability, to be the system you go to during an outage to allow you to quickly diagnose problems. Each Prometheus server is standalone, not depending on network storage or other remote services. You can rely on it when other parts of your infrastructure are broken, and you do not need to set up extensive infrastructure to use it.

## When does it not fit?

Prometheus values reliability. You can always view what statistics are available about your system, even under failure conditions. If you need 100% accuracy, such as for per-request billing, Prometheus is not a good choice as the collected data will likely not be detailed and complete enough. In such a case you would be best off using some other system to collect and analyze the data for billing, and Prometheus for the rest of your monitoring.
Source: https://github.com/prometheus/docs/blob/main//docs/introduction/overview.md
## Prometheus vs. Graphite

### Scope

[Graphite](http://graphite.readthedocs.org/en/latest/) focuses on being a passive time series database with a query language and graphing features. Any other concerns are addressed by external components.

Prometheus is a full monitoring and trending system that includes built-in and active scraping, storing, querying, graphing, and alerting based on time series data. It has knowledge about what the world should look like (which endpoints should exist, what time series patterns mean trouble, etc.), and actively tries to find faults.

### Data model

Graphite stores numeric samples for named time series, much like Prometheus does. However, Prometheus's metadata model is richer: while Graphite metric names consist of dot-separated components which implicitly encode dimensions, Prometheus encodes dimensions explicitly as key-value pairs, called labels, attached to a metric name. This allows easy filtering, grouping, and matching by these labels via the query language.

Further, especially when Graphite is used in combination with [StatsD](https://github.com/etsy/statsd/), it is common to store only aggregated data over all monitored instances, rather than preserving the instance as a dimension and being able to drill down into individual problematic instances.
For example, storing the number of HTTP requests to API servers with the response code `500` and the method `POST` to the `/tracks` endpoint would commonly be encoded like this in Graphite/StatsD:

```
stats.api-server.tracks.post.500 -> 93
```

In Prometheus the same data could be encoded like this (assuming three api-server instances):

```
api_server_http_requests_total{method="POST",handler="/tracks",status="500",instance="<sample1>"} -> 34
api_server_http_requests_total{method="POST",handler="/tracks",status="500",instance="<sample2>"} -> 28
api_server_http_requests_total{method="POST",handler="/tracks",status="500",instance="<sample3>"} -> 31
```

### Storage

Graphite stores time series data on local disk in the [Whisper](http://graphite.readthedocs.org/en/latest/whisper.html) format, an RRD-style database that expects samples to arrive at regular intervals. Every time series is stored in a separate file, and new samples overwrite old ones after a certain amount of time.

Prometheus also creates one local file per time series, but allows storing samples at arbitrary intervals as scrapes or rule evaluations occur. Since new samples are simply appended, old data may be kept arbitrarily long. Prometheus also works well for many short-lived, frequently changing sets of time series.

### Summary

Prometheus offers a richer data model and query language, in addition to being easier to run and integrate into your environment. If you want a clustered solution that can hold historical data long term, Graphite may be a better choice.

## Prometheus vs. InfluxDB

[InfluxDB](https://influxdata.com/) is an open-source time series database, with a commercial option for scaling and clustering. The InfluxDB project was released almost a year after Prometheus development began, so we were unable to consider it as an alternative at the time. Still, there are significant differences between Prometheus and InfluxDB, and both systems are geared towards slightly different use cases.
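With labels preserved as dimensions, the per-instance breakdown can then be filtered and aggregated directly in the query language. A PromQL sketch (reusing the hypothetical metric from the example above):

```
sum by (instance) (rate(api_server_http_requests_total{method="POST",status="500"}[5m]))
```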
### Scope

For a fair comparison, we must also consider [Kapacitor](https://github.com/influxdata/kapacitor) together with InfluxDB, as in combination they address the same problem space as Prometheus and the Alertmanager.

The same scope differences as in the case of [Graphite](#prometheus-vs-graphite) apply here for InfluxDB itself. In addition InfluxDB offers continuous queries, which are equivalent to Prometheus recording rules.

Kapacitor's scope is a combination of Prometheus recording rules, alerting rules, and the Alertmanager's notification functionality. Prometheus offers [a more powerful query language for graphing and alerting](https://www.robustperception.io/translating-between-monitoring-languages/). The Prometheus Alertmanager additionally offers grouping, deduplication and silencing functionality.

### Data model / storage

Like Prometheus, the InfluxDB data model has key-value pairs as labels, which are called tags. In addition, InfluxDB has a second level of labels called fields, which are more limited in use. InfluxDB supports timestamps with up to nanosecond resolution, and float64, int64, bool, and string data types. Prometheus, by contrast, supports the float64 data type with limited support for strings, and millisecond resolution timestamps.

InfluxDB uses a variant of a [log-structured merge tree for storage with a write ahead log](https://docs.influxdata.com/influxdb/v1.7/concepts/storage_engine/), sharded by time. This is much more suitable to event logging than Prometheus's append-only file per time series approach.

[Logs and Metrics and Graphs, Oh My!](https://grafana.com/blog/2016/01/05/logs-and-metrics-and-graphs-oh-my/) describes the differences between event logging and metrics recording.

### Architecture

Prometheus servers run independently of each other and only rely on their local storage for their core functionality: scraping, rule processing, and alerting. The open source version of InfluxDB is similar.

The commercial InfluxDB offering is, by design, a distributed storage cluster with storage and queries being handled by many nodes at once. This means that the commercial InfluxDB will be easier to scale horizontally, but it also means that you have to manage the complexity of a distributed storage system from the beginning. Prometheus will be simpler to run, but at some point you will need to shard servers explicitly along scalability boundaries like products, services, datacenters, or similar aspects. Independent servers (which can be run redundantly in parallel) may also give you better reliability and failure isolation.

Kapacitor's open-source release has no built-in distributed/redundant options for rules, alerting, or notifications. The open-source release of Kapacitor can be scaled via manual sharding by the user, similar to Prometheus itself.
Influx offers [Enterprise Kapacitor](https://docs.influxdata.com/enterprise_kapacitor), which supports an HA/redundant alerting system. Prometheus and the Alertmanager by contrast offer a fully open-source redundant option via running redundant replicas of Prometheus and using the Alertmanager's [High Availability](https://github.com/prometheus/alertmanager#high-availability) mode.

### Summary

There are many similarities between the systems. Both have labels (called tags in InfluxDB) to efficiently support multi-dimensional metrics. Both use basically the same data compression algorithms. Both have extensive integrations, including with each other. Both have hooks allowing you to extend them further, such as analyzing data in statistical tools or performing automated actions.

Where InfluxDB is better:

* If you're doing event logging.
* Commercial option offers clustering for InfluxDB, which is also better for long term data storage.
* Eventually consistent view of data between replicas.

Where Prometheus is better:

* If you're primarily doing metrics.
* More powerful query language, alerting, and notification functionality.
* Higher availability and uptime for graphing and alerting.

InfluxDB is maintained by a single commercial company following the open-core model, offering premium features like closed-source clustering, hosting and support. Prometheus is a [fully open source and independent project](/community/), maintained by a number of companies and individuals, some of whom also offer commercial services and support.

## Prometheus vs. OpenTSDB

[OpenTSDB](http://opentsdb.net/) is a distributed time series database based on [Hadoop](http://hadoop.apache.org/) and [HBase](http://hbase.apache.org/).

### Scope

The same scope differences as in the case of [Graphite](/docs/introduction/comparison/#prometheus-vs-graphite) apply here.
### Data model

OpenTSDB's data model is almost identical to Prometheus's: time series are identified by a set of arbitrary key-value pairs (OpenTSDB tags are Prometheus labels). All data for a metric is [stored together](http://opentsdb.net/docs/build/html/user_guide/writing/index.html#time-series-cardinality), limiting the cardinality of metrics. There are minor differences though: Prometheus allows arbitrary characters in label values, while OpenTSDB is more restrictive. OpenTSDB also lacks a full query language, only allowing simple aggregation and math via its API.

### Storage

[OpenTSDB](http://opentsdb.net/)'s storage is implemented on top of [Hadoop](http://hadoop.apache.org/) and [HBase](http://hbase.apache.org/). This means that it is easy to scale OpenTSDB horizontally, but you have to accept the overall complexity of running a Hadoop/HBase cluster from the beginning.

Prometheus will be simpler to run initially, but will require explicit sharding once the capacity of a single node is exceeded.

### Summary

Prometheus offers a much richer query language, can handle higher cardinality metrics, and forms part of a complete monitoring system. If you're already running Hadoop and value long term storage over these benefits, OpenTSDB is a good choice.

## Prometheus vs. Nagios

[Nagios](https://www.nagios.org/) is a monitoring system that originated in the 1990s as NetSaint.

### Scope

Nagios is primarily about alerting based on the exit codes of scripts. These are called "checks". There is silencing of individual alerts, however no grouping, routing or deduplication.

There are a variety of plugins. For example, piping the few kilobytes of perfData plugins are allowed to return [to a time series database such as Graphite](https://github.com/shawn-sterling/graphios) or using NRPE to [run checks on remote machines](https://exchange.nagios.org/directory/Addons/Monitoring-Agents/NRPE--2D-Nagios-Remote-Plugin-Executor/details).

### Data model

Nagios is host-based. Each host can have one or more services and each service can perform one check. There is no notion of labels or a query language.

### Storage

Nagios has no storage per se, beyond the current check state. There are plugins which can store data such as [for visualisation](https://docs.pnp4nagios.org/).

### Architecture

Nagios servers are standalone. All configuration of checks is via file.
### Summary

Nagios is suitable for basic monitoring of small and/or static systems where blackbox probing is sufficient. If you want to do whitebox monitoring, or have a dynamic or cloud based environment, then Prometheus is a good choice.

## Prometheus vs. Sensu

[Sensu](https://sensu.io) is an open source monitoring and observability pipeline with a commercial distribution which offers additional features for scalability. It can reuse existing Nagios plugins.

### Scope

Sensu is an observability pipeline that focuses on processing and alerting of observability data as a stream of [Events](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/). It provides an extensible framework for event [filtering](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-filter/), aggregation, [transformation](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-transform/), and [processing](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-process/) – including sending alerts to other systems and storing events in third-party systems. Sensu's event processing capabilities are similar in scope to Prometheus alerting rules and Alertmanager.

### Data model

Sensu [Events](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/) represent service health and/or [metrics](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/#metric-attributes) in a structured data format identified by an [entity](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-entities/entities/) name (e.g. server, cloud compute instance, container, or service), an event name, and optional [key-value metadata](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/#metadata-attributes) called "labels" or "annotations".
The Sensu Event payload may include one or more metric [`points`](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/#points-attributes), represented as a JSON object containing a `name`, `tags` (key/value pairs), `timestamp`, and `value` (always a float).

### Storage

Sensu stores current and recent event status information and real-time inventory data in an embedded database (etcd) or an external RDBMS (PostgreSQL).

### Architecture

All components of a Sensu deployment can be clustered for high availability and improved event-processing throughput.

### Summary

Sensu and Prometheus have a few capabilities in common, but they take very different approaches to monitoring. Both offer extensible discovery mechanisms for dynamic cloud-based environments and ephemeral compute platforms, though the underlying mechanisms are quite different. Both provide support for collecting multi-dimensional metrics via labels and annotations. Both have extensive integrations, and Sensu natively supports collecting metrics from all Prometheus exporters. Both are capable of forwarding observability data to third-party data platforms (e.g. event stores or TSDBs). Where Sensu and Prometheus differ the most is in their use cases.

Where Sensu is better:

- If you're collecting and processing hybrid observability data (including metrics _and/or_ events)
- If you're consolidating multiple monitoring tools and need support for metrics _and_ Nagios-style plugins or check scripts
- More powerful event-processing platform

Where Prometheus is better:

- If you're primarily collecting and evaluating metrics
- If you're monitoring homogeneous Kubernetes infrastructure (if 100% of the workloads you're monitoring are in K8s, Prometheus offers better K8s integration)
- More powerful query language, and built-in support for historical data analysis

Sensu is maintained by a single commercial company following the open-core business model, offering premium features like closed-source event correlation and aggregation, federation, and support. Prometheus is a fully open source and independent project, maintained by a number of companies and individuals, some of whom also offer commercial services and support.
Source: https://github.com/prometheus/docs/blob/main//docs/introduction/comparison.md
Prometheus LTS are selected releases of Prometheus that receive bugfixes for an extended period of time.

Every 6 weeks, a new Prometheus minor release cycle begins. After those 6 weeks, minor releases generally no longer receive bugfixes. If a user is impacted by a bug in a minor release, they often need to upgrade to the latest Prometheus release. Upgrading Prometheus should be straightforward thanks to our [API stability guarantees][stab]. However, there is a risk that new features and enhancements could also bring regressions, requiring another upgrade.

Prometheus LTS releases only receive bug, security, and documentation fixes, but over a time window of one year. The build toolchain will also be kept up-to-date. This allows companies that rely on Prometheus to limit the upgrade risks while still having a Prometheus server maintained by the community.

## List of LTS releases

| Release         | Date       | End of support |
| --------------- | ---------- | -------------- |
| Prometheus 2.37 | 2022-07-14 | 2023-07-31     |
| Prometheus 2.45 | 2023-06-23 | 2024-07-31     |
| Prometheus 2.53 | 2024-06-16 | 2025-07-31     |
| Prometheus 3.5  | 2025-07-14 | 2026-07-31     |

## Limitations of LTS support

Some features are excluded from LTS support:

- Things listed as unstable in our [API stability guarantees][stab].
- [Experimental features][fflag].
- OpenBSD support.

[stab]: https://prometheus.io/docs/prometheus/latest/stability/
[fflag]: https://prometheus.io/docs/prometheus/latest/feature_flags/
Source: https://github.com/prometheus/docs/blob/main//docs/introduction/release-cycle.md
### Alert

An alert is the outcome of an alerting rule in Prometheus that is actively firing. Alerts are sent from Prometheus to the Alertmanager.

### Alertmanager

The [Alertmanager](/docs/alerting/latest/overview/) takes in alerts, aggregates them into groups, de-duplicates, applies silences, throttles, and then sends out notifications to email, Pagerduty, Slack etc.

### Bridge

A bridge is a component that takes samples from a client library and exposes them to a non-Prometheus monitoring system. For example, the Python, Go, and Java clients can export metrics to Graphite.

### Client library

A client library is a library in some language (e.g. Go, Java, Python, Ruby) that makes it easy to directly instrument your code, write custom collectors to pull metrics from other systems and expose the metrics to Prometheus.

### Collector

A collector is a part of an exporter that represents a set of metrics. It may be a single metric if it is part of direct instrumentation, or many metrics if it is pulling metrics from another system.

### Direct instrumentation

Direct instrumentation is instrumentation added inline as part of the source code of a program, using a [client library](#client-library).

### Endpoint

A source of metrics that can be scraped, usually corresponding to a single process.

### Exporter

An exporter is a binary running alongside the application you want to obtain metrics from. The exporter exposes Prometheus metrics, commonly by converting metrics that are exposed in a non-Prometheus format into a format that Prometheus supports.

### Instance

An instance is a label that uniquely identifies a target in a job.

### Job

A collection of targets with the same purpose, for example monitoring a group of like processes replicated for scalability or reliability, is called a job.

### Mixin

A mixin is a reusable and extensible set of Prometheus alerts, recording rules, and Grafana dashboards for a specific component or system.
Mixins are typically packaged using [Jsonnet](https://jsonnet.org/) and can be combined to create comprehensive monitoring configurations. They enable standardized monitoring across similar infrastructure components.

### Notification

A notification represents a group of one or more alerts, and is sent by the Alertmanager to email, Pagerduty, Slack etc.

### Promdash

Promdash was a native dashboard builder for Prometheus. It has been deprecated and replaced by [Grafana](../visualization/grafana.md).

### Prometheus

Prometheus usually refers to the core binary of the Prometheus system. It may also refer to the Prometheus monitoring system as a whole.

### PromQL

[PromQL](/docs/prometheus/latest/querying/basics/) is the Prometheus Query Language. It allows for a wide range of operations including aggregation, slicing and dicing, prediction and joins.

### Pushgateway

The [Pushgateway](../instrumenting/pushing.md) persists the most recent push of metrics from batch jobs. This allows Prometheus to scrape their metrics after they have terminated.

### Recording Rules

Recording rules precompute frequently needed or computationally expensive expressions and save their results as a new set of time series.

### Remote Read

Remote read is a Prometheus feature that allows transparent reading of time series from other systems (such as long term storage) as part of queries.

### Remote Read Adapter

Not all systems directly support remote read. A remote read adapter sits between Prometheus and another system, converting time series requests and responses between them.

### Remote Read Endpoint

A remote read endpoint is what Prometheus talks to when doing a remote read.

### Remote Write

Remote write is a Prometheus feature that allows sending ingested samples on the fly to other systems, such as long term storage.

### Remote Write Adapter

Not all systems directly support remote write.
A remote write adapter sits between Prometheus and another system, converting the samples in the remote write into a format the other system can understand.

### Remote Write Endpoint

A remote write endpoint is what Prometheus talks to when doing a remote write.

### Sample

A sample is a single value at a point in time in a time series. In Prometheus, each sample consists of a float64 value and a millisecond-precision timestamp.

### Silence

A silence in the Alertmanager prevents alerts, with labels matching the silence, from being included in notifications.

### Target

A target is the definition of an object to scrape. For example, what labels to apply, any authentication required to connect, or other information that defines how the scrape will occur.

### Time Series

The Prometheus time series are streams of timestamped values belonging to the same metric and the same set of labeled dimensions. Prometheus stores all data as time series.

https://github.com/prometheus/docs/blob/main//docs/introduction/glossary.md
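To make the Sample and Time Series entries above concrete, here is a minimal sketch. This is not Prometheus code; the dictionary layout and `append_sample` helper are illustrative assumptions. A series is identified by its metric name plus label set, and holds an ordered list of (timestamp, value) samples.

```python
# Illustrative model only: a time series is keyed by metric name + sorted
# label pairs; each sample is a (millisecond timestamp, float value) pair.
series = {}

def append_sample(name, labels, timestamp_ms, value):
    """Append one sample to the series identified by name and labels."""
    series_id = (name, tuple(sorted(labels.items())))
    series.setdefault(series_id, []).append((timestamp_ms, float(value)))

# Two scrapes of the same target produce two samples in one series:
append_sample("promhttp_metric_handler_requests_total", {"code": "200"}, 1_700_000_000_000, 1027)
append_sample("promhttp_metric_handler_requests_total", {"code": "200"}, 1_700_000_015_000, 1031)
print(len(series))  # 1
```

Two distinct label sets on the same metric name would instead yield two series, which is exactly what the expression browser shows when you query a metric name.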
## General

### What is Prometheus?

Prometheus is an open-source systems monitoring and alerting toolkit with an active ecosystem. It is the only system directly supported by [Kubernetes](https://kubernetes.io/) and the de facto standard across the [cloud native ecosystem](https://landscape.cncf.io/). See the [overview](/docs/introduction/overview/).

### How does Prometheus compare against other monitoring systems?

See the [comparison](/docs/introduction/comparison/) page.

### What dependencies does Prometheus have?

The main Prometheus server runs standalone as a single monolithic binary and has no external dependencies.

#### Is this cloud native?

Yes. Cloud native is a flexible operating model, breaking up old service boundaries to allow for more flexible and scalable deployments. Prometheus's [service discovery](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) integrates with most tools and clouds. Its dimensional data model and its ability to scale into the tens of millions of active series allow it to monitor large cloud-native deployments. There are always trade-offs to make when running services, and Prometheus values reliably getting alerts out to humans above all else.

### Can Prometheus be made highly available?

Yes, run identical Prometheus servers on two or more separate machines. Identical alerts will be deduplicated by the [Alertmanager](https://github.com/prometheus/alertmanager). Alertmanager supports [high availability](https://github.com/prometheus/alertmanager#high-availability) by interconnecting multiple Alertmanager instances to build an Alertmanager cluster. Instances of a cluster communicate using a gossip protocol managed via [HashiCorp's Memberlist](https://github.com/hashicorp/memberlist) library.

### I was told Prometheus "doesn't scale".

This is often more of a marketing claim than anything else. A single instance of Prometheus can be more performant than some systems positioning themselves as long-term storage solutions for Prometheus. You can run Prometheus reliably with tens of millions of active series. If you need more than that, there are several options. [Scaling and Federating Prometheus](https://www.robustperception.io/scaling-and-federating-prometheus/) on the Robust Perception blog is a good starting point, as are the long term storage systems listed on our [integrations page](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).

### What language is Prometheus written in?

Most Prometheus components are written in Go. Some are also written in Java, Python, and Ruby.

### How stable are Prometheus features, storage formats, and APIs?

All repositories in the Prometheus GitHub organization that have reached version 1.0.0 broadly follow [semantic versioning](http://semver.org/). Breaking changes are indicated by increments of the major version. Exceptions are possible for experimental components, which are clearly marked as such in announcements. Even repositories that have not yet reached version 1.0.0 are, in general, quite stable. We aim for a proper release process and an eventual 1.0.0 release for each repository. In any case, breaking changes will be pointed out in release notes (marked by `[CHANGE]`) or communicated clearly for components that do not have formal releases yet.

### Why do you pull rather than push?

Pulling over HTTP offers a number of advantages:

* You can start extra monitoring instances as needed, e.g. on your laptop when developing changes.
* You can more easily and reliably tell if a target is down.
* You can manually go to a target and inspect its health with a web browser.

Overall, we believe that pulling is slightly better than pushing, but it should not be considered a major point when considering a monitoring system. For cases where you must push, we offer the [Pushgateway](/docs/instrumenting/pushing/).

### How to feed logs into Prometheus?

Short answer: Don't! Use something like [Grafana Loki](https://grafana.com/oss/loki/) or [OpenSearch](https://opensearch.org/) instead.

Longer answer: Prometheus is a system to collect and process metrics, not an event logging system. The Grafana blog post [Logs and Metrics and Graphs, Oh My!](https://grafana.com/blog/2016/01/05/logs-and-metrics-and-graphs-oh-my/) provides more details about the differences between logs and metrics. If you want to extract Prometheus metrics from application logs, Grafana Loki is designed for just that. See Loki's [metric queries](https://grafana.com/docs/loki/latest/logql/metric_queries/) documentation.

### Who wrote Prometheus?

Prometheus was
initially started privately by [Matt T. Proud](http://www.matttproud.com) and [Julius Volz](http://juliusv.com). The majority of its initial development was sponsored by [SoundCloud](https://soundcloud.com). It's now maintained and extended by a wide range of [companies](https://prometheus.devstats.cncf.io/d/5/companies-table?orgId=1) and [individuals](https://prometheus.io/governance).

### What license is Prometheus released under?

Prometheus is released under the [Apache 2.0](https://github.com/prometheus/prometheus/blob/main/LICENSE) license.

### What is the plural of Prometheus?

After [extensive research](https://youtu.be/B_CDeYrqxjQ), it has been determined that the correct plural of 'Prometheus' is 'Prometheis'. If you can not remember this, "Prometheus instances" is a good workaround.

### Can I reload Prometheus's configuration?

Yes, sending `SIGHUP` to the Prometheus process or an HTTP POST request to the `/-/reload` endpoint will reload and apply the configuration file. The various components attempt to handle failing changes gracefully.

### Can I send alerts?

Yes, with the [Alertmanager](https://github.com/prometheus/alertmanager). We support sending alerts through [email, various native integrations](https://prometheus.io/docs/alerting/latest/configuration/), and a [webhook system anyone can add integrations to](https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver).

### Can I create dashboards?

Yes, we recommend [Grafana](/docs/visualization/grafana/) for production usage. There are also [Console templates](/docs/visualization/consoles/).

### Can I change the timezone? Why is everything in UTC?

To avoid any kind of timezone confusion, especially when the so-called daylight saving time is involved, we decided to exclusively use Unix time internally and UTC for display purposes in all components of Prometheus. A carefully done timezone selection could be introduced into the UI. Contributions are welcome. See [issue #500](https://github.com/prometheus/prometheus/issues/500) for the current state of this effort.

## Instrumentation

### Which languages have instrumentation libraries?

There are a number of client libraries for instrumenting your services with Prometheus metrics. See the [client libraries](/docs/instrumenting/clientlibs/) documentation for details. If you are interested in contributing a client library for a new language, see the [exposition formats](/docs/instrumenting/exposition_formats/).

### Can I monitor machines?

Yes, the [Node Exporter](https://github.com/prometheus/node_exporter) exposes an extensive set of machine-level metrics on Linux and other Unix systems such as CPU usage, memory, disk utilization, filesystem fullness, and network bandwidth.

### Can I monitor network devices?

Yes, the [SNMP Exporter](https://github.com/prometheus/snmp_exporter) allows monitoring of devices that support SNMP. For industrial networks, there's also a [Modbus exporter](https://github.com/RichiH/modbus_exporter).

### Can I monitor batch jobs?

Yes, using the [Pushgateway](/docs/instrumenting/pushing/). See also the [best practices](/docs/practices/instrumentation/#batch-jobs) for monitoring batch jobs.

### What applications can Prometheus monitor out of the box?

See [the list of exporters and integrations](/docs/instrumenting/exporters/).

### Can I monitor JVM applications via JMX?

Yes, for applications that you cannot instrument directly with the Java client, you can use the [JMX Exporter](https://github.com/prometheus/jmx_exporter) either standalone or as a Java Agent.

### What is the performance impact of instrumentation?

Performance across client libraries and languages may vary. For Java, [benchmarks](https://github.com/prometheus/client_java/blob/main/benchmarks/README.md) indicate that incrementing a counter/gauge with the Java client will take 12-17ns, depending on contention. This is negligible for all but the most latency-critical code.

## Implementation

### Why are all sample values 64-bit floats?

We restrained ourselves to 64-bit floats to simplify the design. The [IEEE 754 double-precision binary floating-point format](http://en.wikipedia.org/wiki/Double-precision_floating-point_format) supports integer precision for values up to 2^53. Supporting native 64-bit integers would (only) help if you need integer precision above 2^53 but below 2^63. In principle, support for different sample value types (including some kind of big integer, supporting even more than 64 bit) could be implemented, but it is not a priority right now. A counter, even if incremented one million times per second, will only run into precision issues after over 285 years.
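The 2^53 limit is easy to demonstrate. The sketch below (illustrative, not Prometheus code) shows where float64 integer precision ends and rechecks the 285-year figure:

```python
# float64 represents every integer up to 2**53 exactly; beyond that,
# adjacent integers are no longer distinguishable.
limit = 2.0 ** 53
assert limit + 1 == limit            # 2**53 + 1 collapses back to 2**53
assert (limit - 1) + 1 == limit      # just below the limit, arithmetic stays exact

# A counter incremented one million times per second stays exact for:
years = int((2 ** 53 / 1_000_000) / (3600 * 24 * 365))
print(years)  # 285
```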
https://github.com/prometheus/docs/blob/main//docs/introduction/faq.md
See the [github.com/prometheus/proposals](https://github.com/prometheus/proposals) repository to see all the past and current proposals for the Prometheus Ecosystem. If you are interested in creating a new proposal, read our [proposal process](https://github.com/prometheus/proposals#proposal-process).

## Problem statements and exploratory documents

Sometimes we're looking even further into potential futures. The documents in this section are largely exploratory. They should be taken as informing our collective thoughts, not as anything concrete or specific.

| Document | Initial date |
|----------|--------------|
| [Prometheus is not feature complete](https://docs.google.com/document/d/1lEP7pGYM2-5GT9fAIDqrOecG86VRU8-1qAV8b6xZ29Q) | 2020-05 |
| [Thoughts about timestamps and durations in PromQL](https://docs.google.com/document/d/1jMeDsLvDfO92Qnry_JLAXalvMRzMSB1sBr9V7LolpYM) | 2020-10 |
| [Prometheus, OpenMetrics & OTLP](https://docs.google.com/document/d/1hn-u6WKLHxIsqYT1_u6eh94lyQeXrFaAouMshJcQFXs) | 2021-03 |
| [Prometheus Sparse Histograms and PromQL](https://docs.google.com/document/d/1ch6ru8GKg03N02jRjYriurt-CZqUVY09evPg6yKTA1s/edit) | 2021-10 |
| [Quoting Prometheus names](https://docs.google.com/document/d/1yFj5QSd1AgCYecZ9EJ8f2t4OgF2KBZgJYVde-uzVEtI/edit) | 2023-01 |

https://github.com/prometheus/docs/blob/main//docs/introduction/design-doc.md
Welcome to Prometheus! Prometheus is a monitoring platform that collects metrics from monitored targets by scraping metrics HTTP endpoints on these targets. This guide will show you how to install, configure and monitor your first resource with Prometheus. You'll download, install and run Prometheus. You'll also download and install an exporter, tools that expose time series data on hosts and services. Our first exporter will be Prometheus itself, which provides a wide variety of host-level metrics about memory usage, garbage collection, and more.

## Downloading Prometheus

[Download the latest release](/download) of Prometheus for your platform, then extract it:

```language-bash
tar xvfz prometheus-*.tar.gz
cd prometheus-*
```

The Prometheus server is a single binary called `prometheus` (or `prometheus.exe` on Microsoft Windows). We can run the binary and see help on its options by passing the `--help` flag.

```language-bash
./prometheus --help
usage: prometheus [<flags>]

The Prometheus monitoring server

. . .
```

Before starting Prometheus, let's configure it.

## Configuring Prometheus

Prometheus configuration is [YAML](https://yaml.org/). The Prometheus download comes with a sample configuration in a file called `prometheus.yml` that is a good place to get started. We've stripped out most of the comments in the example file to make it more succinct (comments are the lines prefixed with a `#`).

```language-yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # - "first.rules"
  # - "second.rules"

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
```

There are three blocks of configuration in the example configuration file: `global`, `rule_files`, and `scrape_configs`. The `global` block controls the Prometheus server's global configuration. We have two options present. The first, `scrape_interval`, controls how often Prometheus will scrape targets. You can override this for individual targets. In this case the global setting is to scrape every 15 seconds. The `evaluation_interval` option controls how often Prometheus will evaluate rules. Prometheus uses rules to create new time series and to generate alerts.

The `rule_files` block specifies the location of any rules we want the Prometheus server to load. For now we've got no rules.

The last block, `scrape_configs`, controls what resources Prometheus monitors. Since Prometheus also exposes data about itself as an HTTP endpoint it can scrape and monitor its own health. In the default configuration there is a single job, called `prometheus`, which scrapes the time series data exposed by the Prometheus server. The job contains a single, statically configured, target, the `localhost` on port `9090`. Prometheus expects metrics to be available on targets on a path of `/metrics`. So this default job is scraping via the URL: http://localhost:9090/metrics.

The time series data returned will detail the state and performance of the Prometheus server. For a complete specification of configuration options, see the [configuration documentation](/docs/operating/configuration).

## Starting Prometheus

To start Prometheus with our newly created configuration file, change to the directory containing the Prometheus binary and run:

```language-bash
./prometheus --config.file=prometheus.yml
```

Prometheus should start up. You should also be able to browse to a status page about itself at http://localhost:9090. Give it about 30 seconds to collect data about itself from its own HTTP metrics endpoint. You can also verify that Prometheus is serving metrics about itself by navigating to its own metrics endpoint: http://localhost:9090/metrics.

## Using the expression browser

Let us try looking at some data that Prometheus has collected about itself. To use Prometheus's built-in expression browser, navigate to http://localhost:9090/graph and choose the "Table" view within the "Graph" tab. As you can gather from http://localhost:9090/metrics, one metric that Prometheus exports about itself is called `promhttp_metric_handler_requests_total` (the total number of `/metrics` requests the Prometheus server has served). Go ahead and enter this into the expression console:

```
promhttp_metric_handler_requests_total
```

This should
return a number of different time series (along with the latest value recorded for each), all with the metric name `promhttp_metric_handler_requests_total`, but with different labels. These labels designate different request statuses. If we were only interested in requests that resulted in HTTP code `200`, we could use this query to retrieve that information:

```
promhttp_metric_handler_requests_total{code="200"}
```

To count the number of returned time series, you could write:

```
count(promhttp_metric_handler_requests_total)
```

For more about the expression language, see the [expression language documentation](/docs/querying/basics/).

## Using the graphing interface

To graph expressions, navigate to http://localhost:9090/graph and use the "Graph" tab. For example, enter the following expression to graph the per-second HTTP request rate returning status code 200 happening in the self-scraped Prometheus:

```
rate(promhttp_metric_handler_requests_total{code="200"}[1m])
```

You can experiment with the graph range parameters and other settings.

## Monitoring other targets

Collecting metrics from Prometheus alone isn't a great representation of Prometheus' capabilities. To get a better sense of what Prometheus can do, we recommend exploring documentation about other exporters. The [Monitoring Linux or macOS host metrics using a node exporter](/docs/guides/node-exporter) guide is a good place to start.
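As a rough illustration of what `rate()` computes, the sketch below derives a per-second rate from two counter samples. The `simple_rate` helper is our own illustrative name, and it deliberately skips the counter-reset handling and range extrapolation that PromQL's `rate()` performs:

```python
def simple_rate(samples):
    """Per-second increase between the first and last sample of a counter.

    `samples` is a list of (timestamp_ms, value) pairs. Unlike PromQL's
    rate(), this does no counter-reset detection or extrapolation.
    """
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / ((t1 - t0) / 1000.0)

# Two scrapes 15 seconds apart in which the counter grew by 30:
print(simple_rate([(0, 100.0), (15_000, 130.0)]))  # 2.0 requests per second
```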
## Summary In this guide, you installed Prometheus, configured a Prometheus instance to monitor resources, and learned some basics of working with time series data in Prometheus' expression browser. To continue learning about Prometheus, check out the [Overview](/docs/introduction/overview) for some ideas about what to explore next.
https://github.com/prometheus/docs/blob/main//docs/introduction/first_steps.md
The following is only a selection of some of the major features we plan to implement in the near future. To get a more complete overview of planned features and current work, see the issue trackers for the various repositories, for example, the [Prometheus server](https://github.com/prometheus/prometheus/issues).

### Server-side metric metadata support

At this time, metric types and other metadata are only used in the client libraries and in the exposition format, but not persisted or utilized in the Prometheus server. We plan on making use of this metadata in the future. The first step is to aggregate this data in-memory in Prometheus and provide it via an experimental API endpoint.

### Adopt OpenMetrics

The OpenMetrics working group is developing a new standard for metric exposition. We plan to support this format in our client libraries and Prometheus itself.

### Retroactive rule evaluations

Add support for retroactive rule evaluations making use of backfill.

### TLS and authentication in HTTP serving endpoints

TLS and authentication are currently being rolled out to the Prometheus, Alertmanager, and the official exporters. Adding this support will make it easier for people to deploy Prometheus components securely without requiring a reverse proxy to add those features externally.

### Support the Ecosystem

Prometheus has a range of client libraries and exporters. There are always more languages that could be supported, or systems that would be useful to export metrics from. We will support the ecosystem in creating and expanding these.
https://github.com/prometheus/docs/blob/main//docs/introduction/roadmap.md
There is a [subreddit](https://www.reddit.com/r/prometheusmonitoring) collecting all Prometheus-related resources on the internet. The following selection of resources are particularly useful to get started with Prometheus. [Awesome Prometheus](https://github.com/roaldnefs/awesome-prometheus) contains a more comprehensive community-maintained list of resources.

## Blogs

* This site has its own [blog](/blog/).
* [SoundCloud's blog post announcing Prometheus](https://developers.soundcloud.com/blog/prometheus-monitoring-at-soundcloud) – a more elaborate overview than the one given on this site.
* Prometheus-related posts on the [Robust Perception blog](https://www.robustperception.io/tag/prometheus/).

## Tutorials

* [Instructions and example code for a Prometheus workshop](https://github.com/juliusv/prometheus_workshop).
* [How To Install Prometheus using Docker on Ubuntu 14.04](https://www.digitalocean.com/community/tutorials/how-to-install-prometheus-using-docker-on-ubuntu-14-04).

## Podcasts and interviews

* [Prometheus on FLOSS Weekly 357](https://twit.tv/shows/floss-weekly/episodes/357) – Julius Volz on the [FLOSS Weekly TWiT.tv](https://twit.tv/shows/floss-weekly/) show.
* [Prometheus and Service Monitoring](https://changelog.com/podcast/168) – Julius Volz on the [Changelog](https://changelog.com/) podcast.

## Recorded talks

* [Prometheus: A Next-Generation Monitoring System](https://www.usenix.org/conference/srecon15europe/program/presentation/rabenstein) – Julius Volz and Björn Rabenstein at SREcon15 Europe, Dublin.
* [Prometheus: A Next-Generation Monitoring System](https://www.youtube.com/watch?v=cwRmXqXKGtk) – Brian Brazil at FOSDEM 2016 ([slides](http://www.slideshare.net/brianbrazil/prometheus-a-next-generation-monitoring-system-fosdem-2016)).
* [What is your application doing right now?](http://youtu.be/Z0LlilNpX1U) – Matthias Gruter, Transmode, at DevOps Stockholm Meetup.
* [Prometheus workshop](https://vimeo.com/131581353) – Jamie Wilkinson at Monitorama PDX 2015 ([slides](https://docs.google.com/presentation/d/1X1rKozAUuF2MVc1YXElFWq9wkcWv3Axdldl8LOH9Vik/edit)).
* [Monitoring Hadoop with Prometheus](https://www.youtube.com/watch?v=qs2sqOLNGtw) – Brian Brazil at the Hadoop User Group Ireland ([slides](http://www.slideshare.net/brianbrazil/monitoring-hadoop-with-prometheus-hadoop-user-group-ireland-december-2015)).
* In German: [Monitoring mit Prometheus](https://media.ccc.de/v/eh16-43-monitoring_mit_prometheus#video&t=2804) – Michael Stapelberg at [Easterhegg 2016](https://eh16.easterhegg.eu/).
* In German: [Prometheus in der Praxis](https://media.ccc.de/v/MRMCD16-7754-prometheus_in_der_praxis) – Jonas Große Sundrup at [MRMCD 2016](https://2016.mrmcd.net/)

## Presentation slides

### General

* [Prometheus Overview](http://www.slideshare.net/brianbrazil/prometheus-overview) – by Brian Brazil.
* [Systems Monitoring with Prometheus](http://www.slideshare.net/brianbrazil/devops-ireland-systems-monitoring-with-prometheus) – Brian Brazil at Devops Ireland Meetup, Dublin.
* [OMG! Prometheus](https://www.dropbox.com/s/0l7kxhjqjbabtb0/prometheus%20site-ops%20preso.pdf?dl=0) – Benjamin Staffin, Fitbit Site Operations, explains the case for Prometheus to his team.

### Docker

* [Prometheus and Docker](http://www.slideshare.net/brianbrazil/prometheus-and-docker-docker-galway-november-2015) – Brian Brazil at Docker Galway.

### Python

* [Better Monitoring for Python](http://www.slideshare.net/brianbrazil/better-monitoring-for-python-inclusive-monitoring-with-prometheus-pycon-ireland-lightning-talk) – Brian Brazil at Pycon Ireland.
* [Monitoring your Python with Prometheus](http://www.slideshare.net/brianbrazil/python-ireland-monitoring-your-python-with-prometheus) – Brian Brazil at Python Ireland Meetup, Dublin.
https://github.com/prometheus/docs/blob/main//docs/introduction/media.md
Prometheus has the ability to log all the queries run by the engine to a log file, as of 2.16.0. This guide demonstrates how to use that log file, which fields it contains, and provides advanced tips about how to operate the log file.

## Enable the query log

The query log can be toggled at runtime. It can therefore be activated when you want to investigate slowness or high load on your Prometheus instance. To enable or disable the query log, two steps are needed:

1. Adapt the configuration to add or remove the query log configuration.
1. Reload the Prometheus server configuration.

### Logging all the queries to a file

This example demonstrates how to log all the queries to a file called `/prometheus/query.log`. We will assume that `/prometheus` is the data directory and that Prometheus has write access to it.

First, adapt the `prometheus.yml` configuration file:

```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  query_log_file: /prometheus/query.log

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
```

Then, [reload](/docs/prometheus/latest/management_api/#reload) the Prometheus configuration:

```shell
$ curl -X POST http://127.0.0.1:9090/-/reload
```

Or, if Prometheus is not launched with `--web.enable-lifecycle`, and you're not running on Windows, you can trigger the reload by sending a SIGHUP to the Prometheus process.

The file `/prometheus/query.log` should now exist and all the queries will be logged to that file. To disable the query log, repeat the operation but remove `query_log_file` from the configuration.

## Verifying if the query log is enabled

Prometheus conveniently exposes metrics that indicate if the query log is enabled and working:

```
# HELP prometheus_engine_query_log_enabled State of the query log.
# TYPE prometheus_engine_query_log_enabled gauge
prometheus_engine_query_log_enabled 0
# HELP prometheus_engine_query_log_failures_total The number of query log failures.
# TYPE prometheus_engine_query_log_failures_total counter
prometheus_engine_query_log_failures_total 0
```

The first metric, `prometheus_engine_query_log_enabled`, is set to 1 if the query log is enabled, and 0 otherwise. The second one, `prometheus_engine_query_log_failures_total`, indicates the number of queries that could not be logged.

## Format of the query log

The query log is a JSON-formatted log. Here is an overview of the fields present for a query:

```
{
  "params": {
    "end": "2020-02-08T14:59:50.368Z",
    "query": "up == 0",
    "start": "2020-02-08T13:59:50.368Z",
    "step": 5
  },
  "stats": {
    "timings": {
      "evalTotalTime": 0.000447452,
      "execQueueTime": 7.599e-06,
      "execTotalTime": 0.000461232,
      "innerEvalTime": 0.000427033,
      "queryPreparationTime": 1.4177e-05,
      "resultSortTime": 6.48e-07
    }
  },
  "ts": "2020-02-08T14:59:50.387Z"
}
```

- `params`: The query. The start and end timestamp, the step and the actual query statement.
- `stats`: Statistics. Currently, it contains internal engine timers.
- `ts`: The timestamp when the query ended.

Additionally, depending on what triggered the request, you will have additional fields in the JSON lines.

### API Queries and consoles

HTTP requests contain the client IP, the method, and the path:

```
{
  "httpRequest": {
    "clientIP": "127.0.0.1",
    "method": "GET",
    "path": "/api/v1/query_range"
  }
}
```

The path will contain the web prefix if it is set, and can also point to a console. The client IP is the network IP address and does not take into consideration headers like `X-Forwarded-For`. If you wish to log the original caller behind a proxy, you need to do so in the proxy itself.

### Recording rules and alerts

Recording rules and alerts contain a ruleGroup element which contains the path of the file and the name of the group:

```
{
  "ruleGroup": {
    "file": "rules.yml",
    "name": "partners"
  }
}
```

## Rotating the query log

Prometheus will not rotate the query log itself. Instead, you can use external tools to do so.
One of those tools is logrotate. It is enabled by default on most Linux distributions. Here is an example of file you can add as `/etc/logrotate.d/prometheus`: ```
/prometheus/query.log {
    daily
    rotate 7
    compress
    delaycompress
    postrotate
        killall -HUP prometheus
    endscript
}
```

That will rotate your file daily and keep one week of history.
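Since each line of the query log is a self-contained JSON document, it is easy to post-process. Below is a minimal Python sketch (the threshold and the idea of filtering on `execTotalTime` are illustrative assumptions, not part of Prometheus itself) that reads query log lines and reports slow queries:

```python
import json

def slow_queries(lines, threshold_seconds=0.1):
    """Yield (query, execTotalTime) for logged queries slower than the threshold."""
    for line in lines:
        record = json.loads(line)
        timings = record.get("stats", {}).get("timings", {})
        total = timings.get("execTotalTime", 0.0)
        if total > threshold_seconds:
            yield record["params"].get("query", "<unknown>"), total

if __name__ == "__main__":
    # Typically you would iterate over open("/prometheus/query.log") instead.
    sample = '{"params":{"query":"up == 0","step":5},"stats":{"timings":{"execTotalTime":0.25}},"ts":"2020-02-08T14:59:50.387Z"}'
    for query, seconds in slow_queries([sample], threshold_seconds=0.1):
        print(f"{seconds:.3f}s  {query}")
```

Because the log is line-oriented JSON, the same approach works with `jq` or any log pipeline.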
Prometheus supports [basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) (aka "basic auth") for connections to the Prometheus [expression browser](/docs/visualization/browser) and [HTTP API](/docs/prometheus/latest/querying/api).

NOTE: This tutorial covers basic auth connections *to* Prometheus instances. Basic auth is also supported for connections *from* Prometheus instances to [scrape targets](/docs/prometheus/latest/configuration/configuration/#scrape_config).

## Hashing a password

Let's say that you want to require a username and password from all users accessing the Prometheus instance. For this example, use `admin` as the username and choose any password you'd like.

First, generate a [bcrypt](https://en.wikipedia.org/wiki/Bcrypt) hash of the password. To generate a hashed password, we will use python3-bcrypt. Install it by running `apt install python3-bcrypt`, assuming you are running a Debian-like distribution. Other alternatives exist to generate hashed passwords; for testing you can also use [bcrypt generators on the web](https://bcrypt-generator.com/).

Here is a Python script which uses python3-bcrypt to prompt for a password and hash it:

```python
import getpass
import bcrypt

password = getpass.getpass("password: ")
hashed_password = bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())
print(hashed_password.decode())
```

Save that script as `gen-pass.py` and run it:

```shell
$ python3 gen-pass.py
```

That should prompt you for a password:

```
password:
$2b$12$hNf2lSsxfm0.i4a.1kVpSOVyBCfIB51VRjgBUyv6kdnyTlgWj81Ay
```

In this example, "test" was used as the password. Save that hashed password somewhere; we will use it in the next steps!
## Creating web.yml

Let's create a `web.yml` file ([documentation](https://prometheus.io/docs/prometheus/latest/configuration/https/)) with the following content:

```yaml
basic_auth_users:
    admin: $2b$12$hNf2lSsxfm0.i4a.1kVpSOVyBCfIB51VRjgBUyv6kdnyTlgWj81Ay
```

You can validate that file with `promtool check web-config web.yml`:

```shell
$ promtool check web-config web.yml
web.yml SUCCESS
```

You can add multiple users to the file.

## Launching Prometheus

You can launch Prometheus with the web configuration file as follows:

```shell
$ prometheus --web.config.file=web.yml
```

## Testing

You can use cURL to interact with your setup. Try this request:

```bash
curl --head http://localhost:9090/graph
```

This will return a `401 Unauthorized` response because you've failed to supply a valid username and password.

To successfully access Prometheus endpoints using basic auth, for example the `/metrics` endpoint, supply the proper username using the `-u` flag and supply the password when prompted:

```bash
curl -u admin http://localhost:9090/metrics
Enter host password for user 'admin':
```

That should return Prometheus metrics output, which should look something like this:

```
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0.0001343
go_gc_duration_seconds{quantile="0.25"} 0.0002032
go_gc_duration_seconds{quantile="0.5"} 0.0004485
...
```

## Summary

In this guide, you stored a username and a hashed password in a `web.yml` file, then launched Prometheus with the parameter required to use the credentials in that file to authenticate users accessing Prometheus' HTTP endpoints.
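As a complement to the curl commands above, the same authenticated call can be made from Python's standard library. Basic auth is simply an `Authorization: Basic base64(username:password)` header; the URL and the `admin`/`test` credentials below mirror this guide's example and are otherwise assumptions:

```python
import base64
import urllib.request

def basic_auth_request(url, username, password):
    """Return a urllib Request carrying a Basic auth Authorization header."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    request = urllib.request.Request(url)
    request.add_header("Authorization", f"Basic {token}")
    return request

req = basic_auth_request("http://localhost:9090/metrics", "admin", "test")
# urllib.request.urlopen(req) would then perform the authenticated call.
print(req.get_header("Authorization"))  # → Basic YWRtaW46dGVzdA==
```

Note that basic auth sends the credentials base64-encoded, not encrypted, which is why it is usually combined with TLS.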
This guide will introduce you to the multi-target exporter pattern. To achieve this we will:

* describe the multi-target exporter pattern and why it is used,
* run the [blackbox](https://github.com/prometheus/blackbox_exporter) exporter as an example of the pattern,
* configure a custom query module for the blackbox exporter,
* let the blackbox exporter run basic metric queries against the Prometheus [website](https://prometheus.io),
* examine a popular pattern of configuring Prometheus to scrape exporters using relabeling.

## The multi-target exporter pattern

By multi-target [exporter](/docs/instrumenting/exporters/) pattern we refer to a specific design, in which:

* the exporter will get the target’s metrics via a network protocol.
* the exporter does not have to run on the machine the metrics are taken from.
* the exporter gets the targets and a query config string as parameters of Prometheus’ GET request.
* the exporter starts the scrape when it receives Prometheus’ GET request and returns the results once it is done scraping.
* the exporter can query multiple targets.

This pattern is only used for certain exporters, such as the [blackbox](https://github.com/prometheus/blackbox_exporter) and the [SNMP exporter](https://github.com/prometheus/snmp_exporter). The reason is that we either can’t run an exporter on the targets, e.g. network gear speaking SNMP, or that we are explicitly interested in the distance, e.g. latency and reachability of a website from a specific point outside of our network, a common use case for the [blackbox](https://github.com/prometheus/blackbox_exporter) exporter.

## Running multi-target exporters

Multi-target exporters are flexible regarding their environment and can be run in many ways: as regular programs, in containers, as background services, on bare metal, or on virtual machines. Because they are queried and do query over the network, they need appropriately open ports. Otherwise they are frugal.
Now let’s try it out for yourself! Use [Docker](https://www.docker.com/) to start a blackbox exporter container by running this in a terminal. Depending on your system configuration you might need to prepend the command with a `sudo`:

```bash
docker run -p 9115:9115 prom/blackbox-exporter
```

You should see a few log lines and if everything went well the last one should report `msg="Listening on address"` as seen here:

```
level=info ts=2018-10-17T15:41:35.4997596Z caller=main.go:324 msg="Listening on address" address=:9115
```

## Basic querying of multi-target exporters

There are two ways of querying:

1. Querying the exporter itself. It has its own metrics, usually available at `/metrics`.
1. Querying the exporter to scrape another target. Usually available at a "descriptive" endpoint, e.g. `/probe`. This is likely what you are primarily interested in when using multi-target exporters.

You can manually try the first query type with curl in another terminal or use this [link](http://localhost:9115/metrics):

```bash
curl 'localhost:9115/metrics'
```

The response should be something like this:

```
# HELP blackbox_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which blackbox_exporter was built.
# TYPE blackbox_exporter_build_info gauge
blackbox_exporter_build_info{branch="HEAD",goversion="go1.10",revision="4a22506cf0cf139d9b2f9cde099f0012d9fcabde",version="0.12.0"} 1
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 9
[…]
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.05
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 7
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 7.8848e+06
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.54115492874e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.5609856e+07
```

Those are metrics in the Prometheus [format](/docs/instrumenting/exposition_formats/#text-format-example). They come from the exporter’s [instrumentation](/docs/practices/instrumentation/) and tell us about the state of the exporter itself while it is running. This is called whitebox monitoring and is very useful in daily ops practice. If you are curious, try out our guide on how to [instrument your own applications](https://prometheus.io/docs/guides/go-application/).

For the second type of querying we need to provide a target and module as parameters in the HTTP GET request. The target is a URI or IP and the module must be defined in the exporter’s configuration. The blackbox exporter container comes with a meaningful default configuration. We will use the target `prometheus.io` and the predefined module `http_2xx`. It tells the exporter to make a GET request like a browser would if you go to `prometheus.io` and to expect a [200 OK](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#2xx_Success) response.
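When scripting against a multi-target exporter, the probe URL is just the exporter's `/probe` endpoint with `target` and `module` query parameters. A small Python sketch (the host and port match the Docker example above; the helper name is an illustration):

```python
from urllib.parse import urlencode

def probe_url(exporter, target, module):
    """Build the /probe URL a client (or Prometheus) would request."""
    return f"http://{exporter}/probe?{urlencode({'target': target, 'module': module})}"

print(probe_url("localhost:9115", "prometheus.io", "http_2xx"))
# → http://localhost:9115/probe?target=prometheus.io&module=http_2xx
```

Using `urlencode` rather than string concatenation matters once targets contain characters like `://` that must be percent-encoded.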
You can now tell your blackbox exporter to query `prometheus.io` in the terminal with curl:

```bash
curl 'localhost:9115/probe?target=prometheus.io&module=http_2xx'
```

This will return a lot of metrics:

```
# HELP probe_dns_lookup_time_seconds Returns the time taken for probe dns lookup in seconds
# TYPE probe_dns_lookup_time_seconds gauge
probe_dns_lookup_time_seconds 0.061087943
# HELP probe_duration_seconds Returns how long the probe took to complete in seconds
# TYPE probe_duration_seconds gauge
probe_duration_seconds 0.065580871
# HELP probe_failed_due_to_regex Indicates if probe failed due to regex
# TYPE probe_failed_due_to_regex gauge
probe_failed_due_to_regex 0
# HELP probe_http_content_length Length of http content response
# TYPE probe_http_content_length gauge
probe_http_content_length 0
# HELP probe_http_duration_seconds Duration of http request by phase, summed over all redirects
# TYPE probe_http_duration_seconds gauge
probe_http_duration_seconds{phase="connect"} 0
probe_http_duration_seconds{phase="processing"} 0
probe_http_duration_seconds{phase="resolve"} 0.061087943
probe_http_duration_seconds{phase="tls"} 0
probe_http_duration_seconds{phase="transfer"} 0
# HELP probe_http_redirects The number of redirects
# TYPE probe_http_redirects gauge
probe_http_redirects 0
# HELP probe_http_ssl Indicates if SSL was used for the final redirect
# TYPE probe_http_ssl gauge
probe_http_ssl 0
# HELP probe_http_status_code Response HTTP status code
# TYPE probe_http_status_code gauge
probe_http_status_code 0
# HELP probe_http_version Returns the version of HTTP of the probe response
# TYPE probe_http_version gauge
probe_http_version 0
# HELP probe_ip_protocol Specifies whether probe ip protocol is IP4 or IP6
# TYPE probe_ip_protocol gauge
probe_ip_protocol 6
# HELP probe_success Displays whether or not the probe was a success
# TYPE probe_success gauge
probe_success 0
```

Notice that almost all metrics have a value of `0`. The last one reads `probe_success 0`. This means the prober could not successfully reach `prometheus.io`. The reason is hidden in the metric `probe_ip_protocol` with the value `6`. By default the prober uses [IPv6](https://en.wikipedia.org/wiki/IPv6) until told otherwise. But the Docker daemon blocks IPv6 until told otherwise. Hence our blackbox exporter running in a Docker container can’t connect via IPv6.

We could now either tell Docker to allow IPv6 or tell the blackbox exporter to use IPv4. In the real world both can make sense and, as so often, the answer to the question "what is to be done?" is "it depends". Because this is an exporter guide we will change the exporter and take the opportunity to configure a custom module.

## Configuring modules

The modules are predefined in a file inside the docker container called `config.yml` which is a copy of [blackbox.yml](https://github.com/prometheus/blackbox_exporter/blob/master/blackbox.yml) in the github repo. We will copy this file, [adapt](https://github.com/prometheus/blackbox_exporter/blob/master/CONFIGURATION.md) it to our own needs and tell the exporter to use our config file instead of the one included in the container.

First download the file using curl
or your browser:

```bash
curl -o blackbox.yml https://raw.githubusercontent.com/prometheus/blackbox_exporter/master/blackbox.yml
```

Open it in an editor. The first few lines look like this:

```yaml
modules:
  http_2xx:
    prober: http
  http_post_2xx:
    prober: http
    http:
      method: POST
```

[YAML](https://en.wikipedia.org/wiki/YAML) uses whitespace indentation to express hierarchy, so you can recognise that two `modules` named `http_2xx` and `http_post_2xx` are defined, that they both use the prober `http`, and that for one of them the method value is explicitly set to `POST`.

You will now change the module `http_2xx` by setting the `preferred_ip_protocol` of the prober `http` explicitly to the string `ip4`:

```yaml
modules:
  http_2xx:
    prober: http
    http:
      preferred_ip_protocol: "ip4"
  http_post_2xx:
    prober: http
    http:
      method: POST
```

If you want to know more about the available probers and options check out the [documentation](https://github.com/prometheus/blackbox_exporter/blob/master/CONFIGURATION.md).

Now we need to tell the blackbox exporter to use our freshly changed file. You can do that with the flag `--config.file="blackbox.yml"`. But because we are using Docker, we first must make this file [available](https://docs.docker.com/storage/bind-mounts/) inside the container using the `--mount` command.

NOTE: If you are using macOS you first need to allow the Docker daemon to access the directory in which your `blackbox.yml` is. You can do that by clicking on the little Docker whale in the menu bar and then on `Preferences`->`File Sharing`->`+`.
Afterwards press `Apply & Restart`.

First you stop the old container by changing into its terminal and pressing `ctrl+c`. Make sure you are in the directory containing your `blackbox.yml`. Then you run this command. It is long, but we will explain it:

```bash
docker \
  run -p 9115:9115 \
  --mount type=bind,source="$(pwd)"/blackbox.yml,target=/blackbox.yml,readonly \
  prom/blackbox-exporter \
  --config.file="/blackbox.yml"
```

With this command, you told `docker` to:

1. `run` a container with the port `9115` outside the container mapped to the port `9115` inside of the container.
1. `mount` from your current directory (`$(pwd)` stands for print working directory) the file `blackbox.yml` into `/blackbox.yml` in `readonly` mode.
1. use the image `prom/blackbox-exporter` from [Docker hub](https://hub.docker.com/r/prom/blackbox-exporter/).
1. run the blackbox-exporter with the flag `--config.file` telling it to use `/blackbox.yml` as config file.

If everything is correct, you should see something like this:

```
level=info ts=2018-10-19T12:40:51.650462756Z caller=main.go:213 msg="Starting blackbox_exporter" version="(version=0.12.0, branch=HEAD, revision=4a22506cf0cf139d9b2f9cde099f0012d9fcabde)"
level=info ts=2018-10-19T12:40:51.653357722Z caller=main.go:220 msg="Loaded config file"
level=info ts=2018-10-19T12:40:51.65349635Z caller=main.go:324 msg="Listening on address" address=:9115
```

Now you can try our new IPv4-using module `http_2xx` in a terminal:

```bash
curl 'localhost:9115/probe?target=prometheus.io&module=http_2xx'
```

Which should return Prometheus metrics like this:

```
# HELP probe_dns_lookup_time_seconds Returns the time taken for probe dns lookup in seconds
# TYPE probe_dns_lookup_time_seconds gauge
probe_dns_lookup_time_seconds 0.02679421
# HELP probe_duration_seconds Returns how long the probe took to complete in seconds
# TYPE probe_duration_seconds gauge
probe_duration_seconds 0.461619124
# HELP probe_failed_due_to_regex Indicates if probe failed due to regex
# TYPE probe_failed_due_to_regex gauge
probe_failed_due_to_regex 0
# HELP probe_http_content_length Length of http content response
# TYPE probe_http_content_length gauge
probe_http_content_length -1
# HELP probe_http_duration_seconds Duration of http request by phase, summed over all redirects
# TYPE probe_http_duration_seconds gauge
probe_http_duration_seconds{phase="connect"} 0.062076202999999996
probe_http_duration_seconds{phase="processing"} 0.23481845699999998
probe_http_duration_seconds{phase="resolve"} 0.029594103
probe_http_duration_seconds{phase="tls"} 0.163420078
probe_http_duration_seconds{phase="transfer"} 0.002243199
# HELP probe_http_redirects The number of redirects
# TYPE probe_http_redirects gauge
probe_http_redirects 1
# HELP probe_http_ssl Indicates if SSL was used for the final redirect
# TYPE probe_http_ssl gauge
probe_http_ssl 1
# HELP probe_http_status_code Response HTTP status code
# TYPE probe_http_status_code gauge
probe_http_status_code 200
# HELP probe_http_uncompressed_body_length Length of uncompressed response body
# TYPE probe_http_uncompressed_body_length gauge
probe_http_uncompressed_body_length 14516
# HELP probe_http_version Returns the version
of HTTP of the probe response
# TYPE probe_http_version gauge
probe_http_version 1.1
# HELP probe_ip_protocol Specifies whether probe ip protocol is IP4 or IP6
# TYPE probe_ip_protocol gauge
probe_ip_protocol 4
# HELP probe_ssl_earliest_cert_expiry Returns earliest SSL cert expiry in unixtime
# TYPE probe_ssl_earliest_cert_expiry gauge
probe_ssl_earliest_cert_expiry 1.581897599e+09
# HELP probe_success Displays whether or not the probe was a success
# TYPE probe_success gauge
probe_success 1
# HELP probe_tls_version_info Contains the TLS version used
# TYPE probe_tls_version_info gauge
probe_tls_version_info{version="TLS 1.3"} 1
```

You can see that the probe was successful and get many useful metrics, like latency by phase, status code, ssl status or certificate expiry in [Unix time](https://en.wikipedia.org/wiki/Unix_time).

The blackbox exporter also offers a tiny web interface at [localhost:9115](http://localhost:9115) for you to check out the last few probes, the loaded config and debug information. It even offers a direct link to probe `prometheus.io`. Handy if you are wondering why something does not work.

## Querying multi-target exporters with Prometheus

So far, so good. Congratulate yourself. The blackbox exporter works and you can manually tell it to query a remote target. You are almost there. Now you need to tell Prometheus to do the queries for us. Below you find a minimal Prometheus config.
It is telling Prometheus to scrape the exporter itself as we did [before](#query-exporter) using `curl 'localhost:9115/metrics'`:

NOTE: If you use Docker for Mac or Docker for Windows, you can’t use `localhost:9115` in the last line, but must use `host.docker.internal:9115`. This has to do with the virtual machines used to implement Docker on those operating systems. You should not use this in production.

`prometheus.yml` for Linux:

```yaml
global:
  scrape_interval: 5s
scrape_configs:
- job_name: blackbox # To get metrics about the exporter itself
  metrics_path: /metrics
  static_configs:
    - targets:
      - localhost:9115
```

`prometheus.yml` for macOS and Windows:

```yaml
global:
  scrape_interval: 5s
scrape_configs:
- job_name: blackbox # To get metrics about the exporter itself
  metrics_path: /metrics
  static_configs:
    - targets:
      - host.docker.internal:9115
```

Now run a Prometheus container and tell it to mount our config file from above. Because of the way networking on the host is addressable from the container, you need to use a slightly different command on Linux than on macOS and Windows.

Run Prometheus on Linux (don’t use `--network="host"` in production):

```bash
docker \
  run --network="host" \
  --mount type=bind,source="$(pwd)"/prometheus.yml,target=/prometheus.yml,readonly \
  prom/prometheus \
  --config.file="/prometheus.yml"
```

Run Prometheus on macOS and Windows:

```bash
docker \
  run -p 9090:9090 \
  --mount type=bind,source="$(pwd)"/prometheus.yml,target=/prometheus.yml,readonly \
  prom/prometheus \
  --config.file="/prometheus.yml"
```

This command works similarly to [running the blackbox exporter using a config file](#run-exporter). If everything worked, you should be able to go to [localhost:9090/targets](http://localhost:9090/targets) and see under `blackbox` an endpoint with the state `UP` in green. If you get a red `DOWN`, make sure that the blackbox exporter you started [above](#run-exporter) is still running.

If you see nothing or a yellow `UNKNOWN` you are really fast and need to give it a few more seconds before reloading your browser’s tab.

To tell Prometheus to query `"localhost:9115/probe?target=prometheus.io&module=http_2xx"` you add another scrape job `blackbox-http` where you set the `metrics_path` to `/probe` and the parameters under `params:` in the Prometheus config file `prometheus.yml`:

```yaml
global:
  scrape_interval: 5s
scrape_configs:
- job_name: blackbox # To get metrics about the exporter itself
  metrics_path: /metrics
  static_configs:
    - targets:
      - localhost:9115 # For Windows and macOS replace with - host.docker.internal:9115
- job_name: blackbox-http #
 To get metrics about the exporter’s targets
  metrics_path: /probe
  params:
    module: [http_2xx]
    target: [prometheus.io]
  static_configs:
    - targets:
      - localhost:9115 # For Windows and macOS replace with - host.docker.internal:9115
```

After saving the config file, switch to the terminal with your Prometheus docker container, stop it by pressing `ctrl+C`, and start it again to reload the configuration by using the existing [command](#run-prometheus).

The terminal should return the message `"Server is ready to receive web requests."` and after a few seconds you should start to see colourful graphs in [your Prometheus](http://localhost:9090/graph?g0.range_input=5m&g0.stacked=0&g0.expr=probe_http_duration_seconds&g0.tab=0).

This works, but it has a few disadvantages:

1. The actual targets are up in the param config, which is very unusual and hard to understand later.
1. The `instance` label has the value of the blackbox exporter’s address, which is technically true, but not what we are interested in.
1. We can’t see which URL we probed. This is impractical and will also mix up different metrics into one if we probe several URLs.

To fix this, we will use [relabeling](/docs/prometheus/latest/configuration/configuration/#relabel_config). Relabeling is useful here because behind the scenes many things in Prometheus are configured with internal labels. The details are complicated and out of scope for this guide. Hence we will limit ourselves to the necessary. But if you want to know more check out this [talk](https://www.youtube.com/watch?v=b5-SvvZ7AwI).
For now it suffices if you understand this:

* All labels starting with `__` are dropped after the scrape. Most internal labels start with `__`.
* You can set internal labels that are called `__param_<name>`. Those set the URL parameter with the key `<name>` for the scrape request.
* There is an internal label `__address__` which is set by the `targets` under `static_configs` and whose value is the hostname for the scrape request. By default it is later used to set the value for the label `instance`, which is attached to each metric and tells you where the metrics came from.

Here is the config you will use to do that. Don’t worry if this is a bit much at once, we will go through it step by step:

```yaml
global:
  scrape_interval: 5s
scrape_configs:
- job_name: blackbox # To get metrics about the exporter itself
  metrics_path: /metrics
  static_configs:
    - targets:
      - localhost:9115 # For Windows and macOS replace with - host.docker.internal:9115
- job_name: blackbox-http # To get metrics about the exporter’s targets
  metrics_path: /probe
  params:
    module: [http_2xx]
  static_configs:
    - targets:
      - http://prometheus.io    # Target to probe with http
      - https://prometheus.io   # Target to probe with https
      - http://example.com:8080 # Target to probe with http on port 8080
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: localhost:9115 # The blackbox exporter’s real hostname:port. For Windows and macOS replace with - host.docker.internal:9115
```

So what is new compared to the [last config](#prometheus-config)? `params` does not include `target` anymore. Instead we add the actual targets under `static_configs:` `targets`. We also use several targets because we can do that now:

```yaml
  params:
    module: [http_2xx]
  static_configs:
    - targets:
      - http://prometheus.io    # Target to probe with http
      - https://prometheus.io   # Target to probe with https
      - http://example.com:8080 # Target to probe with http on port 8080
```

`relabel_configs` contains the new relabeling rules:

```yaml
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: localhost:9115 #
The blackbox exporter’s real hostname:port. For Windows and macOS replace with - host.docker.internal:9115
```

Before applying the relabeling rules, the URI of a request Prometheus would make would look like this: `"http://prometheus.io/probe?module=http_2xx"`. After relabeling it will look like this: `"http://localhost:9115/probe?target=http://prometheus.io&module=http_2xx"`.

Now let us explore how each rule does that:

First we take the values from the label `__address__` (which contain the values from `targets`) and write them to a new label `__param_target` which will add a parameter `target` to the Prometheus scrape requests:

```yaml
relabel_configs:
  - source_labels: [__address__]
    target_label: __param_target
```

After this our imagined Prometheus request URI has now a target parameter: `"http://prometheus.io/probe?target=http://prometheus.io&module=http_2xx"`.

Then we take the values from the label `__param_target` and create a label `instance` with the values:

```yaml
relabel_configs:
  - source_labels: [__param_target]
    target_label: instance
```

Our request will not change, but the metrics that come back from our request will now bear a label `instance="http://prometheus.io"`.

After that we write the value `localhost:9115` (the URI of our exporter) to the label `__address__`. This will be used as the hostname and port for the Prometheus scrape requests, so that Prometheus queries the exporter and not the target URI directly.
```yaml relabel\_configs: - target\_label: \_\_address\_\_ replacement: localhost:9115 # The blackbox exporter’s real hostname:port. For Windows and macOS replace with - host.docker.internal:9115 ``` Our request is now `"localhost:9115/probe?target=http://prometheus.io&module=http\_2xx"`. This way we can have the actual targets there, get them as `instance` label values while letting Prometheus make a request against the blackbox exporter. Often people combine these with a specific service discovery. Check out the [configuration documentation](/docs/prometheus/latest/configuration/configuration) for more information. Using them is no problem, as these write into the `\_\_address\_\_` label just like `targets` defined under `static\_configs`. That is it. Restart the Prometheus docker container and look at your [metrics](http://localhost:9090/graph?g0.range\_input=30m&g0.stacked=0&g0.expr=probe\_http\_duration\_seconds&g0.tab=0). Pay attention that you selected the period of time when the metrics were actually collected. # Summary In this guide, you learned how the multi-target exporter pattern works, how to run a blackbox exporter with a customised module, and to configure Prometheus using relabeling to scrape metrics with prober labels.
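As a recap of the relabeling walk-through in this guide, the three rules can be simulated outside Prometheus. This is an illustrative Python sketch (not Prometheus code; the label set and URL rendering are simplified) that applies the same label rewrites to a target and renders the resulting scrape URL:

```python
# Illustrative simulation of the guide's three relabeling rules
# (not Prometheus itself).

def relabel(labels):
    """Apply the relabel_configs rules from this guide, in order."""
    labels = dict(labels)
    # Rule 1: copy __address__ into __param_target.
    labels["__param_target"] = labels["__address__"]
    # Rule 2: copy __param_target into instance.
    labels["instance"] = labels["__param_target"]
    # Rule 3: overwrite __address__ with the exporter's hostname:port.
    labels["__address__"] = "localhost:9115"
    return labels

def scrape_url(labels, module="http_2xx"):
    """Render the URL Prometheus would scrape after relabeling."""
    return (f"http://{labels['__address__']}/probe"
            f"?target={labels['__param_target']}&module={module}")

labels = relabel({"__address__": "http://prometheus.io"})
print(scrape_url(labels))
# http://localhost:9115/probe?target=http://prometheus.io&module=http_2xx
print(labels["instance"])
# http://prometheus.io
```

The final URL matches the relabeled request shown above: the real target survives as the `target` parameter and the `instance` label, while `__address__` points at the exporter.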
https://github.com/prometheus/docs/blob/main//docs/guides/multi-target-exporter.md
main
prometheus
Prometheus supports [Transport Layer Security](https://en.wikipedia.org/wiki/Transport_Layer_Security) (TLS) encryption for connections to Prometheus instances (i.e. to the expression browser or [HTTP API](/docs/prometheus/latest/querying/api/)). If you would like to enforce TLS for those connections, you need to create a specific web configuration file.

NOTE: This guide is about TLS connections *to* Prometheus instances. TLS is also supported for connections *from* Prometheus instances to [scrape targets](/docs/prometheus/latest/configuration/configuration/#tls_config).

## Pre-requisites

Let's say that you already have a Prometheus instance up and running, and you want to adapt it. We will not cover the initial Prometheus setup in this guide.

Let's say that you want to run a Prometheus instance served with TLS, available at the `example.com` domain (which you own). Let's also say that you've generated the following using [OpenSSL](https://www.digitalocean.com/community/tutorials/openssl-essentials-working-with-ssl-certificates-private-keys-and-csrs) or an analogous tool:

* an SSL certificate at `/home/prometheus/certs/example.com/example.com.crt`
* an SSL key at `/home/prometheus/certs/example.com/example.com.key`

You can generate a self-signed certificate and private key using this command:

```bash
mkdir -p /home/prometheus/certs/example.com && cd /home/prometheus/certs/example.com
openssl req \
  -x509 \
  -newkey rsa:4096 \
  -nodes \
  -keyout example.com.key \
  -out example.com.crt
```

Fill out the appropriate information at the prompts, and make sure to enter `example.com` at the `Common Name` prompt.

## Prometheus configuration

Below is an example [`web-config.yml`](https://prometheus.io/docs/prometheus/latest/configuration/https/) configuration file. With this configuration, Prometheus will serve all its endpoints behind TLS.

```yaml
tls_server_config:
  cert_file: /home/prometheus/certs/example.com/example.com.crt
  key_file: /home/prometheus/certs/example.com/example.com.key
```

To make Prometheus use this config, you will need to call it with the flag `--web.config.file`:

```bash
prometheus \
  --config.file=/path/to/prometheus.yml \
  --web.config.file=/path/to/web-config.yml \
  --web.external-url=https://example.com/
```

The `--web.external-url=` flag is optional here.

## Testing

If you'd like to test out TLS locally using the `example.com` domain, you can add an entry to your `/etc/hosts` file that re-routes `example.com` to `localhost`:

```
127.0.0.1 example.com
```

You can then use cURL to interact with your local Prometheus setup:

```bash
curl --cacert /home/prometheus/certs/example.com/example.com.crt \
  https://example.com/api/v1/label/job/values
```

You can connect to the Prometheus server without verifying the certificate by using the `--insecure` or `-k` flag:

```bash
curl -k https://example.com/api/v1/label/job/values
```
https://github.com/prometheus/docs/blob/main//docs/guides/tls-encryption.md
[cAdvisor](https://github.com/google/cadvisor) (short for **c**ontainer **Advisor**) analyzes and exposes resource usage and performance data from running containers. cAdvisor exposes Prometheus metrics out of the box. In this guide, we will:

* create a local multi-container [Docker Compose](https://docs.docker.com/compose/) installation that includes containers running Prometheus, cAdvisor, and a [Redis](https://redis.io/) server, respectively
* examine some container metrics produced by the Redis container, collected by cAdvisor, and scraped by Prometheus

## Prometheus configuration

First, you'll need to [configure Prometheus](/docs/prometheus/latest/configuration/configuration) to scrape metrics from cAdvisor. Create a `prometheus.yml` file and populate it with this configuration:

```yaml
scrape_configs:
- job_name: cadvisor
  scrape_interval: 5s
  static_configs:
  - targets:
    - cadvisor:8080
```

## Docker Compose configuration

Now we'll need to create a Docker Compose [configuration](https://docs.docker.com/compose/compose-file/) that specifies which containers are part of our installation as well as which ports are exposed by each container, which volumes are used, and so on.

In the same folder where you created the [`prometheus.yml`](#prometheus-configuration) file, create a `docker-compose.yml` file and populate it with this Docker Compose configuration:

### Using Bind Mounts

```yaml
version: '3.2'
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
    - 9090:9090
    command:
    - --config.file=/etc/prometheus/prometheus.yml
    volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    depends_on:
    - cadvisor
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    ports:
    - 8080:8080
    volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
    depends_on:
    - redis
  redis:
    image: redis:latest
    container_name: redis
    ports:
    - 6379:6379
```

This configuration instructs Docker Compose to run three services, each of which corresponds to a [Docker](https://docker.com) container:

1. The `prometheus` service uses the local `prometheus.yml` configuration file (imported into the container by the `volumes` parameter).
1. The `cadvisor` service exposes port 8080 (the default port for cAdvisor metrics) and relies on a variety of local volumes (`/`, `/var/run`, etc.).
1. The `redis` service is a standard Redis server. cAdvisor will gather container metrics from this container automatically, i.e. without any further configuration.

To run the installation:

```bash
docker-compose up
```

If Docker Compose successfully starts up all three containers, you should see output like this:

```
prometheus    | level=info ts=2018-07-12T22:02:40.5195272Z caller=main.go:500 msg="Server is ready to receive web requests."
```

You can verify that all three containers are running using the [`ps`](https://docs.docker.com/compose/reference/ps/) command:

```bash
docker-compose ps
```

Your output will look something like this:

```
   Name                 Command                  State               Ports
----------------------------------------------------------------------------
cadvisor     /usr/bin/cadvisor -logtostderr   Up      8080/tcp
prometheus   /bin/prometheus --config.f ...   Up      0.0.0.0:9090->9090/tcp
redis        docker-entrypoint.sh redis ...   Up      0.0.0.0:6379->6379/tcp
```

### Alternative: Using Inline Docker Configs (Remote Deployments)

If you're managing a remote Docker host and prefer to keep all configuration within the `docker-compose.yml` file (avoiding the need to manage separate config files on the host), you can use Docker's configs feature:

```yaml
version: '3.8'
configs:
  prometheus_config:
    content: |
      scrape_configs:
        - job_name: 'cadvisor'
          scrape_interval: 5s
          static_configs:
            - targets: ['cadvisor:8080']
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
    - 9090:9090
    command:
    - --config.file=/etc/prometheus/prometheus.yml
    configs:
    - source: prometheus_config
      target: /etc/prometheus/prometheus.yml
      uid: "65534"  # Required: numeric UID for 'nobody' user
      gid: "65534"  # Required: numeric GID for 'nobody' group
      mode: 0400    # Required: read-only permissions
    depends_on:
    - cadvisor
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    ports:
    - 8080:8080
    volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
    depends_on:
    - redis
  redis:
    image: redis:latest
    container_name: redis
    ports:
    - 6379:6379
```

#### Important Notes

⚠️ **Required Fields**: When using Docker `configs`, you **must** explicitly specify `uid`, `gid`, and `mode` as numeric values:

- `uid: "65534"` - The numeric user ID (65534 = `nobody` user in the Prometheus image)
- `gid: "65534"` - The numeric group ID (65534 = `nobody` group)
- `mode: 0400` - File permissions (read-only for owner)

Omitting these fields or using string values like `"nobody"` will cause the following error:

```
strconv.Atoi: parsing "nobody": invalid syntax
```
https://github.com/prometheus/docs/blob/main//docs/guides/cadvisor.md
## Exploring the cAdvisor web UI

You can access the cAdvisor [web UI](https://github.com/google/cadvisor/blob/master/docs/web.md) at `http://localhost:8080`. You can explore stats and graphs for specific Docker containers in our installation at `http://localhost:8080/docker/`. Metrics for the Redis container, for example, can be accessed at `http://localhost:8080/docker/redis`, Prometheus at `http://localhost:8080/docker/prometheus`, and so on.

## Exploring metrics in the expression browser

cAdvisor's web UI is a useful interface for exploring the kinds of things that cAdvisor monitors, but it doesn't provide an interface for exploring container *metrics*. For that we'll need the Prometheus [expression browser](/docs/visualization/browser), which is available at `http://localhost:9090/graph`. You can enter Prometheus expressions into the expression bar, which looks like this:

![Prometheus expression bar](/assets/docs/prometheus-expression-bar.png)

Let's start by exploring the `container_start_time_seconds` metric, which records the start time of containers (in seconds). You can select for specific containers by name using the `name=""` expression. The container name corresponds to the `container_name` parameter in the Docker Compose configuration. The [`container_start_time_seconds{name="redis"}`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=container_start_time_seconds%7Bname%3D%22redis%22%7D&g0.tab=1) expression, for example, shows the start time for the `redis` container.

NOTE: A full listing of cAdvisor-gathered container metrics exposed to Prometheus can be found in the [cAdvisor documentation](https://github.com/google/cadvisor/blob/master/docs/storage/prometheus.md).

## Other expressions

The table below lists some other example expressions:

Expression | Description | For
:----------|:------------|:---
[`rate(container_cpu_usage_seconds_total{name="redis"}[1m])`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=rate(container_cpu_usage_seconds_total%7Bname%3D%22redis%22%7D%5B1m%5D)&g0.tab=1) | The [cgroup](https://en.wikipedia.org/wiki/Cgroups)'s CPU usage in the last minute | The `redis` container
[`container_memory_usage_bytes{name="redis"}`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=container_memory_usage_bytes%7Bname%3D%22redis%22%7D&g0.tab=1) | The cgroup's total memory usage (in bytes) | The `redis` container
[`rate(container_network_transmit_bytes_total[1m])`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=rate(container_network_transmit_bytes_total%5B1m%5D)&g0.tab=1) | Bytes transmitted over the network by the container per second in the last minute | All containers
[`rate(container_network_receive_bytes_total[1m])`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=rate(container_network_receive_bytes_total%5B1m%5D)&g0.tab=1) | Bytes received over the network by the container per second in the last minute | All containers

## Summary

In this guide, we ran three separate containers in a single installation using Docker Compose: a Prometheus container scraped metrics from a cAdvisor container which, in turn, gathered metrics produced by a Redis container. We then explored a handful of cAdvisor container metrics using the Prometheus expression browser.
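As a supplement to the `rate()` expressions used in this guide: `rate()` computes a per-second average increase of a counter over the given window. The following Python sketch shows the core idea only; real PromQL `rate()` additionally extrapolates to the window boundaries and handles counter resets.

```python
# Simplified per-second rate from counter samples (timestamp, value).
# Illustrative only -- not PromQL's exact extrapolation algorithm.

def simple_rate(samples):
    """Average per-second increase between first and last sample."""
    (t0, v0) = samples[0]
    (t1, v1) = samples[-1]
    return (v1 - v0) / (t1 - t0)

# Two samples of container_cpu_usage_seconds_total, 60 seconds apart:
samples = [(0.0, 120.0), (60.0, 150.0)]
print(simple_rate(samples))  # 0.5, i.e. half a CPU core used on average
```

Because the counter counts CPU-seconds, a rate of 0.5 means the container used half a CPU core on average over the window.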
https://github.com/prometheus/docs/blob/main//docs/guides/cadvisor.md
The Prometheus [**Node Exporter**](https://github.com/prometheus/node_exporter) exposes a wide variety of hardware- and kernel-related metrics.

In this guide, you will:

* Start up a Node Exporter on `localhost`
* Start up a Prometheus instance on `localhost` that's configured to scrape metrics from the running Node Exporter

NOTE: While the Prometheus Node Exporter is for *nix systems, there is the [Windows exporter](https://github.com/prometheus-community/windows_exporter) for Windows that serves an analogous purpose.

## Installing and running the Node Exporter

The Prometheus Node Exporter is a single static binary that you can install [via tarball](#tarball-installation). Once you've downloaded it from the Prometheus [downloads page](/download#node_exporter), extract it and run it:

```bash
# NOTE: Replace the URL with one from the above mentioned "downloads" page.
# Node Exporter is available for multiple OS targets and architectures.
# Downloads are available at:
# https://github.com/prometheus/node_exporter/releases/download/v<VERSION>/node_exporter-<VERSION>.<OS>-<ARCH>.tar.gz
#
# <VERSION>, <OS>, and <ARCH> are placeholders:
# - <VERSION>: Release version (e.g., 1.10.2)
# - <OS>: Operating system (e.g., linux, darwin, freebsd)
# - <ARCH>: Architecture (e.g., amd64, arm64, 386)
#
# For this example, we will use Node Exporter version 1.10.2 for a Linux system with amd64 architecture.
wget https://github.com/prometheus/node_exporter/releases/download/v1.10.2/node_exporter-1.10.2.linux-amd64.tar.gz
tar xvfz node_exporter-1.10.2.linux-amd64.tar.gz
cd node_exporter-1.10.2.linux-amd64
./node_exporter
```

You should see output like this indicating that the Node Exporter is now running and exposing metrics on port 9100:

```
INFO[0000] Starting node_exporter (version=0.16.0, branch=HEAD, revision=d42bd70f4363dced6b77d8fc311ea57b63387e4f)  source="node_exporter.go:82"
INFO[0000] Build context (go=go1.9.6, user=root@a67a9bc13a69, date=20180515-15:53:28)  source="node_exporter.go:83"
INFO[0000] Enabled collectors:                           source="node_exporter.go:90"
INFO[0000]  - boottime                                   source="node_exporter.go:97"
...
INFO[0000] Listening on :9100                            source="node_exporter.go:111"
```

## Node Exporter metrics

Once the Node Exporter is installed and running, you can verify that metrics are being exported by cURLing the `/metrics` endpoint:

```bash
curl http://localhost:9100/metrics
```

You should see output like this:

```
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.8996e-05
go_gc_duration_seconds{quantile="0.25"} 4.5926e-05
go_gc_duration_seconds{quantile="0.5"} 5.846e-05
# etc.
```

Success! The Node Exporter is now exposing metrics that Prometheus can scrape, including a wide variety of system metrics further down in the output (prefixed with `node_`). To view those metrics (along with help and type information):

```bash
curl http://localhost:9100/metrics | grep "node_"
```

## Configuring your Prometheus instances

Your locally running Prometheus instance needs to be properly configured in order to access Node Exporter metrics. The following [`prometheus.yml`](/docs/prometheus/latest/configuration/configuration/) example configuration file tells the Prometheus instance to scrape from the Node Exporter via `localhost:9100`, and how frequently:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
- job_name: node
  static_configs:
  - targets: ['localhost:9100']
```

To install Prometheus, [download the latest release](/download) for your platform and untar it:

```bash
wget https://github.com/prometheus/prometheus/releases/download/v*/prometheus-*.*-amd64.tar.gz
tar xvf prometheus-*.*-amd64.tar.gz
cd prometheus-*.*
```

Once Prometheus is installed you can start it up, using the `--config.file` flag to point to the Prometheus configuration that you created [above](#config):

```bash
./prometheus --config.file=./prometheus.yml
```

## Exploring Node Exporter metrics through the Prometheus expression browser

Now that Prometheus is scraping metrics from a running Node Exporter instance, you can explore those metrics using the Prometheus UI (aka the [expression browser](/docs/visualization/browser/)). Navigate to `localhost:9090/graph` in your browser and use the main expression bar at the top of the page to enter expressions. The expression bar looks like this:

![Prometheus expressions browser](/assets/docs/prometheus-expression-bar.png)

Metrics specific to the Node Exporter are prefixed with `node_` and include metrics like `node_cpu_seconds_total` and `node_exporter_build_info`.

Click on the links below to see some example metrics:

Metric | Meaning
:------|:-------
[`rate(node_cpu_seconds_total{mode="system"}[1m])`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=rate(node_cpu_seconds_total%7Bmode%3D%22system%22%7D%5B1m%5D)&g0.tab=1) | The average amount of CPU time spent in system mode, per second, over the last minute (in seconds)
[`node_filesystem_avail_bytes`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=node_filesystem_avail_bytes&g0.tab=1) | The filesystem space available to non-root users (in bytes)
[`rate(node_network_receive_bytes_total[1m])`](http://localhost:9090/graph?g0.range_input=1h&g0.expr=rate(node_network_receive_bytes_total%5B1m%5D)&g0.tab=1) | The average network traffic received, per second, over the last minute (in bytes)
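The `/metrics` output shown in this guide uses the Prometheus text exposition format: `# HELP` and `# TYPE` comment lines followed by `name{labels} value` sample lines. Here is a minimal, illustrative Python parser for the summary fragment above (not the official client library, which also handles escaping, timestamps, and all metric types):

```python
# Minimal parser for a fragment of the Prometheus text exposition format.
# Illustrative only; it assumes no spaces inside label values.

def parse_metrics(text):
    """Return a dict mapping 'name{labels}' strings to float values."""
    samples = {}
    for line in text.strip().splitlines():
        if line.startswith("#"):
            continue  # skip HELP/TYPE comment lines
        name_and_labels, value = line.rsplit(" ", 1)
        samples[name_and_labels] = float(value)
    return samples

exposition = """\
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.8996e-05
go_gc_duration_seconds{quantile="0.5"} 5.846e-05
"""

samples = parse_metrics(exposition)
print(samples['go_gc_duration_seconds{quantile="0.5"}'])  # 5.846e-05
```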
https://github.com/prometheus/docs/blob/main//docs/guides/node-exporter.md
Prometheus can discover targets in a [Docker Swarm][swarm] cluster, as of v2.20.0. This guide demonstrates how to use that service discovery mechanism.

## Docker Swarm service discovery architecture

The [Docker Swarm service discovery][swarmsd] contains 3 different roles: nodes, services, and tasks.

The first role, **nodes**, represents the hosts that are part of the Swarm. It can be used to automatically monitor the Docker daemons or the Node Exporters that run on the Swarm hosts.

The second role, **tasks**, represents any individual container deployed in the swarm. Each task gets its associated service labels. One service can be backed by one or multiple tasks.

The third one, **services**, will discover the services deployed in the swarm. It will discover the ports exposed by the services. Usually you will want to use the tasks role instead of this one.

Prometheus will only discover tasks and services that expose ports.

NOTE: The rest of this post assumes that you have a Swarm running.

## Setting up Prometheus

For this guide, you need to [setup Prometheus][setup]. We will assume that Prometheus runs on a Docker Swarm manager node and has access to the Docker socket at `/var/run/docker.sock`.

## Monitoring Docker daemons

Let's dive into the service discovery itself.

Docker itself, as a daemon, exposes [metrics][dockermetrics] that can be ingested by a Prometheus server. You can enable them by editing `/etc/docker/daemon.json` and setting the following properties:

```json
{
  "metrics-addr" : "0.0.0.0:9323",
  "experimental" : true
}
```

Instead of `0.0.0.0`, you can set the IP of the Docker Swarm node. A restart of the daemon is required to take the new configuration into account. The [Docker documentation][dockermetrics] contains more info about this.

Then, you can configure Prometheus to scrape the Docker daemon, by providing the following `prometheus.yml` file:

```yaml
scrape_configs:
  # Make Prometheus scrape itself for metrics.
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']
  # Create a job for Docker daemons.
  - job_name: 'docker'
    dockerswarm_sd_configs:
      - host: unix:///var/run/docker.sock
        role: nodes
    relabel_configs:
      # Fetch metrics on port 9323.
      - source_labels: [__meta_dockerswarm_node_address]
        target_label: __address__
        replacement: $1:9323
      # Set hostname as instance label
      - source_labels: [__meta_dockerswarm_node_hostname]
        target_label: instance
```

For the nodes role, you can also use the `port` parameter of `dockerswarm_sd_configs`. However, using `relabel_configs` is recommended as it enables Prometheus to reuse the same API calls across identical Docker Swarm configurations.

## Monitoring Containers

Let's now deploy a service in our Swarm. We will deploy [cadvisor][cad], which exposes container resources metrics:

```shell
docker service create --name cadvisor -l prometheus-job=cadvisor \
    --mode=global --publish target=8080,mode=host \
    --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock,ro \
    --mount type=bind,src=/,dst=/rootfs,ro \
    --mount type=bind,src=/var/run,dst=/var/run \
    --mount type=bind,src=/sys,dst=/sys,ro \
    --mount type=bind,src=/var/lib/docker,dst=/var/lib/docker,ro \
    google/cadvisor -docker_only
```

This is a minimal `prometheus.yml` file to monitor it:

```yaml
scrape_configs:
  # Make Prometheus scrape itself for metrics.
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']
  # Create a job for Docker Swarm containers.
  - job_name: 'dockerswarm'
    dockerswarm_sd_configs:
      - host: unix:///var/run/docker.sock
        role: tasks
    relabel_configs:
      # Only keep containers that should be running.
      - source_labels: [__meta_dockerswarm_task_desired_state]
        regex: running
        action: keep
      # Only keep containers that have a `prometheus-job` label.
      - source_labels: [__meta_dockerswarm_service_label_prometheus_job]
        regex: .+
        action: keep
      # Use the prometheus-job Swarm label as Prometheus job label.
      - source_labels: [__meta_dockerswarm_service_label_prometheus_job]
        target_label: job
```

Let's analyze each part of the [relabel configuration][rela].

```yaml
- source_labels: [__meta_dockerswarm_task_desired_state]
  regex: running
  action: keep
```

Docker Swarm exposes the desired [state of the tasks][state] over the API. In our example, we only **keep** the targets that should be running. It prevents monitoring tasks that should be shut down.

```yaml
- source_labels: [__meta_dockerswarm_service_label_prometheus_job]
  regex: .+
  action: keep
```

When we deployed our cadvisor, we added a label `prometheus-job=cadvisor`. As Prometheus fetches the tasks labels,
https://github.com/prometheus/docs/blob/main//docs/guides/dockerswarm.md
we can instruct it to **only** keep the targets which have a `prometheus-job` label.

```yaml
- source_labels: [__meta_dockerswarm_service_label_prometheus_job]
  target_label: job
```

That last part takes the label `prometheus-job` of the task and turns it into a target label, overwriting the default `dockerswarm` job label that comes from the scrape config.

## Discovered labels

The [Prometheus Documentation][swarmsd] contains the full list of labels, but here are other relabel configs that you might find useful.

### Scraping metrics via a certain network only

```yaml
- source_labels: [__meta_dockerswarm_network_name]
  regex: ingress
  action: keep
```

### Scraping global tasks only

Global tasks run on every daemon.

```yaml
- source_labels: [__meta_dockerswarm_service_mode]
  regex: global
  action: keep
- source_labels: [__meta_dockerswarm_task_port_publish_mode]
  regex: host
  action: keep
```

### Adding a docker_node label to the targets

```yaml
- source_labels: [__meta_dockerswarm_node_hostname]
  target_label: docker_node
```

## Connecting to the Docker Swarm

The above `dockerswarm_sd_configs` entries have a field `host`:

```yaml
host: unix:///var/run/docker.sock
```

That is using the Docker socket. Prometheus offers [additional configuration options][swarmsd] to connect to Swarm using HTTP and HTTPS, if you prefer that over the unix socket.

## Conclusion

There are many discovery labels you can play with to better determine which targets to monitor and how; for the tasks role alone, there are more than 25 labels available. Don't hesitate to look at the "Service Discovery" page of your Prometheus server (under the "Status" menu) to see all the discovered labels.

The service discovery makes no assumptions about your Swarm stack, in such a way that given proper configuration, this should be pluggable to any existing stack.

[state]:https://docs.docker.com/engine/swarm/how-swarm-mode-works/swarm-task-states/
[rela]:https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
[swarm]:https://docs.docker.com/engine/swarm/
[swarmsd]:https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dockerswarm_sd_config
[dockermetrics]:https://docs.docker.com/config/daemon/prometheus/
[cad]:https://github.com/google/cadvisor
[setup]:https://prometheus.io/docs/prometheus/latest/getting_started/
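The `keep` rules discussed in this guide can be thought of as anchored regex filters over a target's label set: a target survives only if every rule's regex fully matches the concatenated values of its source labels. An illustrative Python sketch of that filtering logic (not Prometheus code; the discovered targets are hypothetical):

```python
import re

# Illustrative sketch of relabel `keep` actions (not Prometheus code).
# Prometheus anchors regexes, so we use fullmatch; ";" is the default
# separator for multiple source labels.

RULES = [
    {"source_labels": ["__meta_dockerswarm_task_desired_state"], "regex": "running"},
    {"source_labels": ["__meta_dockerswarm_service_label_prometheus_job"], "regex": ".+"},
]

def keep(target):
    """Return True if the target passes every keep rule."""
    for rule in RULES:
        value = ";".join(target.get(l, "") for l in rule["source_labels"])
        if not re.fullmatch(rule["regex"], value):
            return False
    return True

targets = [
    {"__meta_dockerswarm_task_desired_state": "running",
     "__meta_dockerswarm_service_label_prometheus_job": "cadvisor"},
    {"__meta_dockerswarm_task_desired_state": "shutdown",
     "__meta_dockerswarm_service_label_prometheus_job": "cadvisor"},
    {"__meta_dockerswarm_task_desired_state": "running"},  # no prometheus-job label
]

print([keep(t) for t in targets])  # [True, False, False]
```

Only the first target is kept: the second fails the desired-state rule, and the third lacks the `prometheus-job` service label.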
https://github.com/prometheus/docs/blob/main//docs/guides/dockerswarm.md
Prometheus has an official [Go client library](https://github.com/prometheus/client_golang) that you can use to instrument Go applications. In this guide, we'll create a simple Go application that exposes Prometheus metrics via HTTP.

NOTE: For comprehensive API documentation, see the [GoDoc](https://godoc.org/github.com/prometheus/client_golang) for Prometheus' various Go libraries.

## Installation

You can install the `prometheus`, `promauto`, and `promhttp` libraries necessary for the guide using [`go get`](https://golang.org/doc/articles/go_command.html):

```bash
go get github.com/prometheus/client_golang/prometheus
go get github.com/prometheus/client_golang/prometheus/promauto
go get github.com/prometheus/client_golang/prometheus/promhttp
```

## How Go exposition works

To expose Prometheus metrics in a Go application, you need to provide a `/metrics` HTTP endpoint. You can use the [`prometheus/promhttp`](https://godoc.org/github.com/prometheus/client_golang/prometheus/promhttp) library's HTTP [`Handler`](https://godoc.org/github.com/prometheus/client_golang/prometheus/promhttp#Handler) as the handler function.

This minimal application, for example, would expose the default metrics for Go applications via `http://localhost:2112/metrics`:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/collectors"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(
		collectors.NewGoCollector(),
		collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
	)

	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
	http.ListenAndServe(":2112", nil)
}
```

To start the application:

```bash
go run main.go
```

To access the metrics:

```bash
curl http://localhost:2112/metrics
```

## Adding your own metrics

The application [above](#how-go-exposition-works) exposes only the default Go metrics. You can also register your own custom application-specific metrics.

This example application exposes a `myapp_processed_ops_total` [counter](/docs/concepts/metric_types/#counter) that counts the number of operations that have been processed thus far. Every 2 seconds, the counter is incremented by one.

```go
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

type metrics struct {
	opsProcessed prometheus.Counter
}

func newMetrics(reg prometheus.Registerer) *metrics {
	m := &metrics{
		opsProcessed: promauto.With(reg).NewCounter(prometheus.CounterOpts{
			Name: "myapp_processed_ops_total",
			Help: "The total number of processed events",
		}),
	}
	return m
}

func recordMetrics(m *metrics) {
	go func() {
		for {
			m.opsProcessed.Inc()
			time.Sleep(2 * time.Second)
		}
	}()
}

func main() {
	reg := prometheus.NewRegistry()
	m := newMetrics(reg)
	recordMetrics(m)

	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
	http.ListenAndServe(":2112", nil)
}
```

To run the application:

```bash
go run main.go
```

To access the metrics:

```bash
curl http://localhost:2112/metrics
```

In the metrics output, you'll see the help text, type information, and current value of the `myapp_processed_ops_total` counter:

```
# HELP myapp_processed_ops_total The total number of processed events
# TYPE myapp_processed_ops_total counter
myapp_processed_ops_total 5
```

You can [configure](/docs/prometheus/latest/configuration/configuration/#scrape_config) a locally running Prometheus instance to scrape metrics from the application. Here's an example `prometheus.yml` configuration:

```yaml
scrape_configs:
- job_name: myapp
  scrape_interval: 10s
  static_configs:
  - targets:
    - localhost:2112
```

## Other Go client features

In this guide we covered just a small handful of features available in the Prometheus Go client libraries. You can also expose other metrics types, such as [gauges](https://godoc.org/github.com/prometheus/client_golang/prometheus#Gauge) and [histograms](https://godoc.org/github.com/prometheus/client_golang/prometheus#Histogram), [non-global registries](https://godoc.org/github.com/prometheus/client_golang/prometheus#Registry), functions for [pushing metrics](https://godoc.org/github.com/prometheus/client_golang/prometheus/push) to Prometheus [PushGateways](/docs/instrumenting/pushing/), bridging Prometheus and [Graphite](https://godoc.org/github.com/prometheus/client_golang/prometheus/graphite), and more.

## Summary

In this guide, you created two sample Go applications that expose metrics to Prometheus---one that exposes only the default Go metrics and one that also exposes a custom Prometheus counter---and configured a Prometheus instance to scrape metrics from those applications.
https://github.com/prometheus/docs/blob/main//docs/guides/go-application.md
main
prometheus
[ -0.05013475939631462, 0.026726070791482925, -0.05739057436585426, -0.03421372175216675, -0.10208519548177719, -0.06873827427625656, -0.007560097612440586, 0.05999113991856575, -0.017483482137322426, -0.05705958977341652, -0.01040442381054163, -0.022610343992710114, -0.01839078776538372, 0....
0.186751
Prometheus offers a variety of [service discovery options](https://github.com/prometheus/prometheus/tree/main/discovery) for discovering scrape targets, including [Kubernetes](/docs/prometheus/latest/configuration/configuration/#kubernetes\_sd\_config), [Consul](/docs/prometheus/latest/configuration/configuration/#consul\_sd\_config), and many others. If you need to use a service discovery system that is not currently supported, your use case may be best served by Prometheus' [file-based service discovery](/docs/prometheus/latest/configuration/configuration/#file\_sd\_config) mechanism, which enables you to list scrape targets in a JSON file (along with metadata about those targets). In this guide, we will: \* Install and run a Prometheus [Node Exporter](./node-exporter.md) locally \* Create a `targets.json` file specifying the host and port information for the Node Exporter \* Install and run a Prometheus instance that is configured to discover the Node Exporter using the `targets.json` file ## Installing and running the Node Exporter See [this section](./node-exporter.md#installing-and-running-the-node-exporter) of the [Monitoring Linux host metrics with the Node Exporter](./node-exporter.md) guide. The Node Exporter runs on port 9100. To ensure that the Node Exporter is exposing metrics: ```bash curl http://localhost:9100/metrics ``` The metrics output should look something like this: ``` # HELP go\_gc\_duration\_seconds A summary of the GC invocation durations. # TYPE go\_gc\_duration\_seconds summary go\_gc\_duration\_seconds{quantile="0"} 0 go\_gc\_duration\_seconds{quantile="0.25"} 0 go\_gc\_duration\_seconds{quantile="0.5"} 0 ... ``` ## Installing, configuring, and running Prometheus Like the Node Exporter, Prometheus is a single static binary that you can install via tarball. 
[Download the latest release](/download#prometheus) for your platform and untar it: ```bash wget https://github.com/prometheus/prometheus/releases/download/v\*/prometheus-\*.\*-amd64.tar.gz tar xvf prometheus-\*.\*-amd64.tar.gz cd prometheus-\*.\* ``` The untarred directory contains a `prometheus.yml` configuration file. Replace the current contents of that file with this: ```yaml scrape\_configs: - job\_name: 'node' file\_sd\_configs: - files: - 'targets.json' ``` This configuration specifies that there is a job called `node` (for the Node Exporter) that retrieves host and port information for Node Exporter instances from a `targets.json` file. Now create that `targets.json` file and add this content to it: ```json [ { "labels": { "job": "node" }, "targets": [ "localhost:9100" ] } ] ``` NOTE: In this guide we'll work with JSON service discovery configurations manually for the sake of brevity. In general, however, we recommend that you use some kind of JSON-generating process or tool instead. This configuration specifies that there is a `node` job with one target: `localhost:9100`. Now you can start up Prometheus: ```bash ./prometheus ``` If Prometheus has started up successfully, you should see a line like this in the logs: ``` level=info ts=2018-08-13T20:39:24.905651509Z caller=main.go:500 msg="Server is ready to receive web requests." ``` ## Exploring the discovered services' metrics With Prometheus up and running, you can explore metrics exposed by the `node` service using the Prometheus [expression browser](/docs/visualization/browser). If you explore the [`up{job="node"}`](http://localhost:9090/graph?g0.range\_input=1h&g0.expr=up%7Bjob%3D%22node%22%7D&g0.tab=1) metric, for example, you can see that the Node Exporter is being appropriately discovered. 
## Changing the targets list dynamically When using Prometheus' file-based service discovery mechanism, the Prometheus instance will listen for changes to the file and automatically update the scrape target list, without requiring an instance restart. To demonstrate this, start up a second Node Exporter instance on port 9200. First navigate to the directory containing the Node Exporter binary and run this command in a new terminal window: ```bash ./node\_exporter --web.listen-address=":9200" ``` Now modify the config in `targets.json` by adding an entry for the new Node Exporter: ```json [ { "targets": [ "localhost:9100" ], "labels": { "job": "node" } }, { "targets": [ "localhost:9200" ], "labels": { "job": "node" } } ] ``` When you save the changes, Prometheus will automatically be notified of the new list of targets. The [`up{job="node"}`](http://localhost:9090/graph?g0.range\_input=1h&g0.expr=up%7Bjob%3D%22node%22%7D&g0.tab=1) metric should display two instances with `instance` labels `localhost:9100` and `localhost:9200`. ## Summary In this guide, you installed and ran a Prometheus Node Exporter and configured Prometheus to discover and scrape metrics from the Node Exporter using file-based
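The note above recommends generating `targets.json` with a tool rather than editing it by hand. As a minimal, stdlib-only sketch of such a generator (the `staticConfig` type and `renderTargets` helper are illustrative names, not part of Prometheus), the two-target document above could be produced like this:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// staticConfig mirrors one entry of Prometheus' file-SD JSON format:
// a list of targets plus a set of labels applied to all of them.
type staticConfig struct {
	Targets []string          `json:"targets"`
	Labels  map[string]string `json:"labels"`
}

// renderTargets marshals the entries into the JSON that file_sd_configs expects.
func renderTargets(entries []staticConfig) (string, error) {
	b, err := json.MarshalIndent(entries, "", "  ")
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	doc, err := renderTargets([]staticConfig{
		{Targets: []string{"localhost:9100"}, Labels: map[string]string{"job": "node"}},
		{Targets: []string{"localhost:9200"}, Labels: map[string]string{"job": "node"}},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(doc)
}
```

Writing the rendered document to `targets.json` (e.g. via `os.WriteFile`) is enough for Prometheus to pick up the change, since it watches the file for modifications.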
https://github.com/prometheus/docs/blob/main//docs/guides/file-sd.md
main
prometheus
[ -0.0577358677983284, -0.004531577229499817, 0.005986978765577078, -0.0254280436784029, 0.03256271407008171, -0.07658813148736954, -0.011801538988947868, -0.023264972493052483, 0.029975702986121178, 0.031445108354091644, -0.005086970515549183, -0.053864460438489914, -0.005342143587768078, 0...
0.172341
will automatically be notified of the new list of targets. The [`up{job="node"}`](http://localhost:9090/graph?g0.range\_input=1h&g0.expr=up%7Bjob%3D%22node%22%7D&g0.tab=1) metric should display two instances with `instance` labels `localhost:9100` and `localhost:9200`. ## Summary In this guide, you installed and ran a Prometheus Node Exporter and configured Prometheus to discover and scrape metrics from the Node Exporter using file-based service discovery.
https://github.com/prometheus/docs/blob/main//docs/guides/file-sd.md
main
prometheus
[ -0.08161810785531998, -0.029799576848745346, -0.0329938605427742, 0.014128127135336399, 0.05069200322031975, -0.0908849686384201, -0.0038316072896122932, -0.04225078597664833, -0.048212215304374695, -0.00731409527361393, -0.0261687058955431, -0.07361668348312378, 0.0233622919768095, -0.007...
0.130491
Prometheus supports [OTLP](https://opentelemetry.io/docs/specs/otlp) (aka "OpenTelemetry Protocol") ingestion through [HTTP](https://opentelemetry.io/docs/specs/otlp/#otlphttp). ## Enable the OTLP receiver By default, the OTLP receiver is disabled, similarly to the Remote Write receiver. This is because Prometheus can work without any authentication, so it would not be safe to accept incoming traffic unless explicitly configured. To enable the receiver, you need to toggle the CLI flag `--web.enable-otlp-receiver`. This causes Prometheus to accept OTLP metrics on the HTTP `/api/v1/otlp/v1/metrics` path. ```shell $ prometheus --web.enable-otlp-receiver ``` ## Send OpenTelemetry Metrics to the Prometheus Server Generally you need to tell the source of the OTLP metrics traffic about the Prometheus endpoint and the fact that the [HTTP](https://opentelemetry.io/docs/specs/otlp/#otlphttp) mode of OTLP should be used (gRPC is usually the default). OpenTelemetry SDKs and instrumentation libraries can usually be configured via [standard environment variables](https://opentelemetry.io/docs/languages/sdk-configuration/). The following are the OpenTelemetry variables needed to send OpenTelemetry metrics to a Prometheus server on localhost: ```shell export OTEL\_EXPORTER\_OTLP\_PROTOCOL=http/protobuf export OTEL\_EXPORTER\_OTLP\_METRICS\_ENDPOINT=http://localhost:9090/api/v1/otlp/v1/metrics ``` Turn off traces and logs: ```shell export OTEL\_TRACES\_EXPORTER=none export OTEL\_LOGS\_EXPORTER=none ``` The default push interval for OpenTelemetry metrics is 60 seconds. The following will set a 15-second push interval: ```shell export OTEL\_METRIC\_EXPORT\_INTERVAL=15000 ``` If your instrumentation library does not provide `service.name` and `service.instance.id` out-of-the-box, it is highly recommended to set them.
```shell export OTEL\_SERVICE\_NAME="my-example-service" export OTEL\_RESOURCE\_ATTRIBUTES="service.instance.id=$(uuidgen)" ``` The above assumes that the `uuidgen` command is available on your system. Make sure that `service.instance.id` is unique for each instance, and that a new `service.instance.id` is generated whenever a resource attribute changes. The [recommended](https://github.com/open-telemetry/semantic-conventions/tree/main/docs/resource) way is to generate a new UUID on each startup of an instance. ## Configuring Prometheus This section explains various recommended configuration aspects of the Prometheus server to enable and tune your OpenTelemetry flow. See the example Prometheus configuration [file](https://github.com/prometheus/prometheus/blob/main/documentation/examples/prometheus-otlp.yml) used in the sections below. ### Enable out-of-order ingestion There are multiple reasons why you might want to enable out-of-order ingestion. For example, the OpenTelemetry collector encourages batching, and you could have multiple replicas of the collector sending data to Prometheus. Because there is no mechanism ordering those samples, they can arrive out of order. To enable out-of-order ingestion you need to extend the Prometheus configuration file with the following: ```yaml storage: tsdb: out\_of\_order\_time\_window: 30m ``` A 30-minute out-of-order window has been enough for most cases, but don't hesitate to adjust this value to your needs. ### Promoting resource attributes Based on experience and conversations with our community, we've found that, out of all the commonly seen resource attributes, certain ones are worth attaching to all your OTLP metrics. By default, Prometheus won't promote any attributes. If you'd like to promote any of them, you can do so in this section of the Prometheus configuration file. The following snippet shares the best practice set of attributes to promote: ```yaml otlp: # Recommended attributes to be promoted to labels. 
promote\_resource\_attributes: - service.instance.id - service.name - service.namespace - service.version - cloud.availability\_zone - cloud.region - container.name - deployment.environment - deployment.environment.name - k8s.cluster.name - k8s.container.name - k8s.cronjob.name - k8s.daemonset.name - k8s.deployment.name - k8s.job.name - k8s.namespace.name - k8s.pod.name - k8s.replicaset.name - k8s.statefulset.name ``` ## Including resource attributes at query time All OTel resource attributes, by default excepting `service.instance.id`, `service.namespace`, and `service.name`, are translated to labels on the special `target\_info` metric. This means that, for OTel resource attributes you are not promoting, you can still include the corresponding labels in your queries, through joining with `target\_info`. To accomplish this, we recommend enabling the experimental PromQL function `info` for easy joining via the following flag: ```shell --enable-feature=promql-experimental-functions ``` An example of such a query can look like the following: ```promql info(rate(http\_server\_request\_duration\_seconds\_count[2m]), {k8s\_cluster\_name=~".+"}) ``` Alternately, the same thing can be accomplished
https://github.com/prometheus/docs/blob/main//docs/guides/opentelemetry.md
main
prometheus
[ -0.03467080742120743, 0.04403049498796463, -0.01459406316280365, 0.002961972961202264, -0.015833673998713493, -0.15625205636024475, 0.001361602684482932, -0.03451826050877571, 0.02071968838572502, 0.012716848403215408, -0.0012884774478152394, -0.03751690313220024, 0.013312879018485546, 0.0...
0.154162
in your queries, through joining with `target\_info`. To accomplish this, we recommend enabling the experimental PromQL function `info` for easy joining via the following flag: ```shell --enable-feature=promql-experimental-functions ``` An example of such a query can look like the following: ```promql info(rate(http\_server\_request\_duration\_seconds\_count[2m]), {k8s\_cluster\_name=~".+"}) ``` Alternatively, the same thing can be accomplished through a raw join query: ```promql rate(http\_server\_request\_duration\_seconds\_count[2m]) \* on (job, instance) group\_left (k8s\_cluster\_name) target\_info ``` What happens in the two queries above is that the time series resulting from `rate(http\_server\_request\_duration\_seconds\_count[2m])` are augmented with the `k8s\_cluster\_name` label from the `target\_info` series that share the same `job` and `instance` labels. In other words, the `job` and `instance` labels are shared between `http\_server\_request\_duration\_seconds\_count` and `target\_info`, akin to SQL foreign keys. The `k8s\_cluster\_name` label, on the other hand, corresponds to the OTel resource attribute `k8s.cluster.name` (Prometheus converts dots to underscores, unless configured otherwise). Be aware though that the `info` function is generally more performant than raw join queries, because it only selects `target\_info` series with matching `job` and `instance` labels. Maybe more importantly, the `info` function solves an old and rather esoteric problem with the join query approach. When the values of labels other than the ones being joined on (i.e. the so-called identifying labels) change (i.e. churn), unless the old `target\_info` version gets marked as stale, there will be overlap between the old and new version of `target\_info` for the duration of the PromQL lookback delta (5 minutes by default). For this duration, join queries against `target\_info` will fail due to there being two matching distinct `target\_info` time series. 
Fortunately, the `info` function doesn't suffer from this issue, because it always picks the time series with the latest sample! So, what is the relation between the `target\_info` metric and OTel resource attributes? When Prometheus processes an OTLP write request, provided that the contained resources include the attributes `service.instance.id` and/or `service.name`, Prometheus generates the info metric `target\_info` for every (OTel) resource. It adds to each such `target\_info` series the label `instance` with the value of the `service.instance.id` resource attribute, and the label `job` with the value of the `service.name` resource attribute. If the resource attribute `service.namespace` exists, it's prefixed to the `job` label value (i.e., `<service.namespace>/<service.name>`). By default `service.name`, `service.namespace`, and `service.instance.id` themselves are not added to `target\_info`, because they are converted into `job` and `instance`. However, the following configuration parameter can be enabled to add them to `target\_info` directly (going through normalization to replace dots with underscores, if `otlp.translation\_strategy` is `UnderscoreEscapingWithSuffixes`) on top of the conversion into `job` and `instance`. ```yaml otlp: keep\_identifying\_resource\_attributes: true ``` The rest of the resource attributes are also added as labels to the `target\_info` series, with their names converted to Prometheus format (e.g. dots converted to underscores) if `otlp.translation\_strategy` is `UnderscoreEscapingWithSuffixes`. If a resource lacks both `service.instance.id` and `service.name` attributes, no corresponding `target\_info` series is generated. For each of a resource's OTel metrics, Prometheus converts it to a corresponding Prometheus time series, and (if `target\_info` is generated) adds the right `instance` and `job` labels to it. 
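The `job`/`instance` derivation described above can be sketched in plain Go (an illustration of the stated rules, not Prometheus' actual code; the `jobInstance` helper is hypothetical):

```go
package main

import "fmt"

// jobInstance derives the job and instance label values from OTel resource
// attributes, following the rules described above: instance comes from
// service.instance.id, job from service.name, optionally prefixed with
// service.namespace and a slash. ok is false when neither identifying
// attribute is present, in which case no target_info series is generated.
func jobInstance(attrs map[string]string) (job, instance string, ok bool) {
	name, hasName := attrs["service.name"]
	id, hasID := attrs["service.instance.id"]
	if !hasName && !hasID {
		return "", "", false
	}
	job = name
	if ns, hasNS := attrs["service.namespace"]; hasNS {
		job = ns + "/" + name
	}
	return job, id, true
}

func main() {
	job, instance, _ := jobInstance(map[string]string{
		"service.namespace":   "shop",
		"service.name":        "checkout",
		"service.instance.id": "42",
	})
	fmt.Println(job, instance) // shop/checkout 42
}
```

A resource carrying only unrelated attributes yields no `job`/`instance` pair, matching the "no `target\_info` series is generated" rule above.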
## UTF-8 Since version 3.x, Prometheus supports UTF-8 for metric names and labels, so the [Prometheus normalization translator package from OpenTelemetry](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/pkg/translator/prometheus) can be omitted. Note that when Prometheus announces through content negotiation that it allows UTF-8 characters, it does not require that metric names contain previously-unsupported characters. The OTLP metrics may be converted in several different ways, depending on the configuration of the endpoint. So while UTF-8 is enabled by default in Prometheus storage and UI, you need to set the `translation\_strategy` for the OTLP metrics receiver, which by default is set to the old normalization strategy `UnderscoreEscapingWithSuffixes`. There are four possible translation strategies, two of
https://github.com/prometheus/docs/blob/main//docs/guides/opentelemetry.md
main
prometheus
[ -0.015856636688113213, 0.03829459846019745, -0.07153042405843735, 0.09912464022636414, -0.05519511178135872, -0.05156926065683365, 0.014782324433326721, -0.024090591818094254, 0.05187214910984039, -0.004918649327009916, 0.037020243704319, -0.13348087668418884, 0.026877058669924736, -0.0311...
0.143631
in several different ways, depending on the configuration of the endpoint. So while UTF-8 is enabled by default in Prometheus storage and UI, you need to set the `translation\_strategy` for the OTLP metrics receiver, which by default is set to the old normalization strategy `UnderscoreEscapingWithSuffixes`. There are four possible translation strategies, two of which require UTF-8 support to be enabled in Prometheus: \* `UnderscoreEscapingWithSuffixes`, the default. This fully escapes metric names for classic [Prometheus metric name compatibility](https://prometheus.io/docs/practices/naming/), and includes appending type and unit suffixes. \* `UnderscoreEscapingWithoutSuffixes`. This fully escapes metric names similar to UnderscoreEscapingWithSuffixes, but does not append type and unit suffixes. This mode is undesirable from a number of standpoints; users should be aware that the lack of suffixes could cause metric name collisions, and should only enable this mode in concert with careful testing. It is used by some organizations who prefer this balance of OTel symmetry and limited character support. \* `NoUTF8EscapingWithSuffixes` will disable changing special characters to `\_`, which allows native use of the OpenTelemetry metric format, especially with [the semantic conventions](https://opentelemetry.io/docs/specs/semconv/general/metrics/). Note that special suffixes like units and `\_total` for counters will be attached to prevent possible collisions with multiple metrics of the same name having different types or units. This mode requires UTF-8 to be enabled. \* `NoTranslation`. This strategy bypasses all metric and label name translation, passing them through unaltered. This mode requires UTF-8 to be enabled. Note that without suffixes, it is possible to have collisions when multiple metrics of the same name have different types or units. ``` otlp: # Ingest OTLP data keeping UTF-8 characters in metric/label names. 
translation\_strategy: NoTranslation ``` ## Delta Temporality The [OpenTelemetry specification says](https://opentelemetry.io/docs/specs/otel/metrics/data-model/#temporality) that both Delta temporality and Cumulative temporality are supported. While delta temporality is common in systems like statsd and graphite, cumulative temporality is the default in Prometheus. Today, Prometheus embeds the [delta to cumulative processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/deltatocumulativeprocessor) from [OpenTelemetry-Collector-contrib](https://github.com/open-telemetry/opentelemetry-collector-contrib), which is capable of ingesting deltas and transforming them into the equivalent cumulative representation before storing in Prometheus' TSDB. This feature is \*\*\*experimental\*\*\*, so start Prometheus with the feature-flag `otlp-deltatocumulative` enabled to use it. The team is still working on a more efficient way of handling OTLP deltas.
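Conceptually, the delta-to-cumulative transformation described above is a per-series running sum. The following stdlib-only Go sketch shows just that core idea (the real processor additionally tracks series identity, timestamps, and counter resets):

```go
package main

import "fmt"

// deltaToCumulative folds a sequence of delta samples for one series into
// the cumulative representation Prometheus stores: each output sample is
// the sum of all deltas seen so far.
func deltaToCumulative(deltas []float64) []float64 {
	out := make([]float64, len(deltas))
	var sum float64
	for i, d := range deltas {
		sum += d
		out[i] = sum
	}
	return out
}

func main() {
	fmt.Println(deltaToCumulative([]float64{5, 3, 2})) // [5 8 10]
}
```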
https://github.com/prometheus/docs/blob/main//docs/guides/opentelemetry.md
main
prometheus
[ -0.04778510704636574, 0.026780569925904274, 0.022760555148124695, -0.020636117085814476, -0.061888761818408966, -0.08893255144357681, 0.005428904201835394, -0.024177026003599167, 0.04150073230266571, -0.05732535198330879, 0.03564628213644028, -0.07803349196910858, 0.007120814640074968, 0.0...
0.149249
## Introduction Versions of Prometheus before 3.0 required that metric and label names adhere to a strict set of character requirements. With Prometheus 3.0, all UTF-8 strings are valid names, but there are some manual changes needed for other parts of the ecosystem to introduce names with any UTF-8 characters. There may also be circumstances where users want to enforce the legacy character set, perhaps for compatibility with an older Prometheus or other scraper that does not yet support UTF-8. This document guides you through the UTF-8 transition details. ## Go Instrumentation Currently, metrics created by the official Prometheus [client\_golang library](https://github.com/prometheus/client\_golang) accept UTF-8 names by default. Previously, documentation recommended that users override the value of `model.NameValidationScheme` to select legacy validation by default. This setting is now deprecated and should always be set to `UTF8Validation`. Legacy validation enforcement, if desired, should be done by individual implementations calling the appropriate validation APIs and is no longer a library feature. ### Instrumenting in other languages Other client libraries may not yet support UTF-8 and may require special handling or configuration. Check the documentation for the library you are using. ## Configuring Name Validation during Scraping By default, Prometheus 3.0 accepts all UTF-8 strings as valid metric and label names. It is possible to override this behavior for scraped targets and reject names that do not conform to the legacy character set. This option can be set in the Prometheus YAML file on a global basis: ```yaml global: metric\_name\_validation\_scheme: legacy ``` or on a per-scrape config basis: ```yaml scrape\_configs: - job\_name: prometheus metric\_name\_validation\_scheme: legacy ``` Scrape config settings override the global setting. 
If a scrape config validation scheme is set but the escaping scheme is not, the escaping scheme will be inferred from the validation scheme. This allows users to set only `metric\_name\_validation\_scheme` in scrape configs without also having to specify a `metric\_name\_escaping\_scheme`. ### Scrape Content Negotiation for UTF-8 escaping At scrape time, the scraping system \*\*must\*\* pass `escaping=allow-utf-8` in the Accept header in order to be served UTF-8 names. If a scrape target does not see this header, it will automatically convert UTF-8 names to legacy-compatible ones using underscore replacement. Prometheus and compatible scraping systems can also request a specific escaping method if desired by setting the `escaping` header to a different value. \* `underscores`: The default: convert legacy-invalid characters to underscores. \* `dots`: similar to UnderscoreEscaping, except that dots are converted to `\_dot\_` and pre-existing underscores are converted to `\_\_`. This allows for round-tripping of simple metric names that also contain dots. \* `values`: This mode prepends the name with `U\_\_` and replaces all invalid characters with the unicode value, surrounded by underscores. Single underscores are replaced with double underscores. This mode allows for full round-tripping of UTF-8 names with a legacy Prometheus. Announcing UTF-8 support in content negotiation indicates that Prometheus is \*capable\* of receiving UTF-8 characters, but does not require that metric names contain previously-unsupported characters. Nor does an Accept header announcing support for UTF-8 require that the metrics producer disable name translation on their end. The choice of exact name translation strategy is up to the metrics producer. The requirement is that when Prometheus requests an escaping scheme other than allow-utf-8, the producer convert the names in the manner requested. ### Remote Write 2.0 Remote Write 2.0 automatically accepts all UTF-8 names in Prometheus 3.0. 
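The `underscores` scheme described above can be illustrated with a small stdlib-only Go sketch (a sketch of the stated replacement rule, not Prometheus' actual implementation): every character outside the legacy set `[a-zA-Z0-9_:]` is replaced with `_`, and a leading digit is treated as invalid too.

```go
package main

import "fmt"

// escapeUnderscores converts a UTF-8 metric name to the legacy character set
// by replacing every invalid character with '_' (the "underscores" scheme).
// Digits are valid everywhere except in the first position.
func escapeUnderscores(name string) string {
	out := []rune(name)
	for i, r := range out {
		valid := (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') ||
			r == '_' || r == ':' || (r >= '0' && r <= '9' && i > 0)
		if !valid {
			out[i] = '_'
		}
	}
	return string(out)
}

func main() {
	fmt.Println(escapeUnderscores("my.metric-name")) // my_metric_name
}
```

Note that this replacement is lossy: `my.metric` and `my_metric` collide after escaping, which is exactly why the `dots` and `values` schemes above exist for round-tripping.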
There is no way to enforce the legacy character set validation with Remote Write 2.0. ## OTLP Metrics OTLP receiver in Prometheus 3.0 still normalizes all names to Prometheus format by default. You can change this in `otlp` section of the Prometheus configuration as follows: otlp:
https://github.com/prometheus/docs/blob/main//docs/guides/utf8.md
main
prometheus
[ -0.073841392993927, 0.03453437611460686, 0.002822472946718335, -0.040401481091976166, -0.07652866095304489, -0.0454043410718441, 0.01260040421038866, -0.025006763637065887, 0.025921287015080452, -0.06484212726354599, 0.035998012870550156, -0.07867191731929779, -0.0038673896342515945, 0.083...
0.109065
names in Prometheus 3.0. There is no way to enforce the legacy character set validation with Remote Write 2.0. ## OTLP Metrics The OTLP receiver in Prometheus 3.0 still normalizes all names to Prometheus format by default. You can change this in the `otlp` section of the Prometheus configuration as follows: ```yaml otlp: # Ingest OTLP data keeping UTF-8 characters in metric/label names. translation\_strategy: NoTranslation ``` Note that when not appending type and unit suffixes, if there are two metrics with the same name but differing type or unit, those metrics will collide in Prometheus. Once Prometheus has native support for type and unit metadata this issue will go away. See the [OpenTelemetry guide](/docs/guides/opentelemetry) for more details. ## Querying Querying for metrics with UTF-8 names will require a slightly different syntax in PromQL. The classic query syntax will still work for legacy-compatible names: `my\_metric{}` But UTF-8 names must be quoted \*\*and\*\* moved into the braces: `{"my.metric"}` Label names must also be quoted if they contain legacy-incompatible characters: `{"metric.name", "my.label.name"="bar"}` The metric name can appear anywhere inside the braces, but style prefers that it be the first term.
https://github.com/prometheus/docs/blob/main//docs/guides/utf8.md
main
prometheus
[ -0.06598126888275146, -0.000251551071414724, 0.01954658515751362, -0.005589703097939491, -0.10618637502193451, -0.08822345733642578, -0.00501765962690115, -0.035273559391498566, 0.02571876160800457, -0.037521958351135254, 0.01642412133514881, -0.0930158719420433, 0.05710720270872116, 0.035...
0.1018
Prometheus fundamentally stores all data as [\_time series\_](http://en.wikipedia.org/wiki/Time\_series): streams of timestamped values belonging to the same metric and the same set of labeled dimensions. Besides stored time series, Prometheus may generate temporary derived time series as the result of queries. ## Metric names and labels Every time series is uniquely identified by its metric name and optional key-value pairs called labels. \*\*\*Metric names:\*\*\* \* Metric names SHOULD specify the general feature of a system that is measured (e.g. `http\_requests\_total` - the total number of HTTP requests received). \* Metric names MAY use any UTF-8 characters. \* Metric names SHOULD match the regex `[a-zA-Z\_:][a-zA-Z0-9\_:]\*` for the best experience and compatibility (see the warning below). Metric names outside of that set will require quoting e.g. when used in PromQL (see the [UTF-8 guide](../guides/utf8.md#querying)). NOTE: Colons (':') are reserved for user-defined recording rules. They SHOULD NOT be used by exporters or direct instrumentation. \*\*\*Metric labels:\*\*\* Labels let you capture different instances of the same metric name. For example: all HTTP requests that used the method `POST` to the `/api/tracks` handler. We refer to this as Prometheus's "dimensional data model". The query language allows filtering and aggregation based on these dimensions. The change of any label's value, including adding or removing labels, will create a new time series. \* Label names MAY use any UTF-8 characters. \* Label names beginning with `\_\_` (two underscores) MUST be reserved for internal Prometheus use. \* Label names SHOULD match the regex `[a-zA-Z\_][a-zA-Z0-9\_]\*` for the best experience and compatibility (see the warning below). Label names outside of that regex will require quoting e.g. when used in PromQL (see the [UTF-8 guide](../guides/utf8.md#querying)). \* Label values MAY contain any UTF-8 characters. 
\* Labels with an empty label value are considered equivalent to labels that do not exist. WARNING: The [UTF-8](../guides/utf8.md) support for metric and label names was added relatively recently in Prometheus v3.0.0. It might take time for the wider ecosystem (downstream PromQL compatible projects and vendors, tooling, third-party instrumentation, collectors, etc.) to adopt new quoting mechanisms, relaxed validation etc. For the best compatibility it's recommended to stick to the recommended ("SHOULD") character set. INFO: See also the [best practices for naming metrics and labels](/docs/practices/naming/). ## Samples Samples form the actual time series data. Each sample consists of: \* a float64 or [native histogram](https://prometheus.io/docs/specs/native\_histograms/) value \* a millisecond-precision timestamp ## Notation Given a metric name and a set of labels, time series are frequently identified using this notation: <metric name>{<label name>="<label value>", ...} For example, a time series with the metric name `api\_http\_requests\_total` and the labels `method="POST"` and `handler="/messages"` could be written like this: api\_http\_requests\_total{method="POST", handler="/messages"} This is the same notation that [OpenTSDB](http://opentsdb.net/) uses. Names with UTF-8 characters outside the recommended set must be quoted, using this notation: {"<metric name>", <label name>="<label value>", ...} Since metric names are internally represented as a label pair with a special label name (`\_\_name\_\_="<metric name>"`) one could also use the following notation: {\_\_name\_\_="<metric name>", <label name>="<label value>", ...}
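The notation above can be rendered mechanically. As a stdlib-only Go sketch (the `seriesNotation` helper is hypothetical, not a Prometheus API; labels are sorted here only to make the output deterministic):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// seriesNotation renders the <metric name>{<label name>="<label value>", ...}
// form described above for a legacy-compatible metric name.
func seriesNotation(name string, labels map[string]string) string {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic label order for display
	pairs := make([]string, 0, len(keys))
	for _, k := range keys {
		pairs = append(pairs, fmt.Sprintf("%s=%q", k, labels[k]))
	}
	return name + "{" + strings.Join(pairs, ", ") + "}"
}

func main() {
	fmt.Println(seriesNotation("api_http_requests_total", map[string]string{
		"method": "POST", "handler": "/messages",
	}))
	// api_http_requests_total{handler="/messages", method="POST"}
}
```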
https://github.com/prometheus/docs/blob/main//docs/concepts/data_model.md
main
prometheus
[ -0.14783228933811188, 0.02892029471695423, -0.04134160652756691, -0.004164521582424641, -0.04043596237897873, -0.10169757157564163, 0.03158271312713623, 0.009280902333557606, 0.06616973876953125, -0.03629840537905693, -0.02653643861413002, -0.07577680796384811, 0.007141585927456617, 0.0157...
0.165796
The Prometheus client libraries offer four core metric types. These are currently only differentiated in the client libraries (to enable APIs tailored to the usage of the specific types) and in the wire protocol. The Prometheus server does not yet make use of the type information and flattens all data into untyped time series. This may change in the future. ## Counter A \_counter\_ is a cumulative metric that represents a single [monotonically increasing counter](https://en.wikipedia.org/wiki/Monotonic\_function) whose value can only increase or be reset to zero on restart. For example, you can use a counter to represent the number of requests served, tasks completed, or errors. Do not use a counter to expose a value that can decrease. For example, do not use a counter for the number of currently running processes; instead use a gauge. Client library usage documentation for counters: \* [Go](http://godoc.org/github.com/prometheus/client\_golang/prometheus#Counter) \* [Java](https://prometheus.github.io/client\_java/getting-started/metric-types/#counter) \* [Python](https://prometheus.github.io/client\_python/instrumenting/counter/) \* [Ruby](https://github.com/prometheus/client\_ruby#counter) \* [.Net](https://github.com/prometheus-net/prometheus-net#counters) \* [Rust](https://docs.rs/prometheus-client/latest/prometheus\_client/metrics/counter/index.html) ## Gauge A \_gauge\_ is a metric that represents a single numerical value that can arbitrarily go up and down. Gauges are typically used for measured values like temperatures or current memory usage, but also "counts" that can go up and down, like the number of concurrent requests. 
Client library usage documentation for gauges:

* [Go](http://godoc.org/github.com/prometheus/client_golang/prometheus#Gauge)
* [Java](https://prometheus.github.io/client_java/getting-started/metric-types/#gauge)
* [Python](https://prometheus.github.io/client_python/instrumenting/gauge/)
* [Ruby](https://github.com/prometheus/client_ruby#gauge)
* [.Net](https://github.com/prometheus-net/prometheus-net#gauges)
* [Rust](https://docs.rs/prometheus-client/latest/prometheus_client/metrics/gauge/index.html)

## Histogram

A _histogram_ samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. It also provides a sum of all observed values.

A histogram with a base metric name of `<basename>` exposes multiple time series during a scrape:

* cumulative counters for the observation buckets, exposed as `<basename>_bucket{le="<upper inclusive bound>"}`
* the **total sum** of all observed values, exposed as `<basename>_sum`
* the **count** of events that have been observed, exposed as `<basename>_count` (identical to `<basename>_bucket{le="+Inf"}` above)

Use the [`histogram_quantile()` function](/docs/prometheus/latest/querying/functions/#histogram_quantile) to calculate quantiles from histograms or even aggregations of histograms. A histogram is also suitable to calculate an [Apdex score](http://en.wikipedia.org/wiki/Apdex). When operating on buckets, remember that the histogram is [cumulative](https://en.wikipedia.org/wiki/Histogram#Cumulative_histogram). See [histograms and summaries](/docs/practices/histograms) for details of histogram usage and differences to [summaries](#summary).

NOTE: Beginning with Prometheus v2.40, there is experimental support for native histograms. A native histogram requires only one time series, which includes a dynamic number of buckets in addition to the sum and count of observations. Native histograms allow much higher resolution at a fraction of the cost. Detailed documentation will follow once native histograms are closer to becoming a stable feature.

NOTE: Beginning with Prometheus v3.0, the values of the `le` label of classic histograms are normalized during ingestion to follow the format of [OpenMetrics Canonical Numbers](https://github.com/prometheus/OpenMetrics/blob/main/specification/OpenMetrics.md#considerations-canonical-numbers).

Client library usage documentation for histograms:

* [Go](http://godoc.org/github.com/prometheus/client_golang/prometheus#Histogram)
* [Java](https://prometheus.github.io/client_java/getting-started/metric-types/#histogram)
* [Python](https://prometheus.github.io/client_python/instrumenting/histogram/)
* [Ruby](https://github.com/prometheus/client_ruby#histogram)
* [.Net](https://github.com/prometheus-net/prometheus-net#histogram)
* [Rust](https://docs.rs/prometheus-client/latest/prometheus_client/metrics/histogram/index.html)

## Summary

Similar to a _histogram_, a _summary_ samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window.

A summary with a base metric name of `<basename>` exposes multiple time series during a scrape:

* streaming **φ-quantiles** (0 ≤ φ ≤ 1) of observed events, exposed as `<basename>{quantile="<φ>"}`
* the **total sum** of all observed values, exposed as `<basename>_sum`
* the **count** of events that have been observed, exposed as `<basename>_count`

See [histograms and summaries](/docs/practices/histograms) for detailed explanations of φ-quantiles, summary usage, and differences to [histograms](#histogram).

NOTE: Beginning with Prometheus v3.0, the values of the `quantile` label are normalized during ingestion to follow the format of [OpenMetrics Canonical Numbers](https://github.com/prometheus/OpenMetrics/blob/main/specification/OpenMetrics.md#considerations-canonical-numbers).
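The cumulative-bucket layout described under Histogram is what quantile estimation works on: find the bucket whose cumulative count first reaches the requested rank, then linearly interpolate within that bucket. A rough pure-Python sketch of this estimation (an approximation of what `histogram_quantile()` does, not the actual PromQL implementation, which has additional edge-case handling):

```python
import math


def histogram_quantile(q, buckets):
    """Estimate the q-quantile from cumulative histogram buckets.

    `buckets` is a list of (upper_bound, cumulative_count) pairs,
    sorted by upper_bound and ending with (math.inf, total_count).
    """
    total = buckets[-1][1]
    rank = q * total                   # target observation rank
    prev_bound, prev_count = 0.0, 0.0  # implicit lower edge of the first bucket
    for upper, cumulative in buckets:
        if cumulative >= rank:
            if math.isinf(upper):
                return prev_bound      # cannot interpolate into the +Inf bucket
            in_bucket = cumulative - prev_count
            fraction = (rank - prev_count) / in_bucket if in_bucket else 0.0
            return prev_bound + (upper - prev_bound) * fraction
        prev_bound, prev_count = upper, cumulative


# e.g. request durations: 50 observations <= 0.1s, 90 <= 0.5s, 99 <= 1s, 100 total
buckets = [(0.1, 50), (0.5, 90), (1.0, 99), (math.inf, 100)]
p90 = histogram_quantile(0.9, buckets)  # rank 90 falls in the (0.1, 0.5] bucket -> 0.5
```

Because the result is interpolated from bucket edges, its accuracy depends entirely on how the buckets were configured, which is one motivation for the native histograms mentioned in the note above.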
Client library usage documentation for summaries:

* [Go](http://godoc.org/github.com/prometheus/client_golang/prometheus#Summary)
* [Java](https://prometheus.github.io/client_java/getting-started/metric-types/#summary)
* [Python](https://prometheus.github.io/client_python/instrumenting/summary/)
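As a concrete illustration of the time series a summary exposes, a metric with the base name `http_request_duration_seconds` would appear on a scrape roughly like this in the Prometheus text format (the values here are made up for the example):

```text
# HELP http_request_duration_seconds Request duration in seconds.
# TYPE http_request_duration_seconds summary
http_request_duration_seconds{quantile="0.5"} 0.052
http_request_duration_seconds{quantile="0.9"} 0.564
http_request_duration_seconds{quantile="0.99"} 2.368
http_request_duration_seconds_sum 8953.332
http_request_duration_seconds_count 27892
```

Note that the quantiles are precomputed on the client side, which is why, unlike histogram buckets, they cannot be meaningfully aggregated across instances.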
https://github.com/prometheus/docs/blob/main//docs/concepts/metric_types.md