{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Fonio", "subcategory": "Observability" }
[ { "data": "To get started with foniod, you will need a Linux-based system. Generally, any recent release of a modern distribution will work fine. To deploy on Kubernetes and on various cloud platforms, you will want to take a peek at our Kubernetes tutorial. If you're running Fedora 31 or Ubuntu 18.04, you will be able to get going using Docker and a configuration file. A configuration file will look like the snippet below. Name it config.toml. ``` [[probe]] pipelines = [\"console\"] [probe.config] type = \"Network\" [[probe]] pipelines = [\"console\"] [probe.config] type = \"DNS\" interface = \"wlp61s0\" [[probe]] pipelines = [\"console\"] [probe.config] type = \"TLS\" interface = \"wlp61s0\" [[probe]] pipelines = [\"console\"] [probe.config] type = \"Files\" monitor_dirs = [\"/usr/bin\"] [[pipeline.console.steps]] type = \"Container\" [[pipeline.console.steps]] type = \"AddSystemDetails\" [[pipeline.console.steps]] type = \"Buffer\" interval_s = 1 enable_histograms = false [pipeline.console.config] backend = \"Console\" ``` For an exhaustive list of grains and configuration options, look at the example configuration in the repository. To start a foniod Docker container on Ubuntu 18.04, use the following command line: ``` docker run -v $(pwd)/config.toml:/config/foniod.toml --privileged --rm quay.io/redsift/foniod:latest-ubuntu-18.04 ``` For running on Fedora 31, you can use the following: ``` docker run -v $(pwd)/config.toml:/config/foniod.toml --privileged --rm quay.io/redsift/foniod:latest-fedora31 ``` To get foniod working on your workstation, start by installing a few packages and the Rust toolchain. ``` curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh ``` On Ubuntu, the list of dependencies can be installed with apt. ``` apt-get -y install debhelper cmake libllvm9 llvm-9-dev libclang-9-dev \\ libelf-dev bison flex libedit-dev clang-format-9 \\ devscripts zlib1g-dev libfl-dev \\ pkg-config libssl-dev \\ curl wget \\ git \\ clang \\ capnproto ``` On Fedora, install dependencies using the following command. ``` yum install -y clang-9.0.0 llvm-9.0.0 llvm-libs-9.0.0 llvm-devel-9.0.0 llvm-static-9.0.0 capnproto kernel kernel-devel elfutils-libelf-devel ca-certificates ``` After installing the dependencies, build foniod with the usual build ritual. ``` cargo build --release ``` And run it as root. ``` sudo ./target/release/foniod ./config.toml ``` If everything worked, you should start seeing output on the console from events happening on your system. To get into more advanced topics, read the configuration pages." } ]
{ "category": "Observability and Analysis", "file_name": "docs.md", "project_name": "Google Stackdriver", "subcategory": "Observability" }
[ { "data": "This document describes how you query, view, and analyze log entries by using the Google Cloud console. There are two interfaces available to you, the Logs Explorer and Log Analytics. You can query, view, and analyze logs with both interfaces; however, they use different query languages and they have different capabilities. For troubleshooting and exploration of log data, we recommend using the Logs Explorer. To generate insights and trends, we recommend that you use Log Analytics. You can query your logs and save your queries by issuing Logging API commands. You can also query your logs by using Google Cloud CLI. The Logs Explorer is designed to help you troubleshoot and analyze the performance of your services and applications. For example, a histogram displays the rate of errors. If you see a spike in errors or something that is interesting, you can locate and view the corresponding log entries. When a log entry is associated with an error group, the log entry is annotated with a menu of options that let you access more information about the error group. The same query language is supported by the Cloud Logging API, the Google Cloud CLI, and the Logs Explorer. To simplify query construction when you are using the Logs Explorer, you can build queries by using menus, by entering text, and, in some cases, by using options included with the display of an individual log entry. The Logs Explorer doesn't support aggregate operations, like counting the number of log entries that contain a specific pattern. To perform aggregate operations, enable analytics on the log bucket and then use Log Analytics. For details about searching and viewing logs with the Logs Explorer, see View logs by using the Logs Explorer. Using Log Analytics, you can run queries that analyze your log data, and then you can view or chart the query results. Charts let you identify patterns and trends in your logs over time. The following screenshot illustrates the charting capabilities in Log Analytics: For example, suppose that you are troubleshooting a problem and you want to know the average latency for HTTP requests issued to a specific URL over time. When a log bucket is upgraded to use Log Analytics, you can use SQL queries to query logs stored in your log" }, { "data": "By grouping and aggregating your logs, you can gain insights into your log data which can help you reduce time spent troubleshooting. Log Analytics also let you use BigQuery to query your data. For example, suppose that you want to use BigQuery to compare URLs in your logs with a public dataset of known malicious URLs. To make your log data visible to BigQuery, upgrade your bucket to use Log Analytics and then create a linked dataset. You can continue to troubleshoot issues and view individual log entries in upgraded log buckets by using the Logs Explorer. Not all regions are supported for Log Analytics. For more information, see Supported regions. To upgrade an existing log bucket to use Log Analytics, the following restrictions apply: On log buckets that are upgraded to use Log Analytics, you can't do any of the following: You can delete the link to a linked BigQuery dataset. Deleting the link doesn't change your ability to query views on the log bucket by using the Log Analytics page. Only log entries written after the upgrade has completed are available for analytics. Cloud Logging doesn't charge to route logs to a supported destination; however, the destination might apply charges. 
With the exception of the _Required log bucket, Cloud Logging charges to stream logs into log buckets and for storage longer than the default retention period of the log bucket. Cloud Logging doesn't charge for copying logs, or for queries issued through the Logs Explorer page or through the Log Analytics page. For more information, see the following documents: Destination costs: There are no BigQuery ingestion or storage costs when you upgrade a bucket to use Log Analytics and then create a linked dataset. When you create a linked dataset for a log bucket, you don't ingest your log data into BigQuery. Instead, you get read access to the log data stored in your log bucket through the linked dataset. BigQuery analysis charges apply when you run SQL queries on BigQuery linked datasets, which includes using the BigQuery Studio page, the BigQuery API, and the BigQuery command-line tool. For more information about Log Analytics, see the following blog posts: Build queries: Sample queries: Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-07 UTC." } ]
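As a rough sketch of the SQL-based analysis described above, the following query counts log entries per hour and severity through a linked BigQuery dataset. The project ID (my-project), dataset name (my_linked_dataset), and the one-day window are illustrative placeholders; the query assumes the log bucket has been upgraded to use Log Analytics and that a linked dataset has been created for it.

```bash
# Query the linked dataset's _AllLogs view with the bq CLI (standard SQL).
bq query --use_legacy_sql=false '
SELECT
  TIMESTAMP_TRUNC(timestamp, HOUR) AS hour,
  severity,
  COUNT(*) AS entry_count
FROM `my-project.my_linked_dataset._AllLogs`
WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
GROUP BY hour, severity
ORDER BY hour, severity'
```

The same kind of aggregation can be run directly from the Log Analytics page without a linked dataset; the linked-dataset route is what enables joins against other BigQuery data, such as the malicious-URL comparison mentioned above, and it is where BigQuery analysis charges apply.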
{ "category": "Observability and Analysis", "file_name": "log-analytics#analytics.md", "project_name": "Google Stackdriver", "subcategory": "Observability" }
[ { "data": "Cloud Logging is a fully managed service that allows you to store, search, analyze, monitor, and alert on logging data and events from Google Cloud and Amazon Web Services. Using BindPlane, you can also collect this data from over 50 common application components, on-premise systems, and hybrid cloud systems. Logging includes storage for logs through log buckets, a user interface called the Logs Explorer, and an API to manage logs programmatically. Logging lets you read and write log entries, query your logs, and control how you route and use your logs. Quickstart: Write and query log entries with the gcloud CLI Quickstart: Write and query log entries using a Python script Using the Logs Explorer Logging query language Building queries Log Router overview About the Ops Agent Log-based metrics REST API RPC API Logging Client Libraries Command-line interface Monitored resources and services Introduction to the Cloud Logging API v2 Quotas and limits Release notes Pricing Monitoring and logging for Cloud Functions This hands-on lab shows you how to view your Cloud Functions with their execution times, execution counts, and memory usage in the Google Cloud console. These metrics are also available in Cloud Monitoring. Creating and alerting on logs-based metrics This hands-on lab shows you how to use both system and user-defined logs-based metrics to create charts and alerting policies. Customizing logs for Google Kubernetes Engine with Fluentd This tutorial describes how to customize Fluentd logging for a GKEcluster. Compliance requirements This tutorial shows how to export logs from Logging to Cloud Storage to meet your organization's compliance requirements. Security and access analytics This tutorial shows how to export logs from Logging to BigQuery to meet the security and analytics requirements of your organization's cloud infrastructure environment. Storing your organization's logs in a log bucket Learn how to store your organization's logs in a single source. Multi-tenant logging on Google Kubernetes Engine. Learn how to configure multi-tenant logging for GKE clusters. Regionalizing your project's logs using log buckets Learn how to store your logs data in a designated region. Log Entries: Write Learn how to write log entries in Go, Java, Python, Node.js, C#, and PHP Log Entries: Advanced Write Learn how to write log entries in Go, Java, Python, Node.js, C#, and PHP Log Entries: List Learn how to list log entries in Go, Java, Python, Node.js, C#, and PHP Logs: Delete Learn how to delete log entries in Go, Java, Python, Node.js, C#, and PHP Setting up Cloud Logging for Go Create a VM instance with Python More samples More logging-specific samples Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-10 UTC." } ]
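As a minimal sketch of the write-and-query quickstart flow listed above, using the gcloud CLI (the log name my-test-log and the message text are arbitrary examples, and an authenticated project is assumed):

```bash
# Write a single test entry to a log named "my-test-log".
gcloud logging write my-test-log "Hello from the gcloud CLI" --severity=INFO

# Read recent entries from that log back as JSON, newest first.
gcloud logging read 'logName:"my-test-log"' --limit=5 --format=json
```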
{ "category": "Observability and Analysis", "file_name": "managed-prometheus.md", "project_name": "Google Stackdriver", "subcategory": "Observability" }
[ { "data": "This document explains how Cloud Logging processes log entries, and describes the key components of Logging routing and storage. Routing refers to the process that Cloud Logging uses to determine what to do with a newly-arrived log entry. You can route log entries to destinations like Logging buckets, which store the log entry, or to Pub/Sub. To export your logs into third-party destinations, route your logs to a Pub/Sub topic, and then authorize the third-party destination to subscribe to the Pub/Sub topic. At a high level, this is how Cloud Logging routes and stores log entries: The following sections explain how Logging routes logs with the Log Router by using sinks. A log entry is sent to the Google Cloud resource specified in its logName field during its entries.write call. Cloud Logging receives log entries with the Cloud Logging API where they pass through the Log Router. The sinks in the Log Router check each log entry against the existing inclusion filter and exclusion filters that determine which destinations, including Cloud Logging buckets, that the log entry should be sent to. You can use combinations of sinks to route logs to multiple destinations. The Log Router stores the logs temporarily. This behavior buffers against temporary disruptions and outages that might occur when a sink routes logs to a destination. The buffering doesn't protect against sink configuration errors. If your sink is configured incorrectly, then Logging discards the logs, an error log is generated, and an email notifying you of a sink configuration error is sent. Note that the Log Router's temporary storage is distinct from the longer term storage provided by Logging buckets. Incoming log entries with timestamps that are more than the logs retention period in the past or that are more than 24 hours in the future are discarded. Sinks control how Cloud Logging routes logs. Using sinks, you can route some or all of your logs to supported destinations. Some of the reasons that you might want to control how your logs are routed include the following: Sinks belong to a given Google Cloud resource: Google Cloud projects, billing accounts, folders, and organizations. When the resource receives a log entry, it routes the log entry according to the sinks contained by that resource and, if enabled, any ancestral sinks belonging under the resource hierarchy. The log entry is sent to the destination associated with each matching sink. Cloud Logging provides two predefined sinks for each Google Cloud project, billing account, folder, and organization: Required and Default. All logs that are generated in a resource are automatically processed through these two sinks and then are stored either in the correspondingly named Required or Default buckets. Sinks act independently of each other. Regardless of how the predefined sinks process your log entries, you can create your own sinks to route some or all of your logs to various supported destinations or to exclude them from being stored by Cloud Logging. The routing behavior for each sink is controlled by configuring the inclusion filter and exclusion filters for that sink. Depending on the sink's configuration, every log entry received by Cloud Logging falls into one or more of these categories: Stored in Cloud Logging and not routed elsewhere. Stored in Cloud Logging and routed to a supported destination. Not stored in Cloud Logging but routed to a supported destination. 
Neither stored in Cloud Logging nor routed" }, { "data": "You usually create sinks at the Google Cloud project level, but if you want to combine and route logs from the resources contained by a Google Cloud organization or folder, you can create aggregated sinks. You can't route log entries that Logging received before your sink was created because routing happens as logs pass through the Logging API, and new routing rules only apply to logs written after those rules have been created. If you need to route log entries retroactively, see Copy logs. For any new sink, if you don't specify filters, all logs match and are routed to the sink's destination. You can configure the sink to select specific logs by setting an inclusion filter. You can also set one or more exclusion filters to exclude logs from the sink's destination. When you configure sinks, you create inclusion filters by using the Logging query language. Sinks can also contain multiple exclusion filters. Every log entry received by Logging is routed based on these filtering rules: The sink's exclusion filters override any of its defined inclusion filters. If a log matches any exclusion filter in the sink, then it doesn't match the sink regardless of any inclusion filters defined. The log entry isn't routed to that sink's destination. If the sink doesn't contain an inclusion filter, then the following happens: If the sink contains an inclusion filter, then the following happens: When you create a sink, you can set multiple exclusion filters. Exclusion filters let you exclude matching log entries from being routed to the sink's destination or from being stored in a log bucket. You create exclusion filters by using the Logging query language. Log entries are excluded after they are received by the Logging API and therefore these log entries consume entries.write API quota. You can't reduce the number of entries.write API calls by excluding log entries. Excluded log entries aren't available in the Logs Explorer. Log entries that aren't routed to at least one log bucket, either explicitly with exclusion filters or because they don't match any sinks with a Logging storage destination, are also excluded from Error Reporting. Therefore, these logs aren't available to help troubleshoot failures. User-defined log-based metrics are computed from log entries in both included and excluded logs. For more information, see Monitor your logs. You can use the Log Router to route certain logs to supported destinations in any Google Cloud project. Logging supports the following sink destinations: For more information, see Route logs to supported destinations. The following section details how logs are stored in Cloud Logging, and how you can view and manage them. Cloud Logging uses log buckets as containers in your Google Cloud projects, billing accounts, folders, and organizations to store and organize your logs data. The logs that you store in Cloud Logging are indexed, optimized, and delivered to let you analyze your logs in real time. Cloud Logging buckets are different storage entities than the similarly named Cloud Storage buckets. For each Google Cloud project, billing account, folder, and organization, Logging automatically creates two log buckets: _Required and _Default. Logging automatically creates sinks named Required and Default that, in the default configuration, route logs to the correspondingly named buckets. You can disable the Default sink, which routes logs to the Default log bucket. 
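For example, a user-defined sink with one inclusion filter and one exclusion filter, routing to a user-defined log bucket, might be created along these lines; the project, bucket, and filter values here are hypothetical, and the destination path shown is the format used for Logging bucket destinations:

```bash
# Route only WARNING-and-above entries from Compute Engine instances to a
# user-defined bucket, while dropping health-check requests via an exclusion filter.
gcloud logging sinks create my-app-sink \
  logging.googleapis.com/projects/my-project/locations/global/buckets/my-app-bucket \
  --log-filter='resource.type="gce_instance" AND severity>=WARNING' \
  --exclusion=name=drop-health-checks,filter='httpRequest.requestUrl:"/healthz"'
```

Depending on the destination, you may also need to grant the sink's writer identity permission to write there; routing to a log bucket in the same project generally doesn't require an extra grant.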
You can change the behavior of the _Default sinks created for any new Google Cloud projects or folders. For more information, see Configure default settings for organizations and folders. You can't change routing rules for the _Required bucket. Additionally, you can create user-defined buckets for any Google Cloud project. You create sinks to route all, or just a subset, of your logs to any log" }, { "data": "This flexibility allows you to choose the Google Cloud project in which your logs are stored and what other logs are stored with them. For more information, see Configure log buckets. Cloud Logging automatically routes the following types of logs to the _Required bucket: Cloud Logging retains the logs in the _Required bucket for 400 days; you can't change this retention period. You can't modify or delete the _Required bucket. You can't disable the Required sink, which routes logs to the Required bucket. Any log entry that isn't stored in the _Required bucket is routed by the Default sink to the Default bucket, unless you disable or otherwise edit the _Default sink. For instructions on modifying sinks, see Manage sinks. For example, Cloud Logging automatically routes the following types of logs to the _Default bucket: Cloud Logging retains the logs in the _Default bucket for 30 days, unless you configure custom retention for the bucket. You can't delete the _Default bucket. You can also create user-defined log buckets in any Google Cloud project. By applying sinks to your user-defined log buckets, you can route any subset of your logs to any log bucket, letting you choose which Google Cloud project your logs are stored in and which other logs are stored with them. For example, for any log generated in Project-A, you can configure a sink to route that log to user-defined buckets in Project-A or Project-B. You can configure custom retention for the bucket. For information about managing your user-defined log buckets, including deleting or updating them, see Configure and manage log buckets. Log buckets are regional resources. The infrastructure that stores, indexes, and searches your logs is located in a specific geographical location. 
Google manages that infrastructure so that your applications are available redundantly across the zones within that region." }, { "data": "When you create a log bucket or set an organization-level regional policy, you can choose to store your logs in any of the following regions: | Region name | Region description | Log Analytics support | |:--|:|:| | africa-south1 | Johannesburg | Yes | | Region name | Region description | Log Analytics support | |:|:|:| | northamerica-northeast1 | Montréal | Yes | | northamerica-northeast2 | Toronto | Yes | | southamerica-east1 | São Paulo | Yes | | southamerica-west1 | Santiago | Yes | | us-central1 | Iowa | Yes | | us-east1 | South Carolina | Yes | | us-east4 | North Virginia | Yes | | us-east5 | Columbus | Yes | | us-south1 | Dallas | Yes | | us-west1 | Oregon | Yes | | us-west2 | Los Angeles | Yes | | us-west3 | Salt Lake City | Yes | | us-west4 | Las Vegas | Yes | | Region name | Region description | Log Analytics support | |:|:|:| | asia-east1 | Taiwan | Yes | | asia-east2 | Hong Kong | Yes | | asia-northeast1 | Tokyo | Yes | | asia-northeast2 | Osaka | Yes | | asia-northeast3 | Seoul | Yes | | asia-south1 | Mumbai | Yes | | asia-south2 | Delhi | Yes | | asia-southeast1 | Singapore | Yes | | asia-southeast2 | Jakarta | Yes | | australia-southeast1 | Sydney | Yes | | australia-southeast2 | Melbourne | Yes | | Region name | Region description | Log Analytics support | |:|:|:| | europe-central2 | Warsaw | Yes | | europe-north1 | Finland | Yes | | europe-southwest1 | Madrid | Yes | | europe-west1 | Belgium | Yes | | europe-west2 | London | Yes | | europe-west3 | Frankfurt | Yes | | europe-west4 | Netherlands | Yes | | europe-west6 | Zurich | Yes | | europe-west8 | Milan | Yes | | europe-west9 | Paris | Yes | | europe-west10 | Berlin | Yes | | europe-west12 | Turin | Yes | | Region name | Region description | Log Analytics support | |:--|:|:| | me-central1 | Doha | Yes | | me-central2 | Dammam | Yes | | me-west1 | Tel Aviv | Yes | | Region name | Region description | Log Analytics support | |:--|:--|:| | eu | Logs stored in data centers within the European Union; no additional redundancy guarantees | Yes | | global | Logs stored in any data center in the world; no additional redundancy guarantees | Yes | In addition to these regions, you can set the location to global, which means that you don't need to specify where your logs are physically stored. You can automatically apply a particular storage region to the _Default and _Required buckets created in an organization or folder. For more information, see Configure default settings for organizations and folders. For more information about data regionality and where you can store your logs data, see Understand data regions. You can create an organization policy to ensure that your organization meets your compliance and regulatory needs. Using an organization policy, you can specify in which regions your organization can create new log buckets. You can also restrict your organization from creating new log buckets in specified regions. Cloud Logging doesn't enforce your newly created organization policy on existing log buckets; it only enforces the policy on new log buckets. For information about creating a location-based organization policy, see Restrict resource locations. In addition, you can configure a default storage location for the _Default and _Required buckets in an organization or in a folder.
If you configure an organization policy that constrains where data can be stored, then you must ensure that the default storage location you specify is consistent with that constraint. For more information, see Configure default settings for organizations and folders. Cloud Logging retains logs according to retention rules applying to the log bucket type where the logs are held. You can configure Cloud Logging to retain logs between 1 day and 3650 days. Custom retention rules apply to all the logs in a bucket, regardless of the log type or whether that log has been copied from another location. For information about setting retention rules for a log bucket, see Configure custom retention. For information about the retention periods for different types of logs, see Quotas and limits. Log views let you grant a user access to only a subset of the logs stored in a log bucket. For information about how to configure log views, and how to grant access to specific log views, see Configure log views on a log bucket. For every log bucket, Cloud Logging automatically creates the _AllLogs view, which shows all logs stored in that bucket. Cloud Logging also creates a view for the Default bucket called Default. The _Default view for the _Default bucket shows all logs except Data Access audit logs. The AllLogs and Default views aren't editable, and you can't delete the _Default log view. Custom log views provide you with an advanced and granular way to control access to your logs data. For example, consider a scenario in which you store all of your organization's logs in a central Google Cloud" }, { "data": "Because log buckets can contain logs from multiple Google Cloud projects, you might want to control which Google Cloud projects different users can view logs from. Using custom log views, you can give one user access to logs only from a single Google Cloud project, while you give another user access to logs from all the Google Cloud projects. The following section provides information about how to use logs in the broader Google Cloud. Log-based metrics are Cloud Monitoring metrics that are derived from the content of log entries. For example, if Cloud Logging receives a log entry for a Google Cloud project that matches the filters of one of the Google Cloud project's metrics, then that log entry is counted in the metric data. Log-based metrics interact with routing differently, depending on whether the log-based metrics are defined by the system or by you. The following sections describe these differences. Sink exclusion filters apply to system-defined log-based metrics, which count only logs that are stored in log buckets. Sink exclusion filters don't apply to user-defined log-based metrics. Even if you exclude logs from being stored in any Logging buckets, you could see those logs counted in these metrics. System-defined log-based metrics apply at the Google Cloud project level. These metrics are calculated by the Log Router and apply to logs only in the Google Cloud project in which they're received. User-defined log-based metrics can apply at either the Google Cloud project level or at the level of a specific log bucket: Bucket-scoped metrics apply to logs in the log bucket in which they're received, regardless of the Google Cloud project in which the log entries originated. With bucket-scoped log-based metrics, you can create log-based metrics that can evaluate logs in the following cases: For more information, see Log-based metrics overview. 
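The log views described above can be created from the command line; a minimal sketch, where the view ID, filter, and description are illustrative and the _Default bucket is assumed to be in the global location:

```bash
# Create a view on the _Default bucket that exposes only Kubernetes container logs.
gcloud logging views create k8s-only \
  --bucket=_Default \
  --location=global \
  --log-filter='resource.type="k8s_container"' \
  --description="Container logs only"
```

Access can then be granted per view, for example with an IAM condition on the roles/logging.viewAccessor role that names this view, which is how the per-project access split described above is typically implemented.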
To learn about the format of routed log entries and how the logs are organized in destinations, see View logs in sink destinations. To address common use cases for routing and storing logs, see the following documents and tutorials: Aggregate your organization's log into a central log bucket Regionalize your logs Configure multi-tenant logging for Google Kubernetes Engine (GKE) clusters For best practices about using routing for data governance, see the following documents: Configure CMEK for logs routing Logs data: A step by step guide for overcoming common compliance challenges Data governance: Principles for securing and managing logs For information about how you use Identity and Access Management (IAM) roles and permissions to control access to Cloud Logging data, see the Access control with IAM. Cloud Logging doesn't charge to route logs to a supported destination; however, the destination might apply charges. With the exception of the _Required log bucket, Cloud Logging charges to stream logs into log buckets and for storage longer than the default retention period of the log bucket. Cloud Logging doesn't charge for copying logs, or for queries issued through the Logs Explorer page or through the Log Analytics page. For more information, see the following documents: Destination costs: To help you route and store Cloud Logging data, see the following documents: To create sinks to route logs to supported destinations, see Configure sinks. For routing and sinks troubleshooting information, see Troubleshoot routing and sinks. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-07 UTC." } ]
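The user-defined buckets and custom retention described above can be created with gcloud; a minimal sketch, where the bucket name, region, and retention period are placeholders and the --enable-analytics flag assumes the chosen region supports Log Analytics:

```bash
# Create a user-defined, regional log bucket with 90-day retention and Log Analytics enabled.
gcloud logging buckets create my-app-bucket \
  --location=us-central1 \
  --retention-days=90 \
  --enable-analytics \
  --description="Application logs, 90-day retention"
```

A sink, such as the one sketched earlier, can then route any subset of logs into this bucket.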
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Grafana Tempo", "subcategory": "Observability" }
[ { "data": "Featured: Getting started with the Grafana LGTM Stack. We'll demo how to get started using the LGTM Stack: Loki for logs, Grafana for visualization, Tempo for traces, and Mimir for metrics." } ]
{ "category": "Observability and Analysis", "file_name": "tutorials_doctype=quickstart.md", "project_name": "Google Stackdriver", "subcategory": "Observability" }
[ { "data": "This document describes how to configure Google Kubernetes Engine (GKE) to send metrics to Cloud Monitoring. Metrics in Cloud Monitoring can populate custom dashboards, generate alerts, create service-level objectives, or be fetched by third-party monitoring services using the Cloud Monitoring API. GKE provides several sources of metrics: Packages of observability metrics: Kube state metrics: a curated set of metrics exported from the kube state service, used to monitor the state of Kubernetes objects like Pods, Deployments, and more. For the set of included metrics, see Use kube state metrics. The kube state package is a managed solution. If you need greater flexibilityfor example, if you need to collect additional metrics, or need to manage scrape intervals or to scrape other resourcesyou can disable the package, if it is enabled, and deploy your own instance of the open source kube state metrics service. For more information, see the Google Cloud Managed Service for Prometheus exporter documentation for Kube state metrics. cAdvisor/Kubelet: a curated set of cAdvisor and Kubelet metrics. For the set of included metrics, see Use cAdvisor/Kubelet metrics. The cAdvisor/Kubelet package is a managed solution. If you need greater flexibilityfor example, if you need to collect additional metrics or to manage scrape intervals or to scrape other resourcesyou can disable the package, if it is enabled, and deploy your own instance of the open source cAdvisor/Kubelet metrics services. For more information, see the Google Cloud Managed Service for Prometheus documentation for the cAdvisor/Kubelet exporter. When a cluster is created, GKE by default collects certain metrics emitted by system components. You have a choice whether or not to send metrics from your GKE cluster to Cloud Monitoring. If you choose to send metrics to Cloud Monitoring, you must send system metrics. All GKE system metrics are ingested into Cloud Monitoring with the prefix kubernetes.io. Cloud Monitoring does not charge for the ingestion of GKE system metrics. For more information, see Cloud Monitoring pricing. To enable system metric collection, pass the SYSTEM value to the --monitoring flag of the gcloud container clusters create or gcloud container clusters update commands. To disable system metric collection, use the NONE value for the --monitoring flag. If system metric collection is disabled, basic information like CPU usage, memory usage, and disk usage are not available for a cluster in the Observability tab or the GKE section of the Google Cloud console. For GKE Autopilot clusters, you cannot disable the collection of system metrics. See Observability for GKE for more details about Cloud Monitoring integration with GKE. To configure the collection of system metrics by using Terraform, see the monitoring_config block in the Terraform registry for googlecontainercluster. For general information about using Google Cloud with Terraform, see Terraform with Google Cloud. System metrics include metrics from essential system components important for Kubernetes. 
For a list of these metrics, see GKE system" }, { "data": "metrics. In the following tables, a checkmark (✓) indicates which metrics are enabled by default when you create and register a new cluster in a project with GKE Enterprise enabled: | Metric name | Autopilot | Standard | |:-|:|--:| | System | nan | nan | | API server | nan | nan | | Scheduler | nan | nan | | Controller Manager | nan | nan | | Persistent volume (Storage) | nan | nan | | Pods | nan | nan | | Deployment | nan | nan | | StatefulSet | nan | nan | | DaemonSet | nan | nan | | HorizontalPodAutoscaler | nan | nan | | cAdvisor | nan | nan | | Kubelet | nan | nan | All registered clusters in a project that has GKE Enterprise enabled can use the packages for control plane metrics, kube state metrics, and cAdvisor/kubelet metrics without any additional charges. Otherwise these metrics incur Cloud Monitoring charges. If system metrics are not available in Cloud Monitoring as expected, see Troubleshoot system metrics. You can configure a GKE cluster to send certain metrics emitted by the Kubernetes API server, Scheduler, and Controller Manager to Cloud Monitoring. For more information, see Collect and view control plane metrics. You can configure a GKE cluster to send a curated set of kube state metrics in Prometheus format to Cloud Monitoring. This package of kube state metrics includes metrics for Pods, Deployments, StatefulSets, DaemonSets, HorizontalPodAutoscaler resources, Persistent Volumes, and Persistent Volume Claims. For more information, see Collect and view Kube state metrics. You can configure a GKE cluster to send a curated set of cAdvisor/Kubelet metrics in Prometheus format to Cloud Monitoring. The curated set of metrics is a subset of the large set of cAdvisor/Kubelet metrics built into every Kubernetes deployment by default. The curated cAdvisor/Kubelet package is designed to provide the most useful metrics, reducing ingestion volume and associated costs. For more information, see Collect and view cAdvisor/Kubelet metrics. You can disable the use of metric packages in the cluster. You might want to disable certain packages to reduce costs or if you are using an alternate mechanism for collecting the metrics, like Google Cloud Managed Service for Prometheus and an exporter. To disable the collection of metrics from the Details tab for the cluster, do the following: In the Google Cloud console, go to the Kubernetes clusters page: Go to Kubernetes clusters If you use the search bar to find this page, then select the result whose subheading is Kubernetes Engine. Click your cluster's name. In the Features row labelled Cloud Monitoring, click the Edit icon. In the Components drop-down menu, clear the metric components that you want to disable. Click OK. Click Save Changes. Open a terminal window with Google Cloud SDK and the Google Cloud CLI installed. One way to do this is to use Cloud Shell. In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize." }, { "data": "Call the gcloud container clusters update command and pass an updated set of values to the --monitoring flag. The set of values supplied to the --monitoring flag overrides any previous setting.
For example, to turn off the collection of all metrics except system metrics, run the following command: ``` gcloud container clusters update CLUSTER_NAME \\ --location=COMPUTE_LOCATION \\ --enable-managed-prometheus \\ --monitoring=SYSTEM ``` This command disables the collection of any previously configured metric packages. To configure the collection of metrics by using Terraform, see the monitoring_config block in the Terraform registry for google_container_cluster. For general information about using Google Cloud with Terraform, see Terraform with Google Cloud. You can use Cloud Monitoring to identify the control plane or kube state metrics that are writing the largest numbers of samples. These metrics are contributing the most to your costs. After you identify the most expensive metrics, you can modify your scrape configs to filter these metrics appropriately. The Cloud Monitoring Metrics Management page provides information that can help you control the amount you spend on chargeable metrics without affecting observability. The Metrics Management page reports the following information: To view the Metrics Management page, do the following: In the Google Cloud console, go to the Metrics management page: Go to Metrics management If you use the search bar to find this page, then select the result whose subheading is Monitoring. For more information about the Metrics Management page, see View and manage metric usage. To identify which control plane or kube state metrics have the largest number of samples being ingested, do the following: In the Google Cloud console, go to the Metrics management page: Go to Metrics management If you use the search bar to find this page, then select the result whose subheading is Monitoring. On the Billable samples ingested scorecard, click View charts. Locate the Namespace Volume Ingestion chart, and then click More chart options. In the Metric field, verify that the following resource and metric are selected: Metric Ingestion Attribution and Samples written by attribution id. In the Filters page, do the following: In the Label field, verify that the value is attribution_dimension. In the Comparison field, verify that the value is = (equals). In the Value field, select cluster. Clear the Group by setting. Optionally, filter for only certain metrics. For example, control plane API server metrics all include \"apiserver\" as part of the metric name, and kube state Pod metrics all include \"kube_pod\" as part of the metric name, so you can filter for metrics containing those strings: Click Add Filter. In the Label field, select metric_type. In the Comparison field, select =~ (equals regex). In the Value field, enter .*apiserver.* or .*kube_pod.*. Optionally, group the number of samples ingested by GKE region or project: Click Group by. Ensure metric_type is selected. To group by GKE region, select location. To group by project, select project_id. Click OK. Optionally, group the number of samples ingested by GKE cluster name: Click Group by. To group by GKE cluster name, ensure both attribution_dimension and attribution_id are selected. Click" }, { "data": "To see the ingestion volume for each of the metrics, in the toggle labeled Chart Table Both, select Both. The table shows the ingested volume for each metric in the Value column. Click the Value column header twice to sort the metrics by descending ingestion volume. These steps show the metrics with the highest rate of samples ingested into Cloud Monitoring.
Because the metrics in the observability packages are charged by the number of samples ingested, pay attention to metrics with the greatest rate of samples being ingested. In addition to the system metrics and metric packages described in this document, Istio metrics are also available for GKE clusters. For pricing information, see Cloud Monitoring pricing. The following table indicates supported values for the --monitoring flag for the create and update commands. | Source | --monitoring value | Metrics Collected | |:-|:|:--| | None | NONE | No metrics sent to Cloud Monitoring; no metric collection agent installed in the cluster. This value isn't supported for Autopilot clusters. | | System | SYSTEM | Metrics from essential system components required for Kubernetes. For a complete list of the metrics, see Kubernetes metrics. | | API server | API_SERVER | Metrics from kube-apiserver. For a complete list of the metrics, see API server metrics. | | Scheduler | SCHEDULER | Metrics from kube-scheduler. For a complete list of the metrics, see Scheduler metrics. | | Controller Manager | CONTROLLER_MANAGER | Metrics from kube-controller-manager. For a complete list of the metrics, see Controller Manager metrics. | | Persistent volume (Storage) | STORAGE | Storage metrics from kube-state-metrics. Includes metrics for Persistent Volume and Persistent Volume Claims. For a complete list of the metrics, see Storage metrics. | | Pod | POD | Pod metrics from kube-state-metrics. For a complete list of the metrics, see Pod metrics. | | Deployment | DEPLOYMENT | Deployment metrics from kube-state-metrics. For a complete list of the metrics, see Deployment metrics. | | StatefulSet | STATEFULSET | StatefulSet metrics from kube-state-metrics. For a complete list of the metrics, see StatefulSet metrics. | | DaemonSet | DAEMONSET | DaemonSet metrics from kube-state-metrics. For a complete list of the metrics, see DaemonSet metrics. | | HorizontalPodAutoscaler | HPA | HPA metrics from kube-state-metrics. See a complete list of HorizontalPodAutoscaler metrics. | | cAdvisor | CADVISOR | cAdvisor metrics from the cAdvisor/Kubelet metrics package. For a complete list of the metrics, see cAdvisor metrics. | | Kubelet | KUBELET | Kubelet metrics from the cAdvisor/Kubelet metrics package. For a complete list of the metrics, see Kubelet metrics. | You can also collect Prometheus-style metrics exposed by any GKE workload by using Google Cloud Managed Service for Prometheus, which lets you monitor and alert on your workloads, using Prometheus, without having to manually manage and operate Prometheus at scale. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-07 UTC." } ]
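Tying the --monitoring table above to the earlier gcloud example, all of the metric packages can be enabled in a single update by listing their values; the cluster name and location below are placeholders:

```bash
# Enable system metrics plus the control plane, kube state, and cAdvisor/Kubelet packages.
gcloud container clusters update my-cluster \
  --location=us-central1 \
  --monitoring=SYSTEM,API_SERVER,SCHEDULER,CONTROLLER_MANAGER,STORAGE,POD,DEPLOYMENT,STATEFULSET,DAEMONSET,HPA,CADVISOR,KUBELET
```

Because the flag overrides any previous setting, the full list has to be supplied each time; omitting a value disables collection of that package.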
{ "category": "Observability and Analysis", "file_name": "introduction-google-cloud-platform-integrations.md", "project_name": "Google Stackdriver", "subcategory": "Observability" }
[ { "data": "Get started using Google Cloud by trying one of our product quickstarts, tutorials, or interactive walkthroughs. Get started with Google Cloud quickstarts Whether you're looking to deploy a web app, set up a database, or run big data workloads, it can be challenging to get started. Luckily, Google Cloud quickstarts offer step-by-step tutorials that cover basic use cases, operating the Google Cloud console, and how to use the Google command-line tools." } ]
{ "category": "Observability and Analysis", "file_name": "_pg=hp&plcmt=lt-box-dashboards.md", "project_name": "Grafana", "subcategory": "Observability" }
[ { "data": "Featured: Getting started with the Grafana LGTM Stack. We'll demo how to get started using the LGTM Stack: Loki for logs, Grafana for visualization, Tempo for traces, and Mimir for metrics." } ]
{ "category": "Observability and Analysis", "file_name": "_pg=oss-tempo&plcmt=resources.md", "project_name": "Grafana Tempo", "subcategory": "Observability" }
[ { "data": "All Products Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE All Open Source Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data All Community resources Dashboard templates Try out and share prebuilt visualizations Prometheus exporters Get your metrics into Prometheus quickly All end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana All Learn Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with" }, { "data": "Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups All Docs Grafana Cloud Grafana Grafana Alloy Grafana Loki Grafana Mimir Grafana Tempo Grafana Pyroscope Grafana OnCall Application Observability Grafana Faro Grafana Beyla Grafana k6 Prometheus Grafana Enterprise Grafana Enterprise Logs Grafana Enterprise Traces Grafana Enterprise Metrics Grafana plugins Community plugins Grafana Alerting Get started Get Started with Grafana Build your first dashboard Getting started with Grafana Cloud What's new / Release notes All Company 
Help build the future of open source observability software Open positions Check out the open source projects we support Downloads Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE Free forever plan (Surprise: its actually useful) No credit card needed, ever. Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data Community resources end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with" }, { "data": "Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups Featured Getting started with the Grafana LGTM Stack Well demo how to get started using the LGTM Stack: Loki for logs, Grafana for visualization, Tempo for traces, and Mimir for metrics. 
[ { "data": "All free accounts begin with a 14-day trial period for sending unlimited data and creating unlimited users so that you can test Grafana Cloud at scale. When the trial ends, you transition to the free-forever plan with usage restrictions. If desired, you can upgrade to a paid account at any time to access increased volume, users, and additional features. If you want to explore Grafana Cloud without instrumenting and connecting your own applications, you can import a set of example data sources and dashboards. This get started experience provides two sets of dashboards covering the following use cases: Note The rest of this get started guide shows you how to: To sign up for a Grafana Cloud account, complete the following steps: The demo data sources and dashboards provide you with visualizations that you can explore. To install the demo data source and dashboards, complete the following steps: Sign in to Grafana Cloud. On the left navigation, click Apps > Demo Data Dashboards. Select a demo and click on Install Dashboards. This imports the required data sources and dashboards into your Grafana Cloud instance. Note To reset the dashboards to their original state at any point, click Reset dashboards. Depending on which variant you chose, a folder with the same name appears in your list of dashboards. You can also build your own dashboards based on the grafanacloud-demoinfra-{logs,prom,traces} data sources. The SRE demo folder contains: You can influence the metrics by checking out the live site at quickpizza-demo.grafana.fun. The weather demo ships with two dashboards giving a detailed overview of the weather. Both dashboards show the same data but are either in imperial or metric units. You can also use the data provided to explore more advanced Grafana Cloud features such as Application Observability and Kubernetes Monitoring. To do this, select the grafanacloud-demoinfra data sources in the respective plugin settings. You can remove the demo data sources and dashboards by clicking the Remove button of the respective plugin. When all dashboards are removed, you can remove the data sources directly from the Demo Data Dashboards app. Refer to any of the following getting started guides, as necessary." } ]
{ "category": "Observability and Analysis", "file_name": "_pg=oss-tempo&plcmt=quick-links.md", "project_name": "Grafana Tempo", "subcategory": "Observability" }
[ { "data": "All Products Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE All Open Source Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data All Community resources Dashboard templates Try out and share prebuilt visualizations Prometheus exporters Get your metrics into Prometheus quickly All end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana All Learn Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with" }, { "data": "Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups All Docs Grafana Cloud Grafana Grafana Alloy Grafana Loki Grafana Mimir Grafana Tempo Grafana Pyroscope Grafana OnCall Application Observability Grafana Faro Grafana Beyla Grafana k6 Prometheus Grafana Enterprise Grafana Enterprise Logs Grafana Enterprise Traces Grafana Enterprise Metrics Grafana plugins Community plugins Grafana Alerting Get started Get Started with Grafana Build your first dashboard Getting started with Grafana Cloud What's new / Release notes All Company 
Help build the future of open source observability software Open positions Check out the open source projects we support Downloads Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE Free forever plan (Surprise: its actually useful) No credit card needed, ever. Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data Community resources end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with plugins. Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups Featured Getting started with the Grafana LGTM Stack Well demo how to get started using the LGTM Stack: Loki for logs, Grafana for visualization, Tempo for traces, and Mimir for metrics. 
Grafana Cloud Grafana Grafana Alloy Grafana Loki Grafana Mimir Grafana Tempo Grafana Pyroscope Grafana OnCall Application Observability Grafana Faro Grafana Beyla Grafana k6 Prometheus Grafana Enterprise Grafana Enterprise Logs Grafana Enterprise Traces Grafana Enterprise Metrics Grafana plugins Community plugins Grafana Alerting Get started Get Started with Grafana Build your first dashboard Getting started with Grafana Cloud What's new / Release notes Grot" } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Grafana", "subcategory": "Observability" }
[ { "data": "All Products Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE All Open Source Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data All Community resources Dashboard templates Try out and share prebuilt visualizations Prometheus exporters Get your metrics into Prometheus quickly All end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana All Learn Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with" }, { "data": "Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups All Docs Grafana Cloud Grafana Grafana Alloy Grafana Loki Grafana Mimir Grafana Tempo Grafana Pyroscope Grafana OnCall Application Observability Grafana Faro Grafana Beyla Grafana k6 Prometheus Grafana Enterprise Grafana Enterprise Logs Grafana Enterprise Traces Grafana Enterprise Metrics Grafana plugins Community plugins Grafana Alerting Get started Get Started with Grafana Build your first dashboard Getting started with Grafana Cloud What's new / Release notes All Company 
Help build the future of open source observability software Open positions Check out the open source projects we support Downloads Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE Free forever plan (Surprise: its actually useful) No credit card needed, ever. Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data Community resources end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with plugins. Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups Featured Getting started with the Grafana LGTM Stack Well demo how to get started using the LGTM Stack: Loki for logs, Grafana for visualization, Tempo for traces, and Mimir for metrics. 
[ { "data": "This documentation will help you go from a total beginner to a seasoned k6 expert! Get up and running in no time, using either a package manager, standalone installer, or the official Docker image. Write and execute your first load test locally using JavaScript and the k6 API, adding multiple virtual users, checks, and ramping stages. Learn how to leverage the results output to gain actionable insight about your application's performance. Grafana k6 is an open-source load testing tool that makes performance testing easy and productive for engineering teams. k6 is free, developer-centric, and extensible. Using k6, you can test the reliability and performance of your systems and catch performance regressions and problems earlier. k6 will help you to build resilient and performant applications that scale. k6 is developed by Grafana Labs and the community. k6 is packed with features, which you can learn all about in the documentation. Key features include: k6 users are typically Developers, QA Engineers, SDETs, and SREs. They use k6 for testing the performance and reliability of APIs, microservices, and websites. Common k6 use cases are: Load testing: k6 is optimized for minimal resource consumption and designed for running high load tests (spike, stress, soak tests). Browser testing: Through k6 browser, you can run browser-based performance testing and catch issues related to browsers only, which would be missed entirely at the protocol level. Chaos and resilience testing: You can use k6 to simulate traffic as part of your chaos experiments, trigger them from your k6 tests, or inject different types of faults in Kubernetes with xk6-disruptor. Performance and synthetic monitoring: With k6, you can automate and schedule tests to run very frequently with a small load to continuously validate the performance and availability of your production environment. You can also use Grafana Cloud Synthetic Monitoring for a managed solution built specifically for synthetic monitoring that supports k6 test scripts. Our load testing manifesto is the result of having spent years hip deep in the trenches, doing performance and load testing. We've created it to be used as guidance, helping you in getting your performance testing on the right track! k6 is a high-performing load testing tool, scriptable in JavaScript. The architectural design to have these capabilities brings some trade-offs: Does not run natively in a browser: By default, k6 does not render web pages the same way a browser does. Browsers can consume significant system resources. Skipping the browser allows running more load within a single machine. However, with k6 browser, you can interact with real browsers and collect frontend metrics as part of your k6 tests. Does not run in NodeJS: JavaScript is not generally well suited for high performance. 
To achieve maximum performance, the tool itself is written in Go, embedding a JavaScript runtime allowing for easy test scripting. If you want to import npm modules or libraries using NodeJS APIs, you can bundle npm modules with webpack and import them in your tests." } ]
{ "category": "Observability and Analysis", "file_name": "_pg=hp&plcmt=lt-box-governance.md", "project_name": "Grafana", "subcategory": "Observability" }
[ { "data": "All Products Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE All Open Source Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data All Community resources Dashboard templates Try out and share prebuilt visualizations Prometheus exporters Get your metrics into Prometheus quickly All end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana All Learn Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with" }, { "data": "Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups All Docs Grafana Cloud Grafana Grafana Alloy Grafana Loki Grafana Mimir Grafana Tempo Grafana Pyroscope Grafana OnCall Application Observability Grafana Faro Grafana Beyla Grafana k6 Prometheus Grafana Enterprise Grafana Enterprise Logs Grafana Enterprise Traces Grafana Enterprise Metrics Grafana plugins Community plugins Grafana Alerting Get started Get Started with Grafana Build your first dashboard Getting started with Grafana Cloud What's new / Release notes All Company 
Help build the future of open source observability software Open positions Check out the open source projects we support Downloads Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE Free forever plan (Surprise: its actually useful) No credit card needed, ever. Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data Community resources end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with plugins. Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups Featured Getting started with the Grafana LGTM Stack Well demo how to get started using the LGTM Stack: Loki for logs, Grafana for visualization, Tempo for traces, and Mimir for metrics. 
Grafana Cloud Grafana Grafana Alloy Grafana Loki Grafana Mimir Grafana Tempo Grafana Pyroscope Grafana OnCall Application Observability Grafana Faro Grafana Beyla Grafana k6 Prometheus Grafana Enterprise Grafana Enterprise Logs Grafana Enterprise Traces Grafana Enterprise Metrics Grafana plugins Community plugins Grafana Alerting Get started Get Started with Grafana Build your first dashboard Getting started with Grafana Cloud What's new / Release notes Grot" } ]
{ "category": "Observability and Analysis", "file_name": "_pg=hp&plcmt=lt-box-metrics.md", "project_name": "Grafana", "subcategory": "Observability" }
[ { "data": "All Products Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE All Open Source Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data All Community resources Dashboard templates Try out and share prebuilt visualizations Prometheus exporters Get your metrics into Prometheus quickly All end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana All Learn Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with" }, { "data": "Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups All Docs Grafana Cloud Grafana Grafana Alloy Grafana Loki Grafana Mimir Grafana Tempo Grafana Pyroscope Grafana OnCall Application Observability Grafana Faro Grafana Beyla Grafana k6 Prometheus Grafana Enterprise Grafana Enterprise Logs Grafana Enterprise Traces Grafana Enterprise Metrics Grafana plugins Community plugins Grafana Alerting Get started Get Started with Grafana Build your first dashboard Getting started with Grafana Cloud What's new / Release notes All Company 
Help build the future of open source observability software Open positions Check out the open source projects we support Downloads Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE Free forever plan (Surprise: its actually useful) No credit card needed, ever. Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data Community resources end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with plugins. Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups Featured Getting started with the Grafana LGTM Stack Well demo how to get started using the LGTM Stack: Loki for logs, Grafana for visualization, Tempo for traces, and Mimir for metrics. 
Grafana Cloud Grafana Grafana Alloy Grafana Loki Grafana Mimir Grafana Tempo Grafana Pyroscope Grafana OnCall Application Observability Grafana Faro Grafana Beyla Grafana k6 Prometheus Grafana Enterprise Grafana Enterprise Logs Grafana Enterprise Traces Grafana Enterprise Metrics Grafana plugins Community plugins Grafana Alerting Get started Get Started with Grafana Build your first dashboard Getting started with Grafana Cloud What's new / Release notes Grot" } ]
{ "category": "Observability and Analysis", "file_name": "_pg=hp&plcmt=lt-box-grafana.md", "project_name": "Grafana", "subcategory": "Observability" }
[ { "data": "All Products Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE All Open Source Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data All Community resources Dashboard templates Try out and share prebuilt visualizations Prometheus exporters Get your metrics into Prometheus quickly All end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana All Learn Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with" }, { "data": "Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups All Docs Grafana Cloud Grafana Grafana Alloy Grafana Loki Grafana Mimir Grafana Tempo Grafana Pyroscope Grafana OnCall Application Observability Grafana Faro Grafana Beyla Grafana k6 Prometheus Grafana Enterprise Grafana Enterprise Logs Grafana Enterprise Traces Grafana Enterprise Metrics Grafana plugins Community plugins Grafana Alerting Get started Get Started with Grafana Build your first dashboard Getting started with Grafana Cloud What's new / Release notes All Company 
Help build the future of open source observability software Open positions Check out the open source projects we support Downloads Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE Free forever plan (Surprise: its actually useful) No credit card needed," }, { "data": "Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data Community resources end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with plugins. Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups Featured Getting started with the Grafana LGTM Stack Well demo how to get started using the LGTM Stack: Loki for logs, Grafana for visualization, Tempo for traces, and Mimir for metrics. 
[ { "data": "Integrations bundle Grafana Agent, tailored Grafana dashboards, and alerting defaults for common observability targets like Linux hosts, Kubernetes clusters, and Nginx servers. With Grafana integrations, you can get a pre-configured Prometheus and Grafana-based observability stack up and running in minutes. See Install and manage integrations to learn how to use integrations. See Integrations reference for the current list of integrations. See the Grafana Agent integrations_config documentation for details about the integrations_config block for each integration. A Prometheus-based observability stack generally consists of the following core components: Configuring, installing, connecting, and maintaining all of these components often involves significant domain knowledge and can be tedious and time-consuming. Grafana integrations offer you a fast and easy way to get started with minimal effort. There are currently many Grafana integrations for common data sources such as Linux machines, databases, applications, and more. Any existing integration is included in Grafana Agent, but must be installed for it to send metrics that can be viewed in Grafana Cloud. When you install an integration, this involves activity in two places: in Grafana Cloud and on the host you gather metrics from. In Grafana Cloud, new dashboards are created and pre-configured to receive and visualize metrics from Grafana Agent. On the host where" } ]
{ "category": "Observability and Analysis", "file_name": "feeding-carbon.html.md", "project_name": "Graphite", "subcategory": "Observability" }
[ { "data": "Getting your data into Graphite is very flexible. There are three main methods for sending data to Graphite: Plaintext, Pickle, and AMQP. Its worth noting that data sent to Graphite is actually sent to the Carbon and Carbon-Relay, which then manage the data. The Graphite web interface reads this data back out, either from cache or straight off disk. Choosing the right transfer method for you is dependent on how you want to build your application or script to send data: The plaintext protocol is the most straightforward protocol supported by Carbon. The data sent must be in the following format: <metric path> <metric value> <metric timestamp>. Carbon will then help translate this line of text into a metric that the web interface and Whisper understand. On Unix, the nc program (netcat) can be used to create a socket and send data to Carbon (by default, plaintext runs on port 2003): ``` PORT=2003 SERVER=graphite.your.org echo \"local.random.diceroll 4 `date +%s`\" | nc ${SERVER} ${PORT} ``` As many netcat implementations exist, a parameter may be needed to instruct nc to close the socket once data is sent. Such param will usually be -q0, -c or -N. Refer to your nc implementation man page to determine it. Note that if your Carbon instance is listening using the UDP protocol, you also need the -u parameter. The pickle protocol is a much more efficient take on the plaintext protocol, and supports sending batches of metrics to Carbon in one go. The general idea is that the pickled data forms a list of multi-level tuples: ``` [(path, (timestamp, value)), ...] ``` Once youve formed a list of sufficient size (dont go too big!), and pickled it (if your client is running a more recent version of python than your server, you may need to specify the protocol) send the data over a socket to Carbons pickle receiver (by default, port 2004). Youll need to pack your pickled data into a packet containing a simple header: ``` payload = pickle.dumps(listOfMetricTuples, protocol=2) header = struct.pack(\"!L\", len(payload)) message = header + payload ``` You would then send the message object through a network socket. When AMQPMETRICNAMEINBODY is set to True in your carbon.conf file, the data should be of the same format as the plaintext protocol, e.g. echo local.random.diceroll 4 date +%s. When AMQPMETRICNAMEINBODY is set to False, you should omit" }, { "data": "Graphite is useful if you have some numeric values that change over time and you want to graph them. Basically you write a program to collect these numeric values which then sends them to graphites backend, Carbon. Every series stored in Graphite has a unique identifier, which is composed of a metric name and optionally a set of tags. In a traditional hierarchy, website.orbitz.bookings.air or something like that would represent the number of air bookings on orbitz. Before producing your data you need to decide what your naming scheme will be. In a path such as foo.bar.baz, each thing surrounded by dots is called a path component. So foo is a path component, as well as bar, etc. Each path component should have a clear and well-defined purpose. Volatile path components should be kept as deep into the hierarchy as possible. Matt _Aimonetti has a reasonably sane post describing the organization of your namespace. The disadvantage of a purely hierarchical system is that it is very difficult to make changes to the hierarchy, since anything querying Graphite will also need to be updated. 
When AMQP_METRIC_NAME_IN_BODY is set to True in your carbon.conf file, the data should be of the same format as the plaintext protocol, e.g. echo \"local.random.diceroll 4 `date +%s`\". When AMQP_METRIC_NAME_IN_BODY is set to False, you should omit the metric name ('local.random.diceroll' in this example). Graphite is useful if you have some numeric values that change over time and you want to graph them. Basically you write a program to collect these numeric values, which then sends them to Graphite's backend, Carbon. Every series stored in Graphite has a unique identifier, which is composed of a metric name and optionally a set of tags. In a traditional hierarchy, website.orbitz.bookings.air or something like that would represent the number of air bookings on Orbitz. Before producing your data you need to decide what your naming scheme will be. In a path such as foo.bar.baz, each thing surrounded by dots is called a path component. So foo is a path component, as well as bar, etc. Each path component should have a clear and well-defined purpose. Volatile path components should be kept as deep into the hierarchy as possible. Matt Aimonetti has a reasonably sane post describing the organization of your namespace. The disadvantage of a purely hierarchical system is that it is very difficult to make changes to the hierarchy, since anything querying Graphite will also need to be updated. Additionally, there is no built-in description of the meaning of any particular element in the hierarchy. To address these issues, Graphite also supports using tags to describe your metrics, which makes it much simpler to design the initial structure and to evolve it over time. A tagged series is made up of a name and a set of tags, like disk.used;datacenter=dc1;rack=a1;server=web01. In that example, the series name is disk.used and the tags are datacenter = dc1, rack = a1, and server = web01. When series are named this way they can be selected using the seriesByTag function as described in Graphite Tag Support. When using a tagged naming scheme it is much easier to add or alter individual tags as needed. It is important, however, to be aware that changing the number of tags reported for a given metric or the value of a tag will create a new database file on disk, so tags should not be used for data that changes over the lifetime of a particular metric. Graphite is built on fixed-size databases (see Whisper), so we have to configure in advance how much data we intend to store and at what level of precision. For instance, you could store your data with 1-minute precision (meaning you will have one data point for each minute) for, say, 2 days. Additionally you could store your data with 10-minute precision for 2 weeks, etc. The idea is that the storage cost is determined by the number of data points you want to store: the less fine your precision, the more time you can cover with fewer points. To determine the best retention configuration, you must answer all of the following questions. Once you have picked your naming scheme and answered all of the retention questions, you need to create a schema by creating/editing the /opt/graphite/conf/storage-schemas.conf file. The format of the schemas file is easiest to demonstrate with an example. Let's say we've written a script to collect system load data from various servers; the naming scheme will be like so: servers.HOSTNAME.METRIC, where HOSTNAME will be the server's hostname and METRIC will be something like cpu_load, mem_usage, open_files, etc. Also let's say we want to store this data with minutely precision for 30 days, then at 15-minute precision for 10 years. For details of implementing your schema, see the Configuring Carbon document. Basically, when carbon receives a metric, it determines where on the filesystem the whisper data file should be for that metric. If the data file does not exist, carbon knows it has to create it, but since whisper is a fixed-size database, some parameters must be determined at the time of file creation (this is the reason we're making a schema). Carbon looks at the schemas file, and in order of priority (highest to lowest) looks for the first schema whose pattern matches the metric name. If no schema matches, the default schema (2 hours of minutely data) is used. Once the appropriate schema is determined, carbon uses the retention configuration for the schema to create the whisper data file appropriately.
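As a rough sketch, a storage-schemas.conf entry matching that example could look something like the following. The section name and pattern regex are illustrative assumptions rather than anything mandated by Graphite; the retentions line is what encodes your answers to the retention questions, highest precision first.
```
[server_load]
pattern = ^servers\.
retentions = 1m:30d,15m:10y
```
Each frequency:history pair can use unit suffixes such as s, m, h, d, and y; adjust both the pattern and the retentions to match your own naming scheme and retention answers.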
A simple example of doing this from the unix terminal would look like this: ``` echo \"test.bash.stats 42 `date +%s`\" | nc graphite.example.com 2003 ``` There are many tools that interact with Graphite. See the Tools page for some choices of tools that may be used to feed Graphite. Copyright 2008-2012, Chris Davis; 2011-2021 The Graphite Project Revision b52987ac." } ]
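Tying together the retention discussion above (servers.HOSTNAME.METRIC stored at one-minute precision for 30 days, then 15-minute precision for 10 years), a corresponding entry in /opt/graphite/conf/storage-schemas.conf could look like the sketch below. The section name and regular expression are illustrative assumptions, not taken from this document; only the retention syntax follows Carbon's documented format.

```
[server_metrics]
pattern = ^servers\.
retentions = 60s:30d,15m:10y
```

Carbon checks schemas in priority order and falls back to the default of two hours of minutely data when nothing matches, so more specific patterns should sit above the catch-all ones.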
{ "category": "Observability and Analysis", "file_name": "_pg=hp&plcmt=lt-box-reports.md", "project_name": "Grafana", "subcategory": "Observability" }
[ { "data": "All Products Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE All Open Source Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data All Community resources Dashboard templates Try out and share prebuilt visualizations Prometheus exporters Get your metrics into Prometheus quickly All end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana All Learn Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with" }, { "data": "Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups All Docs Grafana Cloud Grafana Grafana Alloy Grafana Loki Grafana Mimir Grafana Tempo Grafana Pyroscope Grafana OnCall Application Observability Grafana Faro Grafana Beyla Grafana k6 Prometheus Grafana Enterprise Grafana Enterprise Logs Grafana Enterprise Traces Grafana Enterprise Metrics Grafana plugins Community plugins Grafana Alerting Get started Get Started with Grafana Build your first dashboard Getting started with Grafana Cloud What's new / Release notes All Company 
Help build the future of open source observability software Open positions Check out the open source projects we support Downloads Core LGTM Stack Logs powered by Grafana Loki Grafana for visualization Traces powered by Grafana Tempo Metrics powered by Grafana Mimir and Prometheus extend observability Performance & load testing powered by Grafana k6 Continuous profiling powered by Grafana Pyroscope Plugins Connect Grafana to data sources, apps, and more end-to-end solutions Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management with Grafana Alerting, Grafana Incident, Grafana OnCall, and Grafana SLO Synthetic Monitoring Powered by Grafana k6 Deploy The Stack Grafana Cloud Fully managed Grafana Enterprise Self-managed Pricing Hint: It starts at FREE Free forever plan (Surprise: its actually useful) No credit card needed, ever. Grafana Loki Multi-tenant log aggregation system Grafana Query, visualize, and alert on data Grafana Tempo High-scale distributed tracing backend Grafana Mimir Scalable and performant metrics backend Grafana OnCall On-call management Grafana Pyroscope Scalable continuous profiling backend Grafana Beyla eBPF auto-instrumentation Grafana Faro Frontend application observability web SDK Grafana Alloy OpenTelemetry Collector distribution with Prometheus pipelines Grafana k6 Load testing for engineering teams Prometheus Monitor Kubernetes and cloud native OpenTelemetry Instrument and collect telemetry data Graphite Scalable monitoring for time series data Community resources end-to-end solutions Opinionated solutions that help you get there easier and faster Kubernetes Monitoring Get K8s health, performance, and cost monitoring from cluster to container Application Observability Monitor application performance Frontend Observability Gain real user monitoring insights Incident Response & Management Detect and respond to incidents with a simplified workflow monitor infrastructure Out-of-the-box KPIs, dashboards, and alerts for observability visualize any data Instantly connect all your data sources to Grafana Stay up to date ObservabilityCON Annual flagship observability conference ObservabilityCON on the Road Observability roadshow series Story of Grafana 10 years of Grafana Observability Survey 2024 Key findings and results Blog News, releases, cool stories, and more Events Upcoming in-person and virtual events Success stories By use case, product, and industry Technical learning Documentation All the docs Webinars and videos Demos, webinars, and feature tours Tutorials Step-by-step guides Workshops Free, in-person or online Writers' Toolkit Contribute to technical documentation provided by Grafana Labs Plugin development Visit the Grafana developer portal for tools and resources for extending Grafana with plugins. Join the community Community Join the Grafana community Community forums Ask the community for help Community Slack Real-time engagement Grafana Champions Contribute to the community Community organizers Host local meetups Featured Getting started with the Grafana LGTM Stack Well demo how to get started using the LGTM Stack: Loki for logs, Grafana for visualization, Tempo for traces, and Mimir for metrics. 
Grafana Cloud Grafana Grafana Alloy Grafana Loki Grafana Mimir Grafana Tempo Grafana Pyroscope Grafana OnCall Application Observability Grafana Faro Grafana Beyla Grafana k6 Prometheus Grafana Enterprise Grafana Enterprise Logs Grafana Enterprise Traces Grafana Enterprise Metrics Grafana plugins Community plugins Grafana Alerting Get started Get Started with Grafana Build your first dashboard Getting started with Grafana Cloud What's new / Release notes Grot cannot remember your choice unless you click the consent notice at the bottom. I am Grot. Ask me anything Welcome to Grafana 11.0! This release contains some major improvements: most notably, the ability to explore your Prometheus metrics and Loki logs without writing any PromQL or LogQL, using Explore Metrics and Explore Logs. The dashboard experience is better than ever with edit mode for dashboards, AI-generated dashboard names and descriptions, and general availability for" }, { "data": "You can also take advantage of improvements to the canvas and table visualizations, new transformations, a revamp of the Alert Rule page, and more. For even more detail about all the changes in this release, refer to the changelog. For the specific steps we recommend when you upgrade to v11.0, check out our Upgrade Guide. For Grafana v11.0, weve also provided a list of breaking changes to help you upgrade with greater confidence. For information about these along with guidance on how to proceed, refer to Breaking changes in Grafana v11.0. Public preview in all editions of Grafana Explore Metrics is a query-less experience for browsing Prometheus-compatible metrics. Search for or filter to find a metric. Quickly find related metrics - all in just a few clicks. You do not need to learn PromQL! With Explore Metrics, you can: all without writing any queries! To learn more, refer to Explore Metrics as well as the following video demo: Experimental in Grafana Open Source and Enterprise Explore Logs is a queryless experience for exploring Loki logs - no LogQL required! The primary interaction modes are point-and-click based on log volume, similar to Explore Metrics. Highlights: Explore Logs is Open Source, and experimental - some papercuts are to be expected. Give it a try and let us know what you think! Available in public preview in all editions of Grafana For the past few months weve been working on a major update of our Dashboards architecture and migrated it to the Scenes library. This migration provides us with more stable, dynamic, and flexible dashboards as well as setting the foundation for what we envision the future of Grafana dashboards will be. Here are two of the improvements that are being introduced as part of this work. It can be difficult to efficiently navigate through the visually cluttered options during the dashboard editing process. With the introduction of the edit mode, we aim to provide an easier way to discover and interact with the dashboard edit experience. We moved the time picker into the dashboard canvas and now, together with template variables, it will stick to the top as you scroll through your dashboard. This has historically been a very requested feature that were very happy to be able to finally roll out! If you want to learn more, in detail, about all the improvements weve made, dont miss our blog post. Generally available in all editions of Grafana Dashboards, when accessed by users with the Viewer role, are now using the Scenes library. 
Those users shouldnt see any difference in the dashboards apart from two small changes to the user interface (UI): the variables UI has slightly changed and the time picker is now part of the dashboard container. Dashboards arent affected for users in other roles. This is the first step towards a more robust and dynamic dashboarding system that well be releasing in the upcoming months. Generally available in all editions of Grafana Subfolders are here at last! Some of you want subfolders in order to keep things" }, { "data": "Its easy for dashboard sprawl to get out of control, and setting up folders in a nested hierarchy helps with that. Others of you want subfolders in order to create nested layers of permissions, where teams have access at different levels that reflect their organizations hierarchy. We are thrilled to bring this long-awaited functionality to our community of users! Subfolders are currently being rolled out to Grafana Cloud instances and will be generally available to all Grafana users for the Grafana 11 release. Just a quick note: the upgrade to enable subfolders can cause some issues with alerts in certain cases. We think these cases are pretty rare, but just in case, youll want to check for this: If youve previously set up a folder that uses a forward slash in its name, and you have an alert rule in that folder, and the notification policy is set to match that folders name, notifications will be sent to the default receiver instead of the configured receiver. To correct this, take the following steps: If you use file provisioning, you can upgrade and update the routes at the same time. Generally available in all editions of Grafana You can now use generative AI to assist you in your Grafana dashboards. So far generative AI can help you generate panel and dashboard titles and descriptions - You can now generate a title and description for your panel or dashboard based on the data youve added to it. This is useful when you want to quickly visualize your data and dont want to spend time coming up with a title or description. Make sure to enable and configure Grafanas LLM app plugin. For more information, refer to the Grafana LLM app plugin documentation. When enabled, look for the Auto generate option next to the Title and Description fields in your panels and dashboards, or when you press the Save button. Generally available in all editions of Grafana Weve made a number of improvements to the canvas visualization. With this release, weve updated the canvas visualization to include much-requested flowcharting features. These improvements are: Weve updated data links so that you can add them to almost all elements or element properties that are tied to data. Previously, you could only add data links to text elements or elements that used the TextConfig object. This update removes that limitation. Note Documentation Available in public preview in all editions of Grafana With the newly added Infinite panning editor option, you can now view and navigate very large canvases. This option is displayed when the Pan and zoom switch is enabled. To try out this feature, you must first enable the canvasPanelPanZoom feature toggle. Documentation Generally available in all editions of Grafana Grafana 11 adds the ability to color full table rows using the Colored background cell type of the table visualization. When you configure fields in a table to use this cell type, an option to apply the color of the cell to the entire row becomes available. 
This feature is useful for a wide variety of use cases including mapping status fields to colors (for example, info, debug, warning) and allowing rows to be colored based on threshold" }, { "data": "This is one of the first steps in making formatting tables more seamless, and allows for quick scanning of data using the table visualization. To learn more, refer to the documentation for the Colored background cell type. Generally available in all editions of Grafana You now have the ability to customize specific colors for individual thresholds when using the Config from query results transformer. Previously, when you added multiple thresholds, they all defaulted to the same color, red. With this addition, you gain the flexibility to assign distinct colors to each threshold. This feature addresses a common pain point highlighted by users. With customizable threshold colors, you now have greater control over your data representation, fostering more insightful and impactful analyses across diverse datasets. Generally available in Grafana Cloud and Open Source This update to the Filter data by values transformation simplifies data filtering by enabling partial string matching on field values thanks to two new matchers: Contains substring and Does not contain substring. With the substring matcher built into the Filter data by values transformation, you can efficiently filter large datasets, displaying relevant information with speed and precision. Whether youre searching for keywords, product names, or user IDs, this feature streamlines the process, saving time and effort while ensuring accurate data output. In the Filter data by values transformation, simply add a condition, choose a field, choose your matcher, and then input the string to match against. This update will be rolled out to customers over the next few weeks. Available in public preview in Grafana Cloud and Enterprise Introducing a major performance improvement for the PDF export feature. Are you tired of waiting for your PDF to be generated or your report to be sent? Were working on a major update of the dashboard-to-PDF feature to make it faster for large dashboards. The generation time will no longer be proportional to the number of panels in your dashboard. As an example, an SLO dashboard containing around 200 panels has gone from taking more than seven minutes to be generated to only eleven seconds. This update also fixes all caveats related to rendering a report with panels or rows set to repeat by a variable, like rendering repeating panels inside collapsed rows. To try out this update, enable the newPDFRendering feature toggle. Generally available in all editions of Grafana (Re-)introducing Keep Last State to Grafana managed alert rules. You can now choose to keep the last evaluated state of an alert rule when that rule produces No Data or Error results. Simply choose the Keep Last State option for no data or error handling when editing a rule. Refer to the Alerting documentation on state and health of alert rules for more information. Generally available in all editions of Grafana The new alert rule detail view has a new look and feel with helpful metadata at the top. The namespace and group are shown in the breadcrumb navigation. 
This is interactive and can be used to filter rules by namespace or" }, { "data": "The rest of the alert detail content is split up into tabs: Query and conditions View the details of the query that is used for the alert rule, including the expressions and intermediate values for each step of the expression pipeline. A graph view is included for range queries and data sources that return time series-like data frames. Instances Explore each alert instance, its status, labels and various other metadata for multi-dimensional alert rules. History Explore the recorded history for an alert rule. Details Debug or audit using the alert rule metadata and view the alert rule annotations. Generally available in all editions of Grafana The Alerting Provisioning HTTP API has been updated to enforce Role-Based Access Control (RBAC). Generally available in all editions of Grafana In Grafana v10.1, we added a Tempo search editor powered by TraceQL (search tab). We also recommended using this new editor over the older non-TraceQL powered editor. The older non-TraceQL powered editor has been removed. Any existing queries using the older editor will be automatically migrated to the new TraceQL-powered editor. The new TraceQL-powered editor makes it much easier to build your query by way of static filters, better input/selection validation, copy query to the TraceQL tab, query preview, dedicated status filter, and the ability to run aggregate by (metrics summary) queries. Refer to Query tracing data to learn more. The Loki Search tab has been around since before we could natively query Tempo for traces. This search is used by a low number of users in comparison to the TraceQL-powered editor (Search tab) or the TraceQL tab itself. If you would like to see what logs are linked to a specific trace or service, you can use the Trace to logs feature, which provides an easy way to create a custom link and set an appropriate time range if necessary. Generally available in Grafana Open Source and Enterprise You can now use Windows Active Directory (or Kerberos) to authenticate to MSSQL servers from Grafana. There are four primary ways to authenticate from Grafana to a MSSQL instance with Windows Active Directory: To get started, refer to the Getting Started documentation for MSSQL. Available in public preview in Grafana Open Source and Enterprise If you manage your users using Grafanas built-in basic authorization as an identity provider, consider enabling our new strong password policy feature. Starting with Grafana v11.0, you can enable an opinionated strong password policy feature. This configuration option validates all password updates to comply with our strong password policy. To learn more about Grafanas strong password policy, refer to the documentation. Generally available in Grafana Enterprise We are announcinga license change to the anonymous access feature in Grafana 11. As you may already be aware, anonymous access allows users access to Grafana without login credentials. Anonymous access was an early feature of Grafana to share dashboards; however, we recently introduced Public Dashboards which allows you to share dashboards in a more secure manner. We also noticed that anonymous access inadvertently resulted in user licensing issues. After careful consideration, we have decided to charge for the continued use of anonymous access starting in Grafana 11. Affected Grafana versions Anonymous authentication is disabled by default in Grafana Cloud. 
This licensing change only affects Grafana Enterprise (self-managed) edition. Anonymous users will be charged as active users in Grafana Enterprise. Documentation" } ]
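Several of the features described above (for example the canvasPanelPanZoom and newPDFRendering previews) sit behind feature toggles. For a self-managed instance, a hedged sketch of enabling them in the Grafana configuration file (grafana.ini or custom.ini; the exact file and the toggles you need depend on your setup) looks like this:

```
[feature_toggles]
# comma-separated list of toggles to enable
enable = canvasPanelPanZoom, newPDFRendering
```

Restart Grafana after changing the configuration so the toggles take effect.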
{ "category": "Observability and Analysis", "file_name": "install.html.md", "project_name": "Graphite", "subcategory": "Observability" }
[ { "data": "Try Graphite in Docker and have it running in seconds: ``` docker run -d \\ --name graphite \\ --restart=always \\ -p 80:80 \\ -p 2003-2004:2003-2004 \\ -p 2023-2024:2023-2024 \\ -p 8125:8125/udp \\ -p 8126:8126 \\ graphiteapp/graphite-statsd ``` Check docker repo for details. This is portable, fast and easy to use. Or use instructions below for installation. Graphite renders graphs using the Cairo graphics library. This adds dependencies on several graphics-related libraries not typically found on a server. If youre installing from source you can use the check-dependencies.py script to see if the dependencies have been met or not. Basic Graphite requirements: Additionally, the Graphite webapp and Carbon require the Whisper database library which is part of the Graphite project. There are also several other dependencies required for additional features: See also On some systems it is necessary to install fonts for Cairo to use. If the webapp is running but all graphs return as broken images, this may be why. Most current Linux distributions have all of the requirements available in the base packages. RHEL based distributions may require the EPEL repository for requirements. Python module dependencies can be install with pip rather than system packages if desired or if using a Python version that differs from the system default. Some modules (such as Cairo) may require library development headers to be available. Graphite defaults to an installation layout that puts the entire install in its own directory: /opt/graphite Whisper is installed Pythons system-wide site-packages directory with Whispers utilities installed in the bin dir of the systems default prefix (generally /usr/bin/). Carbon and Graphite-web are installed in /opt/graphite/ with the following layout: bin/ conf/ lib/ Carbon PYTHONPATH storage/ log Log directory for Carbon and Graphite-web rrd Location for RRD files to be read whisper Location for Whisper data files to be stored and read ceres Location for Ceres data files to be stored and read webapp/ Graphite-web PYTHONPATH graphite/ Location of local_settings.py content/ Graphite-web static content directory Several installation options exist: If you run into any issues with Graphite, please to post a question to our Questions forum on Launchpad or join us on IRC in #graphite on FreeNode. Unfortunately, native Graphite on Windows is completely unsupported, but you can run Graphite on Windows in Docker or the Installing via Synthesize article will help you set up a Vagrant VM that will run Graphite. In order to leverage this, you will need to install Vagrant. Copyright 2008-2012, Chris Davis; 2011-2021 The Graphite Project Revision b52987ac." } ]
{ "category": "Observability and Analysis", "file_name": "tools.html.md", "project_name": "Graphite", "subcategory": "Observability" }
[ { "data": "A daemon which collects system performance statistics periodically and provides mechanisms to store the values in a variety of ways, including RRD. To send collectd metrics into carbon/graphite, use collectds write-graphite plugin (available as of 5.1). Other options include: Graphite can also read directly from collectds RRD files. RRD files can simply be added to STORAGEDIR/rrd (as long as directory names and files do not contain any . characters). For example, collectds host.name/load/load.rrd can be symlinked to rrd/collectd/hostname/load/load.rrd to graph collectd.host_name.load.load.{short,mid,long}term. Very light caching proxy for graphite metrics with additional features: A tool for manage graphite dashboards from command line: If you wish to use a backend to graphite other than Whisper, there are some options available to you. Copyright 2008-2012, Chris Davis; 2011-2021 The Graphite Project Revision b52987ac." } ]
{ "category": "Observability and Analysis", "file_name": "mac-installation.md", "project_name": "Headlamp", "subcategory": "Observability" }
[ { "data": "A common use-case for any Kubernetes web UI is to deploy it in-cluster and set up an ingress server for having it available to users. The easiest way to install headlamp in your existing cluster is to use helm with our helm chart . ``` helm repo add headlamp https://headlamp-k8s.github.io/headlamp/ helm install my-headlamp headlamp/headlamp --namespace kube-system ``` As usual it is possible to configure the helm release via the values file or setting your preferred values directly. ``` helm install my-headlamp headlamp/headlamp --namespace kube-system -f values.yaml helm install my-headlamp headlamp/headlamp --namespace kube-system --set replicaCount=2 ``` We also maintain a simple/vanilla file for setting up a Headlamp deployment and service. Be sure to review it and change anything you need. If youre happy with the options in this deployment file, and assuming you have a running Kubernetes cluster and your kubeconfig pointing to it, you can run: ``` kubectl apply -f https://raw.githubusercontent.com/kinvolk/headlamp/main/kubernetes-headlamp.yaml ``` With the instructions in the previous section, the Headlamp service should be running, but you still need the ingress server as mentioned. We provide an example sample ingress yaml file for this purpose, but you have to manually replace the URL placeholder with the desired URL (the ingress file also assumes that you have contour and a cert-manager set up, but if you dont then youll just not have TLS). Assuming your URL is headlamp.mydeployment.io, getting the sample ingress file and changing the URL can quickly be done by: ``` curl -s https://raw.githubusercontent.com/kinvolk/headlamp/main/kubernetes-headlamp-ingress-sample.yaml | sed -e s/URL/headlamp.mydeployment.io/ > headlamp-ingress.yaml ``` and with that, youll have a configured ingress file, so verify it and apply it: ``` kubectl apply -f ./headlamp-ingress.yaml ``` If you want to quickly access Headlamp (after having its service running) and dont want to set up an ingress for it, you can run use port-forwarding as follows: ``` kubectl port-forward -n kube-system service/headlamp 8080:80 ``` and then you can access localhost:8080 in your browser. Once Headlamp is up and running, be sure to enable access to it either by creating a service account or by setting up OIDC . Copyright 2024 The Headlamp Contributors The Linux Foundation (TLF) has registered trademarks and uses trademarks.For a list of TLF trademarks, see Trademark Usage" } ]
{ "category": "Observability and Analysis", "file_name": "in-cluster.md", "project_name": "Headlamp", "subcategory": "Observability" }
[ { "data": "Headlamp can be run as a desktop application, for users who dont want to deploy it in cluster, or those who want to manage unrelated clusters locally. Currently there are desktop apps for Linux , Mac , and Windows . Please check the following guides for the installation in your desired platform. If you wish to use a non-default kube config file, then you can do it by providing it as an argument to Headlamp, e.g.: ``` /path/to/headlamp /my/different/kubeconfig ``` or by using an environment variable: ``` KUBECONFIG=/my/different/kubeconfig /path/to/headlamp ``` If you need to use more than one kube config file at the same time, you can list each config file path with a separator. On unix: ``` KUBECONFIG=kubeconfig1:kubeconfig2:kubeconfig3 /path/to/headlamp ``` On windows cmd/PowerShell: ``` KUBECONFIG=kubeconfig1;kubeconfig2;kubeconfig3 /path/to/headlamp ``` OIDC has a feature makes more sense when running Headlamp in a cluster as it will allow cluster operators to just give users a URL that they can use for logging in and access Headlamp. However, if you have your kube config set to use OIDC for the authentication (because you already authenticated and produced a kube config with that data), Headlamp will read those settings and try to use them for offering the effortless login to the cluster. Still, the kube config OIDC settings will not provide a OIDC callback URL, so make sure that your OIDC configuration for your cluster include Headlamps OIDC callback in its redirect URIs. i.e. say youre using Dex for the OIDC connection and you have it already configured in your kube config, then be sure to add the /oidc-callback endpoint with Headlamps the local address to Dexs staticClient.redirectURIs: http://localhost:6644/oidc-callback. Copyright 2024 The Headlamp Contributors The Linux Foundation (TLF) has registered trademarks and uses trademarks.For a list of TLF trademarks, see Trademark Usage" } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "HertzBeat", "subcategory": "Observability" }
[ { "data": "First from the remote repository https://github.com/apache/hertzbeat.git fork a copy of the code into your own repository The remote dev and merge branch is master. Clone your repository to your local ``` git clone git@github.com:<Your Github ID>/hertzbeat.git``` ``` git remote add upstream git@github.com:apache/hertzbeat.git``` ``` git remote -v``` At this time, there will be two repositories: origin (your own repository) and upstream (remote repository) Get/Update remote repository code ``` git fetch upstream``` Synchronize remote repository code to local repository ``` git checkout origin/devgit merge --no-ff upstream/dev``` Note that you must create a new branch to develop features git checkout -b feature-xxx. It is not recommended to use the master branch for direct development After modifying the code locally, submit it to your own repository: Note that the submission information does not contain special characters ``` git commit -m 'commit content'git push``` Submit changes to the remote repository, you can see a green button \"Compare & pull request\" on your repository page, click it. Select the modified local branch and the branch you want to merge with the past, you need input the message carefully, describe doc is important as code, click \"Create pull request\". Then the community Committers will do CodeReview, and then he will discuss some details (design, implementation, performance, etc.) with you, afterward you can directly update the code in this branch according to the suggestions (no need to create a new PR). When this pr is approved, the commit will be merged into the master branch Finally, congratulations, you have become an official contributor to HertzBeat ! You will be added to the contributor wall, you can contact the community to obtain a contributor certificate. Apache HertzBeat is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF." } ]
{ "category": "Observability and Analysis", "file_name": "how_to_release.md", "project_name": "HertzBeat", "subcategory": "Observability" }
[ { "data": "Here is the Apache HertzBeat (incubating) official download page. Please choose version to download from the following tables. It is recommended use the latest. Previous releases of HertzBeat may be affected by security issues, please use the latest one. | Version | Date | Download | Release Notes | |:-|:--|:-|:-| | v1.6.0 | 2024.06.10 | apache-hertzbeat-1.6.0-incubating-bin.tar.gz ( signature , sha512 ) apache-hertzbeat-collector-1.6.0-incubating-bin.tar.gz ( signature , sha512 ) apache-hertzbeat-1.6.0-incubating-src.tar.gz ( signature , sha512 ) | release note | Apache HertzBeat provides a docker image for each release. You can pull the image from the Docker Hub. For older releases, please check the archive. Apache HertzBeat is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF." } ]
{ "category": "Observability and Analysis", "file_name": "guide.md", "project_name": "HertzBeat", "subcategory": "Observability" }
[ { "data": "ISSUE/PR(pull request) driving and naming After creating a new PR, you need to associate the existing corresponding ISSUE at the Github Development button on the PR page (if there is no corresponding ISSUE, it is recommended to create a new corresponding ISSUE). Title naming format [feature/bugfix/doc/improve/refactor/bug/cleanup] title Description It's recommended that PR should be arranged changes such as cleanup, Refactor, improve, and feature into separated PRs/Commits. Commit message(English, lowercase, no special characters) The commit of messages should follow a pattern similar to the [feature/bugfix/doc/improve/refactor/bug/cleanup] title Backend code specification Maven plugin: checkstyle Just run mvn checkstyle:checkstyle. Frontend code formatting plugin eslint Just run npm run lint:fix in web-app Prioritize selecting nouns for variable naming, it's easier to distinguish between variables or methods. ``` Cache<String> publicKeyCache;``` Pinyin abbreviations are prohibited for variables (excluding nouns such as place names), such as chengdu. It is recommended to end variable names with a type. For variables of type Collection/List, take xxxx (plural representing multiple elements) or end with xxxList (specific type). For variables of type map, describe the key and value clearly: ``` Map<Long, User> idUserMap; Map<Long, String> userIdNameMap;``` That can intuitively know the type and meaning of the variable through its name. Method names should start with a verb first as follows: ``` void computeVcores(Object parameter1);``` Note: It is not necessary to strictly follow this rule in the Builder tool class. Redundant strings should be extracted as constants If a constant has been hardcoded twice or more times, please directly extract it as a constant and change the corresponding reference. In generally, constants in log can be ignored to extract. Negative demo: ``` public static RestResponse success(Object data) { RestResponse resp = new RestResponse(); resp.put(\"status\", \"success\"); resp.put(\"code\", ResponseCode.CODESUCCESS); resp.put(\"data\", data); return resp;}public static RestResponse error() { RestResponse resp = new RestResponse(); resp.put(\"status\", \"error\"); resp.put(\"code\", ResponseCode.CODEFAIL); resp.put(\"data\", null); return resp;}``` Positive demo: Strings are extracted as constant references. ``` public static final String STATUS = \"status\"; public static final String CODE = \"code\"; public static final String DATA = \"data\"; public static RestResponse success(Object data) { RestResponse resp = new RestResponse(); resp.put(STATUS, \"success\"); resp.put(CODE, ResponseCode.CODESUCCESS); resp.put(DATA, data); return resp; } public static RestResponse error() { RestResponse resp = new RestResponse(); resp.put(STATUS, \"error\"); resp.put(CODE, ResponseCode.CODEFAIL); resp.put(DATA, null); return resp; }``` Ensure code readability and intuitiveness The string in the annotation symbol doesn't need to be extracted as constant. The referenced package or resource name doesn't need to be extracted as constant. Variables that have not been reassigned must also be declared as final types. About the arrangement order of constant/variable lines Sort the variable lines in the class in the order of Sort the methods in the class in the order of public, protected, private Static methods of a class can be placed after non-static methods and sorted according to consistent method visibility. 
When there are restrictions on the method, the parameters and returned values of the method need to be annotated with @Nonnull or @Nullable annotations and constraints. For example, if the parameter cannot be null, it is best to add a @Nonnull" }, { "data": "If the returned value can be null, the @Nullable annotation should be added first. If there are too many lines of code in the method, please have a try on using multiple sub methods at appropriate points to segment the method body. Generally speaking, it needs to adhere to the following principles: In addition, it is also necessary to consider whether the splitting is reasonable in terms of components, logic, abstraction, and other aspects in the scenario. However, there is currently no clear definition of demo. During the evolution process, we will provide additional examples for developers to have a clearer reference and understanding. For collection returned values, unless there are special concurrent (such as thread safety), always return the interface, such as: If there are multiple threads, the following declaration or returned types can be used: ``` private CurrentHashMap map; public CurrentHashMap funName();``` Use isEmpty() instead of length() == 0 or size() == 0 Negative demo ``` if (pathPart.length() == 0) { return;}``` Positive demo ``` if (pathPart.isEmpty()) { return;}``` The thread pool needs to be managed, using a unified entry point to obtain the thread pool. Thread pool needs to be resource constrained to prevent resource leakage caused by improper handling Avoid unreasonable condition/control branches order leads to: Generally speaking, if a method's code line depth exceeds 2+ Tabs due to continuous nested if... else.., it should be considered to try to reduce code line depth and improve readability like follows: Union or merge the logic into the next level calling ``` if (isInsert) { save(platform); } else { updateById(platform); }``` ``` saveOrUpdate(platform);``` Merge the conditions ``` if (expression1) { if(expression2) { ...... } }``` ``` if (expression1 && expression2) { ...... }``` Reverse the condition Negative demo: ``` public void doSomething() { // Ignored more deeper block lines // ..... if (condition1) { ... } else { ... } }``` Positive demo: ``` public void doSomething() { // Ignored more deeper block lines // ..... if (!condition1) { ... return; } // ... }``` Using a single variable or method to reduce the complex conditional expression Negative demo: ``` if (dbType.indexOf(\"sqlserver\") >= 0 || dbType.indexOf(\"sql server\") >= 0) { ... }``` Positive demo: ``` if (containsSqlServer(dbType)) { .... } //..... // definition of the containsSqlServer``` Using sonarlint and better highlights to check code depth looks like good in the future. Method lacks comments: Missing necessary class header description comments. Add What, Note, etc. like mentioned in the 1. The method declaration in the interface must be annotated. If the semantics of the implementation and the annotation content at the interface declaration are inconsistent, the specific implementation method also needs to be rewritten with annotations. If the semantics of the method implementation are consistent with the annotation content at the interface declaration, it is not recommended to write annotations to avoid duplicate annotations. The first word in the comment lines need to be capitalized, like param lines, return lines. 
If a special reference as a subject does not need to be capitalized, special symbols such as quotation marks need to be" }, { "data": "Prefer non-capturing lambdas (lambdas that do not contain references to the outer scope). Capturing lambdas need to create a new object instance for every call. Non-capturing lambdas can use the same instance for each invocation. Negative demo: ``` map.computeIfAbsent(key, x -> key.toLowerCase())``` Positive demo: ``` map.computeIfAbsent(key, k -> k.toLowerCase());``` Consider method references instead of inline lambdas Negative demo: ``` map.computeIfAbsent(key, k-> Loader.load(k));``` Positive demo: ``` map.computeIfAbsent(key, Loader::load);``` Avoid Java Streams in any performance critical code. The main motivation to use Java Streams would be to improve code readability. As such, they can be a good match in parts of the code that are not data-intensive, but deal with coordination. Even in the latter case, try to limit the scope to a method, or a few private methods within an internal class. Use StringUtils.isBlank instead of StringUtils.isEmpty Negative demo: ``` if (StringUtils.isEmpty(name)) {return;}``` Positive demo: ``` if (StringUtils.isBlank(name)) {return;}``` Use StringUtils.isNotBlank instead of StringUtils.isNotEmpty Negative demo: ``` if (StringUtils.isNotEmpty(name)) { return;}``` Positive demo: ``` if (StringUtils.isNotBlank(name)) { return;}``` Use StringUtils.isAllBlank instead of StringUtils.isAllEmpty Negative demo: ``` if (StringUtils.isAllEmpty(name, age)) { return;}``` Positive demo: ``` if (StringUtils.isAllBlank(name, age)) { return;}``` Enumeration value comparison Negative demo: ``` if (status.equals(JobStatus.RUNNING)) { return;}``` Positive demo: ``` if (status == JobStatus.RUNNING) { return;}``` Enumeration classes do not need to implement Serializable Negative demo: ``` public enum JobStatus implements Serializable { ...}``` Positive demo: ``` public enum JobStatus { ...}``` Use Enum.name() instead of Enum.toString() Negative demo: ``` System.out.println(JobStatus.RUNNING.toString());``` Positive demo: ``` System.out.println(JobStatus.RUNNING.name());``` Enumeration class names uniformly use the Enum suffix Negative demo: ``` public enum JobStatus { ...}``` Positive demo: ``` public enum JobStatusEnum { ...}``` Negative demo: ``` @deprecatedpublic void process(String input) { ...}``` Positive demo: ``` @Deprecatedpublic void process(String input) { ...}``` Use placeholders for log output: ``` log.info(\"Deploy cluster request \" + deployRequest);``` ``` log.info(\"load plugin:{} to {}\", file.getName(), appPlugins);``` Pay attention to the selection of log level when printing logs When printing the log content, if the actual parameters of the log placeholder are passed, it is necessary to avoid premature evaluation to avoid unnecessary evaluation caused by the log level. Negative demo: Assuming the current log level is INFO: ``` // ignored declaration lines. List<User> userList = getUsersByBatch(1000); LOG.debug(\"All users: {}\", getAllUserIds(userList));``` Positive demo: In this case, we should determine the log level in advance before making actual log calls as follows: ``` // ignored declaration lines. List<User> userList = getUsersByBatch(1000); if (LOG.isDebugEnabled()) { LOG.debug(\"All ids of users: {}\", getAllIDsOfUsers(userList)); }``` It's recommended to use JUnit5 to develop test case preparation The implemented interface needs to write the e2e test case script under the e2e module. 
Apache HertzBeat is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF." } ]
{ "category": "Observability and Analysis", "file_name": "new_committer_process.md", "project_name": "HertzBeat", "subcategory": "Observability" }
[ { "data": "Apache New Committer Guideline Call a vote in mailing private@hertzbeat.apache.org see Committer Vote Template Close the vote see Close Vote Template If the result is positive, invite the new committer see Committer Invite Template If accepted, then: Accept the committer see Committer Accept Template New Committer sign CLA and wait for CLA is recorded Request creation of the committer account see Committer Account Creation Notify the committer of completion see Committer Done Template Note that, there are three placeholder in template should be replaced before using ``` To: private@hertzbeat.apache.orgSubject: [VOTE] New committer: ${NEWCOMMITTERNAME}``` ``` Hi HertzBeat PPMC,This is a formal vote about inviting ${NEWCOMMITTERNAME} as our new committer.${Work list} https://github.com/apache/hertzbeat/commits?author=${NEWCOMMITTERNAME}``` Note that, Voting ends one week from today, i.e. midnight UTC on YYYY-MM-DD Apache Voting Guidelines ``` To: private@hertzbeat.apache.orgSubject: [RESULT] [VOTE] New committer: ${NEWCOMMITTERNAME}``` ``` Hi HertzBeat PPMC,The vote has now closed. The results are:Binding Votes:+1 [TOTAL BINDING +1 VOTES] 0 [TOTAL BINDING +0/-0 VOTES]-1 [TOTAL BINDING -1 VOTES]The vote is *successful/not successful*``` ``` To: ${NEWCOMMITTEREMAIL}Cc: private@hertzbeat.apache.orgSubject: Invitation to become HertzBeat committer: ${NEWCOMMITTERNAME}``` ``` Hello ${NEWCOMMITTERNAME},The HertzBeat Project Management Committee (PMC) hereby offers you committer privileges to the project.These privileges are offered on the understanding thatyou'll use them reasonably and with common sense.We like to work on trust rather than unnecessary constraints. Being a committer enables you to more easily make changes without needing to go through the patch submission process.Being a committer does not require you to participate any more than you already do. It does tend to make one even more committed. You will probably find that you spend more time here.Of course, you can decline and instead remain as a contributor, participating as you do now.A. This personal invitation is a chance for you to accept or decline in private. Either way, please let us know in reply to the private@hertzbeat.apache.orgaddress only.B. If you accept, the next step is to register an iCLA: 1. Details of the iCLA and the forms are found through this link: https://www.apache.org/licenses/#clas 2. Instructions for its completion and return to the Secretary of the ASF are found at https://www.apache.org/licenses/#submitting 3. When you transmit the completed iCLA, request to notify the Apache HertzBeat and choose a unique Apache ID. Look to see if your preferred ID is already taken at https://people.apache.org/committer-index.html This will allow the Secretary to notify the PMC when your iCLA has been" }, { "data": "recording of your iCLA is noted, you will receive a follow-up message with the next steps for establishing you as a committer.``` ``` To: ${NEWCOMMITTEREMAIL}Cc: private@hertzbeat.apache.orgSubject: Re: invitation to become HertzBeat committer``` ``` Welcome. Here are the next steps in becoming a project committer. After thatwe will make an announcement to the dev@hertzbeat.apache.org list.You need to send a Contributor License Agreement to the ASF.Normally you would send an Individual CLA. If you also makecontributions done in work time or using work resources,see the Corporate CLA. 
Ask us if you have any issues.https://www.apache.org/licenses/#clas.You need to choose a preferred ASF user name and alternatives.In order to ensure it is available you can view a list of taken IDs athttps://people.apache.org/committer-index.htmlPlease notify us when you have submitted the CLA and by what means you did so. This will enable us to monitor its progress.We will arrange for your Apache user account when the CLA has been recorded.After that is done, please make followup replies to the dev@hertzbeat.apache.org list.We generally discuss everything there and keep theprivate@hertzbeat.apache.org list for occasional matters which must be private.The developer section of the website describes roles within the ASF and provides otherresources: https://www.apache.org/foundation/how-it-works.html https://www.apache.org/dev/The incubator also has some useful information for new committersin incubating projects: https://incubator.apache.org/guides/committer.html https://incubator.apache.org/guides/ppmc.htmlJust as before you became a committer, participation in any ASF communityrequires adherence to the ASF Code of Conduct: https://www.apache.org/foundation/policies/conduct.htmlYours,The Apache HertzBeat PPMC``` ``` To: private@hertzbeat.apache.org, ${NEWCOMMITTEREMAIL}Subject: account request: ${NEWCOMMITTERNAME}``` ``` ${NEWCOMMITTERNAME}, as you know, the ASF Infrastructure has set up yourcommitter account with the username '${NEWCOMMITTERAPACHE_NAME}'.Please follow the instructions to set up your SSH,svn password, svn configuration, email forwarding, etc.https://www.apache.org/dev/#committersYou have commit access to specific sections of theASF repository, as follows:The general \"committers\" at: https://svn.apache.org/repos/private/committersIf you have any questions during this phase, then pleasesee the following resources:Apache developer's pages: https://www.apache.org/dev/Incubator committer guide: https://incubator.apache.org/guides/committer.htmlNaturally, if you don't understand anything be sure to ask us on the dev@hertzbeat.apache.org mailing list. Documentation is maintained by volunteers and hence can be out-of-date and incomplete - of courseyou can now help fix that.A PPMC member will announce your election to the dev list soon.``` Apache HertzBeat is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF." } ]
{ "category": "Observability and Analysis", "file_name": "quickstart.md", "project_name": "HertzBeat", "subcategory": "Observability" }
[ { "data": "The Developer Mailing List is the community-recommended way to communicate and obtain the latest information. Before you post anything to the mailing lists, be sure that you already subscribe to them. | List Name | Address | Subscribe | Unsubscribe | Archive | |:|:-|:|:--|:-| | Developer List | dev@hertzbeat.apache.org | subscribe | unsubscribe | archive | | List Name | Address | Subscribe | Unsubscribe | Archive | |:|:--|:|:--|:-| | Notification List | notifications@hertzbeat.apache.org | subscribe | unsubscribe | archive | Sending a subscription email is also very simple. The steps are as follows: When posting to the mailing lists, please use plain text emails. Do not use HTML emails. HTML emails are more likely to be targeted as spam mails and rejected. It may get malformed through different mail clients and not easily readable by others. Apache HertzBeat is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF." } ]
{ "category": "Observability and Analysis", "file_name": "github-privacy-statement.md", "project_name": "Hubble", "subcategory": "Observability" }
[ { "data": "Effective date: February 1, 2024 Welcome to the GitHub Privacy Statement. This is where we describe how we handle your Personal Data, which is information that is directly linked or can be linked to you. It applies to the Personal Data that GitHub, Inc. or GitHub B.V., processes as the Data Controller when you interact with websites, applications, and services that display this Statement (collectively, Services). This Statement does not apply to services or products that do not display this Statement, such as Previews, where relevant. When a school or employer supplies your GitHub account, they assume the role of Data Controller for most Personal Data used in our Services. This enables them to: Should you access a GitHub Service through an account provided by an organization, such as your employer or school, the organization becomes the Data Controller, and this Privacy Statement's direct applicability to you changes. Even so, GitHub remains dedicated to preserving your privacy rights. In such circumstances, GitHub functions as a Data Processor, adhering to the Data Controller's instructions regarding your Personal Data's processing. A Data Protection Agreement governs the relationship between GitHub and the Data Controller. For further details regarding their privacy practices, please refer to the privacy statement of the organization providing your account. In cases where your organization grants access to GitHub products, GitHub acts as the Data Controller solely for specific processing activities. These activities are clearly defined in a contractual agreement with your organization, known as a Data Protection Agreement. You can review our standard Data Protection Agreement at GitHub Data Protection Agreement. For those limited purposes, this Statement governs the handling of your Personal Data. For all other aspects of GitHub product usage, your organization's policies apply. When you use third-party extensions, integrations, or follow references and links within our Services, the privacy policies of these third parties apply to any Personal Data you provide or consent to share with them. Their privacy statements will govern how this data is processed. Personal Data is collected from you directly, automatically from your device, and also from third parties. The Personal Data GitHub processes when you use the Services depends on variables like how you interact with our Services (such as through web interfaces, desktop or mobile applications), the features you use (such as pull requests, Codespaces, or GitHub Copilot) and your method of accessing the Services (your preferred IDE). Below, we detail the information we collect through each of these channels: The Personal Data we process depends on your interaction and access methods with our Services, including the interfaces (web, desktop, mobile apps), features used (pull requests, Codespaces, GitHub Copilot), and your preferred access tools (like your IDE). This section details all the potential ways GitHub may process your Personal Data: When carrying out these activities, GitHub practices data minimization and uses the minimum amount of Personal Information required. We may share Personal Data with the following recipients: If your GitHub account has private repositories, you control the access to that information. 
GitHub personnel does not access private repository information without your consent except as provided in this Privacy Statement and for: GitHub will provide you with notice regarding private repository access unless doing so is prohibited by law or if GitHub acted in response to a security threat or other risk to security. GitHub processes Personal Data in compliance with the GDPR, ensuring a lawful basis for each processing" }, { "data": "The basis varies depending on the data type and the context, including how you access the services. Our processing activities typically fall under these lawful bases: Depending on your residence location, you may have specific legal rights regarding your Personal Data: To exercise these rights, please send an email to privacy[at]github[dot]com and follow the instructions provided. To verify your identity for security, we may request extra information before addressing your data-related request. Please contact our Data Protection Officer at dpo[at]github[dot]com for any feedback or concerns. Depending on your region, you have the right to complain to your local Data Protection Authority. European users can find authority contacts on the European Data Protection Board website, and UK users on the Information Commissioners Office website. We aim to promptly respond to requests in compliance with legal requirements. Please note that we may retain certain data as necessary for legal obligations or for establishing, exercising, or defending legal claims. GitHub stores and processes Personal Data in a variety of locations, including your local region, the United States, and other countries where GitHub, its affiliates, subsidiaries, or subprocessors have operations. We transfer Personal Data from the European Union, the United Kingdom, and Switzerland to countries that the European Commission has not recognized as having an adequate level of data protection. When we engage in such transfers, we generally rely on the standard contractual clauses published by the European Commission under Commission Implementing Decision 2021/914, to help protect your rights and enable these protections to travel with your data. To learn more about the European Commissions decisions on the adequacy of the protection of personal data in the countries where GitHub processes personal data, see this article on the European Commission website. GitHub also complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. GitHub has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union in reliance on the EU-U.S. DPF and from the United Kingdom (and Gibraltar) in reliance on the UK Extension to the EU-U.S. DPF. GitHub has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy statement and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) program, and to view our certification, please visit https://www.dataprivacyframework.gov/. 
GitHub has the responsibility for the processing of Personal Data it receives under the Data Privacy Framework (DPF) Principles and subsequently transfers to a third party acting as an agent on GitHubs behalf. GitHub shall remain liable under the DPF Principles if its agent processes such Personal Data in a manner inconsistent with the DPF Principles, unless the organization proves that it is not responsible for the event giving rise to the damage. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, GitHub commits to resolve DPF Principles-related complaints about our collection and use of your personal" }, { "data": "EU, UK, and Swiss individuals with inquiries or complaints regarding our handling of personal data received in reliance on the EU-U.S. DPF, the UK Extension, and the Swiss-U.S. DPF should first contact GitHub at: dpo[at]github[dot]com. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://go.adr.org/dpf_irm.html for more information or to file a complaint. The services of the International Centre for Dispute Resolution are provided at no cost to you. An individual has the possibility, under certain conditions, to invoke binding arbitration for complaints regarding DPF compliance not resolved by any of the other DPF mechanisms. For additional information visit https://www.dataprivacyframework.gov/s/article/ANNEX-I-introduction-dpf?tabset-35584=2. GitHub is subject to the investigatory and enforcement powers of the Federal Trade Commission (FTC). Under Section 5 of the Federal Trade Commission Act (15 U.S.C. 45), an organization's failure to abide by commitments to implement the DPF Principles may be challenged as deceptive by the FTC. The FTC has the power to prohibit such misrepresentations through administrative orders or by seeking court orders. GitHub uses appropriate administrative, technical, and physical security controls to protect your Personal Data. Well retain your Personal Data as long as your account is active and as needed to fulfill contractual obligations, comply with legal requirements, resolve disputes, and enforce agreements. The retention duration depends on the purpose of data collection and any legal obligations. GitHub uses administrative, technical, and physical security controls where appropriate to protect your Personal Data. Contact us via our contact form or by emailing our Data Protection Officer at dpo[at]github[dot]com. Our addresses are: GitHub B.V. Prins Bernhardplein 200, Amsterdam 1097JB The Netherlands GitHub, Inc. 88 Colin P. Kelly Jr. St. San Francisco, CA 94107 United States Our Services are not intended for individuals under the age of 13. We do not intentionally gather Personal Data from such individuals. If you become aware that a minor has provided us with Personal Data, please notify us. GitHub may periodically revise this Privacy Statement. If there are material changes to the statement, we will provide at least 30 days prior notice by updating our website or sending an email to your primary email address associated with your GitHub account. Below are translations of this document into other languages. In the event of any conflict, uncertainty, or apparent inconsistency between any of those versions and the English version, this English version is the controlling version. 
Cliquez ici pour obtenir la version franaise: Dclaration de confidentialit de GitHub (PDF). For translations of this statement into other languages, please visit https://docs.github.com/ and select a language from the drop-down menu under English. GitHub uses cookies to provide, secure and improve our Service or to develop new features and functionality of our Service. For example, we use them to (i) keep you logged in, (ii) remember your preferences, (iii) identify your device for security and fraud purposes, including as needed to maintain the integrity of our Service, (iv) compile statistical reports, and (v) provide information and insight for future development of GitHub. We provide more information about cookies on GitHub that describes the cookies we set, the needs we have for those cookies, and the expiration of such cookies. For Enterprise Marketing Pages, we may also use non-essential cookies to (i) gather information about enterprise users interests and online activities to personalize their experiences, including by making the ads, content, recommendations, and marketing seen or received more relevant and (ii) serve and measure the effectiveness of targeted advertising and other marketing" }, { "data": "If you disable the non-essential cookies on the Enterprise Marketing Pages, the ads, content, and marketing you see may be less relevant. Our emails to users may contain a pixel tag, which is a small, clear image that can tell us whether or not you have opened an email and what your IP address is. We use this pixel tag to make our email communications more effective and to make sure we are not sending you unwanted email. The length of time a cookie will stay on your browser or device depends on whether it is a persistent or session cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay until they expire or are deleted. The expiration time or retention period applicable to persistent cookies depends on the purpose of the cookie collection and tool used. You may be able to delete cookie data. For more information, see \"GitHub General Privacy Statement.\" We use cookies and similar technologies, such as web beacons, local storage, and mobile analytics, to operate and provide our Services. When visiting Enterprise Marketing Pages, like resources.github.com, these and additional cookies, like advertising IDs, may be used for sales and marketing purposes. Cookies are small text files stored by your browser on your device. A cookie can later be read when your browser connects to a web server in the same domain that placed the cookie. The text in a cookie contains a string of numbers and letters that may uniquely identify your device and can contain other information as well. This allows the web server to recognize your browser over time, each time it connects to that web server. Web beacons are electronic images (also called single-pixel or clear GIFs) that are contained within a website or email. When your browser opens a webpage or email that contains a web beacon, it automatically connects to the web server that hosts the image (typically operated by a third party). This allows that web server to log information about your device and to set and read its own cookies. In the same way, third-party content on our websites (such as embedded videos, plug-ins, or ads) results in your browser connecting to the third-party web server that hosts that content. 
Mobile identifiers for analytics can be accessed and used by apps on mobile devices in much the same way that websites access and use cookies. When visiting Enterprise Marketing pages, like resources.github.com, on a mobile device these may allow us and our third-party analytics and advertising partners to collect data for sales and marketing purposes. We may also use so-called flash cookies (also known as Local Shared Objects or LSOs) to collect and store information about your use of our Services. Flash cookies are commonly used for advertisements and videos. The GitHub Services use cookies and similar technologies for a variety of purposes, including to store your preferences and settings, enable you to sign-in, analyze how our Services perform, track your interaction with the Services, develop inferences, combat fraud, and fulfill other legitimate purposes. Some of these cookies and technologies may be provided by third parties, including service providers and advertising" }, { "data": "For example, our analytics and advertising partners may use these technologies in our Services to collect personal information (such as the pages you visit, the links you click on, and similar usage information, identifiers, and device information) related to your online activities over time and across Services for various purposes, including targeted advertising. GitHub will place non-essential cookies on pages where we market products and services to enterprise customers, for example, on resources.github.com. We and/or our partners also share the information we collect or infer with third parties for these purposes. The table below provides additional information about how we use different types of cookies: | Purpose | Description | |:--|:--| | Required Cookies | GitHub uses required cookies to perform essential website functions and to provide the services. For example, cookies are used to log you in, save your language preferences, provide a shopping cart experience, improve performance, route traffic between web servers, detect the size of your screen, determine page load times, improve user experience, and for audience measurement. These cookies are necessary for our websites to work. | | Analytics | We allow third parties to use analytics cookies to understand how you use our websites so we can make them better. For example, cookies are used to gather information about the pages you visit and how many clicks you need to accomplish a task. We also use some analytics cookies to provide personalized advertising. | | Social Media | GitHub and third parties use social media cookies to show you ads and content based on your social media profiles and activity on GitHubs websites. This ensures that the ads and content you see on our websites and on social media will better reflect your interests. This also enables third parties to develop and improve their products, which they may use on websites that are not owned or operated by GitHub. | | Advertising | In addition, GitHub and third parties use advertising cookies to show you new ads based on ads you've already seen. Cookies also track which ads you click or purchases you make after clicking an ad. This is done both for payment purposes and to show you ads that are more relevant to you. For example, cookies are used to detect when you click an ad and to show you ads based on your social media interests and website browsing history. 
| You have several options to disable non-essential cookies: Specifically on GitHub Enterprise Marketing Pages Any GitHub page that serves non-essential cookies will have a link in the pages footer to cookie settings. You can express your preferences at any time by clicking on that linking and updating your settings. Some users will also be able to manage non-essential cookies via a cookie consent banner, including the options to accept, manage, and reject all non-essential cookies. Generally for all websites You can control the cookies you encounter on the web using a variety of widely-available tools. For example: These choices are specific to the browser you are using. If you access our Services from other devices or browsers, take these actions from those systems to ensure your choices apply to the data collected when you use those systems. This section provides extra information specifically for residents of certain US states that have distinct data privacy laws and regulations. These laws may grant specific rights to residents of these states when the laws come into effect. This section uses the term personal information as an equivalent to the term Personal Data. These rights are common to the US State privacy laws: We may collect various categories of personal information about our website visitors and users of \"Services\" which includes GitHub applications, software, products, or" }, { "data": "That information includes identifiers/contact information, demographic information, payment information, commercial information, internet or electronic network activity information, geolocation data, audio, electronic, visual, or similar information, and inferences drawn from such information. We collect this information for various purposes. This includes identifying accessibility gaps and offering targeted support, fostering diversity and representation, providing services, troubleshooting, conducting business operations such as billing and security, improving products and supporting research, communicating important information, ensuring personalized experiences, and promoting safety and security. To make an access, deletion, correction, or opt-out request, please send an email to privacy[at]github[dot]com and follow the instructions provided. We may need to verify your identity before processing your request. If you choose to use an authorized agent to submit a request on your behalf, please ensure they have your signed permission or power of attorney as required. To opt out of the sharing of your personal information, you can click on the \"Do Not Share My Personal Information\" link on the footer of our Websites or use the Global Privacy Control (\"GPC\") if available. Authorized agents can also submit opt-out requests on your behalf. We also make the following disclosures for purposes of compliance with California privacy law: Under California Civil Code section 1798.83, also known as the Shine the Light law, California residents who have provided personal information to a business with which the individual has established a business relationship for personal, family, or household purposes (California Customers) may request information about whether the business has disclosed personal information to any third parties for the third parties direct marketing purposes. Please be aware that we do not disclose personal information to any third parties for their direct marketing purposes as defined by this law. 
California Customers may request further information about our compliance with this law by emailing (privacy[at]github[dot]com). Please note that businesses are required to respond to one request per California Customer each year and may not be required to respond to requests made by means other than through the designated email address. California residents under the age of 18 who are registered users of online sites, services, or applications have a right under California Business and Professions Code Section 22581 to remove, or request and obtain removal of, content or information they have publicly posted. To remove content or information you have publicly posted, please submit a Private Information Removal request. Alternatively, to request that we remove such content or information, please send a detailed description of the specific content or information you wish to have removed to GitHub support. Please be aware that your request does not guarantee complete or comprehensive removal of content or information posted online and that the law may not permit or require removal in certain circumstances. If you have any questions about our privacy practices with respect to California residents, please send an email to privacy[at]github[dot]com. We value the trust you place in us and are committed to handling your personal information with care and respect. If you have any questions or concerns about our privacy practices, please email our Data Protection Officer at dpo[at]github[dot]com. If you live in Colorado, Connecticut, or Virginia you have some additional rights: We do not sell your covered information, as defined under Chapter 603A of the Nevada Revised Statutes. If you still have questions about your covered information or anything else in our Privacy Statement, please send an email to privacy[at]github[dot]com. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Hubble", "subcategory": "Observability" }
[ { "data": "Overview Getting Started Advanced Installation Networking Security Observability Operations Community Contributor Guide Reference BPF and XDP Reference Guide This tutorial guides you through enabling the Hubble UI to access the graphical service map. Note This guide assumes that Cilium and Hubble have been correctly installed in your Kubernetes cluster. Please see Cilium Quick Installation and Setting up Hubble Observability for more information. If unsure, run cilium status and validate that Cilium and Hubble are installed. Enable the Hubble UI by running the following command: If Hubble is already enabled with cilium hubble enable, you must first temporarily disable Hubble with cilium hubble disable. This is because the Hubble UI cannot be added at runtime. ``` cilium hubble enable --ui Found existing CA in secret cilium-ca Patching ConfigMap cilium-config to enable Hubble... Restarted Cilium pods Relay is already deployed Hubble UI is already deployed ``` ``` helm upgrade cilium cilium/cilium --version 1.15.6 \\ --namespace $CILIUM_NAMESPACE \\ --reuse-values \\ --set hubble.relay.enabled=true \\ --set hubble.ui.enabled=true``` Clusters sometimes come with Cilium, Hubble, and Hubble relay already installed. When this is the case you can still use Helm to install only Hubble UI on top of the pre-installed components. You will need to set hubble.ui.standalone.enabled to true and optionally provide a volume to mount Hubble UI client certificates if TLS is enabled on Hubble Relay server side. Below is an example deploying Hubble UI as standalone, with client certificates mounted from a my-hubble-ui-client-certs secret: ``` helm upgrade --install --namespace kube-system cilium cilium/cilium --version 1.15.6 --values - <<EOF agent: false operator: enabled: false cni: install: false hubble: enabled: false relay: enabled: false tls: server: enabled: true ui: enabled: true standalone: enabled: true tls: certsVolume: projected: defaultMode: 0400 sources: secret: name: my-hubble-ui-client-certs items: key: tls.crt path: client.crt key: tls.key path: client.key key: ca.crt path:" }, { "data": "EOF``` Please note that Hubble UI expects the certificate files to be available under the following paths: ``` name: TLSRELAYCACERTFILES value: /var/lib/hubble-ui/certs/hubble-relay-ca.crt name: TLSRELAYCLIENTCERTFILE value: /var/lib/hubble-ui/certs/client.crt name: TLSRELAYCLIENTKEYFILE value: /var/lib/hubble-ui/certs/client.key ``` Keep this in mind when providing the volume containing the certificate. Open the Hubble UI in your browser by running cilium hubble ui. It will automatically set up a port forward to the hubble-ui service in your Kubernetes cluster and make it available on a local port on your machine. ``` cilium hubble ui Forwarding from 0.0.0.0:12000 -> 8081 Forwarding from [::]:12000 -> 8081 ``` Tip The above command will block and continue running while the port forward is active. You can interrupt the command to abort the port forward and re-run the command to make the UI accessible again. If your browser has not automatically opened the UI, open the page http://localhost:12000 in your browser. You should see a screen with an invitation to select a namespace, use the namespace selector dropdown on the left top corner to select a namespace: In this example, we are deploying the Star Wars demo from the Identity-Aware and HTTP-Aware Policy Enforcement guide. 
However, you can apply the same techniques to observe application connectivity dependencies in your own namespaces and clusters, for applications of any type. Once the deployment is ready, issue a request from both spaceships to emulate some traffic. ```
$ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
$ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
``` These requests will then be displayed in the UI as service dependencies between the different pods. At the bottom of the interface, you may also inspect each recent Hubble flow event in your current namespace individually. In order to generate some network traffic, run the connectivity test in a loop: ``` while true; do cilium connectivity test; done ``` To see the traffic in Hubble, open http://localhost:12000/cilium-test in your browser." } ]
{ "category": "Observability and Analysis", "file_name": "github-terms-of-service.md", "project_name": "Hubble", "subcategory": "Observability" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Icinga", "subcategory": "Observability" }
[ { "data": "Learn how to monitor your entire infrastructure with the help of our documentation, demo, FAQ, and blog articles. Become an Icinga pro! | 0 | 1 | |:|:| | Icinga 2Docs | v2.14.2Changelog | | Icinga WebDocs | v2.12.1Changelog | | Icinga DBDocs | v1.2.0Changelog | | Icinga DB WebDocs | v1.1.2Changelog | | Icinga DirectorDocs | v1.11.1Changelog | | 0 | 1 | |:--|:| | Icinga for WindowsDocs | v1.12.3Changelog | | Icinga ReportingDocs | v1.0.2Changelog | | Icinga CubeDocs | v1.3.3Changelog | | Icinga Business Process ModelingDocs | v2.5.1Changelog | | Icinga Certificate MonitoringDocs | v1.3.2Changelog | | Icinga vSphere IntegrationDocs | v1.7.1Changelog | | Icinga Web JIRA IntegrationDocs | v1.3.4Changelog | | Icinga Web Graphite IntegrationDocs | v1.2.4Changelog | | IcingabeatDocs | v7.17.4Changelog | | 0 | 1 | |:-|:--| | Icinga for KubernetesDocs | v0.1.0Changelog | | Icinga for Kubernetes WebDocs | v0.1.0Changelog | | 0 | 1 | |:|-:| | Icinga Director Branches Docs | nan | We have several modules installed that will give you an idea about how Icinga feels in a production environment. The demo system gets automatically set to default every now and then, so dont use it for your production environment. How to get startedMonitoring BasicsIcinga Template LibraryDistributed MonitoringAgent based MonitoringIcinga Web ConfigurationIcinga Web AuthentificationIcinga Director and Agents by Julian Brost | May 22, 2024 When writing a custom check plugin for Icinga 2, there are situations where in addition to observing the current state of a system, taking the past into account as well can be helpful. A common case for this is when the data source provides counter values, i.e. values... by Alvar Penning | Apr 10, 2024 Nearly every operating system comes with at least one kind of service management. On a Unix-based operating system, this is historically part of the init system. While the specific tools have matured over time and there are changes between operating systems, they are... by Feu Mourek | Mar 27, 2024 Have you ever wanted to extend the functionality of Icinga Web to suit your specific needs, but didn't know where to start? Well, you're in luck! Last week, we released a series of tutorial videos on YouTube, hosted by Markus Opolka, Senior Consultant at NETWAYS,... Sometimes its just a missing bracket in your config an extra pair of eyes will surely help! Get in touch with us and the community to figure things out. A monthly digest of the latest Icinga news, releases, articles and community topics. Icinga Stack Infrastructure MonitoringMonitoring AutomationCloud MonitoringMetrics & LogsNotificationsAnalyticsIcinga Integrations Resources Get StartedDocsBlogCommunityCase StudiesNewsletter Services ConsultingTrainingsSupportSubscriptions ConnectForumGitHub Company About UsPartnersPress & MediaContact Us 2009 - 2024 Icinga GmbH" } ]
{ "category": "Observability and Analysis", "file_name": "proto3#json.md", "project_name": "Hubble", "subcategory": "Observability" }
[ { "data": "Overview Getting Started Advanced Installation Networking Security Observability Operations Community Contributor Guide Reference BPF and XDP Reference Guide Cilium and Hubble can both be configured to serve Prometheus metrics. Prometheus is a pluggable metrics collection and storage system and can act as a data source for Grafana, a metrics visualization frontend. Unlike some metrics collectors like statsd, Prometheus requires the collectors to pull metrics from each source. Cilium and Hubble metrics can be enabled independently of each other. Cilium metrics provide insights into the state of Cilium itself, namely of the cilium-agent, cilium-envoy, and cilium-operator processes. To run Cilium with Prometheus metrics enabled, deploy it with the prometheus.enabled=true Helm value set. Cilium metrics are exported under the cilium_ Prometheus namespace. Envoy metrics are exported under the envoy_ Prometheus namespace, of which the Cilium-defined metrics are exported under the envoycilium namespace. When running and collecting in Kubernetes they will be tagged with a pod name and namespace. You can enable metrics for cilium-agent (including Envoy) with the Helm value prometheus.enabled=true. cilium-operator metrics are enabled by default, if you want to disable them, set Helm value operator.prometheus.enabled=false. ``` helm install cilium cilium/cilium --version 1.15.6 \\ --namespace kube-system \\ --set prometheus.enabled=true \\ --set operator.prometheus.enabled=true``` The ports can be configured via prometheus.port, envoy.prometheus.port, or operator.prometheus.port respectively. When metrics are enabled, all Cilium components will have the following annotations. They can be used to signal Prometheus whether to scrape metrics: ``` prometheus.io/scrape: true prometheus.io/port: 9962 ``` To collect Envoy metrics the Cilium chart will create a Kubernetes headless service named cilium-agent with the prometheus.io/scrape:'true' annotation set: ``` prometheus.io/scrape: true prometheus.io/port: 9964 ``` This additional headless service in addition to the other Cilium components is needed as each component can only have one Prometheus scrape and port annotation. Prometheus will pick up the Cilium and Envoy metrics automatically if the following option is set in the scrape_configs section: ``` scrape_configs: job_name: 'kubernetes-pods' kubernetessdconfigs: role: pod relabel_configs: sourcelabels: [metakubernetespodannotationprometheusio_scrape] action: keep regex: true sourcelabels: [address, metakubernetespodannotationprometheusio_port] action: replace regex: ([^:]+)(?::\\d+)?;(\\d+) replacement: ${1}:${2} targetlabel: address_ ``` While Cilium metrics allow you to monitor the state Cilium itself, Hubble metrics on the other hand allow you to monitor the network behavior of your Cilium-managed Kubernetes pods with respect to connectivity and security. To deploy Cilium with Hubble metrics enabled, you need to enable Hubble with hubble.enabled=true and provide a set of Hubble metrics you want to enable via hubble.metrics.enabled. Some of the metrics can also be configured with additional options. See the Hubble exported metrics section for the full list of available metrics and their options. 
``` helm install cilium cilium/cilium --version 1.15.6 \\ --namespace kube-system \\ --set prometheus.enabled=true \\ --set operator.prometheus.enabled=true \\ --set hubble.enabled=true \\ --set hubble.metrics.enableOpenMetrics=true \\ --set hubble.metrics.enabled=\"{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=sourceip\\,sourcenamespace\\,sourceworkload\\,destinationip\\,destinationnamespace\\,destinationworkload\\,traffic_direction}\"``` The port of the Hubble metrics can be configured with the hubble.metrics.port Helm value. Note L7 metrics such as HTTP, are only emitted for pods that enable Layer 7 Protocol Visibility. When deployed with a non-empty hubble.metrics.enabled Helm value, the Cilium chart will create a Kubernetes headless service named hubble-metrics with the prometheus.io/scrape:'true' annotation set: ``` prometheus.io/scrape: true prometheus.io/port: 9965 ``` Set the following options in the scrape_configs section of Prometheus to have it scrape all Hubble metrics from the endpoints automatically: ``` scrape_configs: job_name: 'kubernetes-endpoints' scrape_interval: 30s kubernetessdconfigs: role: endpoints relabel_configs: sourcelabels: [metakubernetesserviceannotationprometheusio_scrape] action: keep regex: true sourcelabels: [address, metakubernetesserviceannotationprometheusio_port] action: replace targetlabel: address_ regex: (.+)(?::\\d+);(\\d+) replacement: $1:$2 ``` Additionally, you can opt-in to OpenMetrics by setting hubble.metrics.enableOpenMetrics=true. Enabling OpenMetrics configures the Hubble metrics endpoint to support exporting metrics in OpenMetrics format when explicitly requested by" }, { "data": "Using OpenMetrics supports additional functionality such as Exemplars, which enables associating metrics with traces by embedding trace IDs into the exported metrics. Prometheus needs to be configured to take advantage of OpenMetrics and will only scrape exemplars when the exemplars storage feature is enabled. OpenMetrics imposes a few additional requirements on metrics names and labels, so this functionality is currently opt-in, though we believe all of the Hubble metrics conform to the OpenMetrics requirements. Cluster Mesh API Server metrics provide insights into the state of the clustermesh-apiserver process, the kvstoremesh process (if enabled), and the sidecar etcd instance. Cluster Mesh API Server metrics are exported under the ciliumclustermeshapiserver_ Prometheus namespace. KVStoreMesh metrics are exported under the ciliumkvstoremesh Prometheus namespace. Etcd metrics are exported under the etcd_ Prometheus namespace. You can enable the metrics for different Cluster Mesh API Server components by setting the following values: clustermesh-apiserver: clustermesh.apiserver.metrics.enabled=true kvstoremesh: clustermesh.apiserver.metrics.kvstoremesh.enabled=true sidecar etcd instance: clustermesh.apiserver.metrics.etcd.enabled=true ``` helm install cilium cilium/cilium --version 1.15.6 \\ --namespace kube-system \\ --set clustermesh.useAPIServer=true \\ --set clustermesh.apiserver.metrics.enabled=true \\ --set clustermesh.apiserver.metrics.kvstoremesh.enabled=true \\ --set clustermesh.apiserver.metrics.etcd.enabled=true``` You can figure the ports by way of clustermesh.apiserver.metrics.port, clustermesh.apiserver.metrics.kvstoremesh.port and clustermesh.apiserver.metrics.etcd.port respectively. 
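For example, a minimal sketch of overriding those ports on an existing Helm-managed installation could look like this; the port numbers are purely illustrative, and the --reuse-values flag assumes Cilium was previously installed with Helm:

```
# Sketch: override the metrics ports of the Cluster Mesh API server components.
# The three --set keys come from the text above; the port values are examples only.
helm upgrade cilium cilium/cilium --version 1.15.6 \
  --namespace kube-system \
  --reuse-values \
  --set clustermesh.apiserver.metrics.port=9962 \
  --set clustermesh.apiserver.metrics.kvstoremesh.port=9964 \
  --set clustermesh.apiserver.metrics.etcd.port=9963
```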
You can automatically create a Prometheus Operator ServiceMonitor by setting clustermesh.apiserver.metrics.serviceMonitor.enabled=true. If you dont have an existing Prometheus and Grafana stack running, you can deploy a stack with: ``` kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.15.6/examples/kubernetes/addons/prometheus/monitoring-example.yaml``` It will run Prometheus and Grafana in the cilium-monitoring namespace. If you have either enabled Cilium or Hubble metrics, they will automatically be scraped by Prometheus. You can then expose Grafana to access it via your browser. ``` kubectl -n cilium-monitoring port-forward service/grafana --address 0.0.0.0 --address :: 3000:3000 ``` Open your browser and access http://localhost:3000/ To expose any metrics, invoke cilium-agent with the --prometheus-serve-addr option. This option takes a IP:Port pair but passing an empty IP (e.g. :9962) will bind the server to all available interfaces (there is usually only one in a container). To customize cilium-agent metrics, configure the --metrics option with \"+metrica -metricb -metric_c\", where +/- means to enable/disable the metric. For example, for really large clusters, users may consider to disable the following two metrics as they generate too much data: ciliumnodeconnectivity_status ciliumnodeconnectivitylatencyseconds You can then configure the agent with --metrics=\"-ciliumnodeconnectivitystatus -ciliumnodeconnectivitylatency_seconds\". | Name | Labels | Default | Description | |:--|:|:-|:--| | endpoint | nan | Enabled | Number of endpoints managed by this agent | | endpointmaxifindex | nan | Disabled | Maximum interface index observed for existing endpoints | | endpointregenerationstotal | outcome | Enabled | Count of all endpoint regenerations that have completed | | endpointregenerationtimestatsseconds | scope | Enabled | Endpoint regeneration time stats | | endpoint_state | state | Enabled | Count of all endpoints | Name Labels Default Description endpoint Enabled Number of endpoints managed by this agent endpointmaxifindex Disabled Maximum interface index observed for existing endpoints endpointregenerationstotal outcome Enabled Count of all endpoint regenerations that have completed endpointregenerationtimestatsseconds scope Enabled Endpoint regeneration time stats endpoint_state state Enabled Count of all endpoints The default enabled status of endpointmaxifindex is dynamic. On earlier kernels (typically with version lower than 5.10), Cilium must store the interface index for each endpoint in the conntrack map, which reserves 16 bits for this field. If Cilium is running on such a kernel, this metric will be enabled by default. It can be used to implement an alert if the ifindex is approaching the limit of 65535. 
This may be the case in instances of significant Endpoint" }, { "data": "| Name | Labels | Default | Description | |:-|:|:-|:-| | serviceseventstotal | nan | Enabled | Number of services events labeled by action type | Name Labels Default Description serviceseventstotal Enabled Number of services events labeled by action type | Name | Labels | Default | Description | |:--|:|:-|:--| | unreachable_nodes | nan | Enabled | Number of nodes that cannot be reached | | unreachablehealthendpoints | nan | Enabled | Number of health endpoints that cannot be reached | Name Labels Default Description unreachable_nodes Enabled Number of nodes that cannot be reached unreachablehealthendpoints Enabled Number of health endpoints that cannot be reached | Name | Labels | Default | Description | |:-|:--|:-|:--| | nodeconnectivitystatus | sourcecluster, sourcenodename, targetcluster, targetnodename, targetnodetype, type | Enabled | The last observed status of both ICMP and HTTP connectivity between the current Cilium agent and other Cilium nodes | | nodeconnectivitylatencyseconds | addresstype, protocol, sourcecluster, sourcenodename, targetcluster, targetnodeip, targetnodename, targetnodetype, type | Enabled | The last observed latency between the current Cilium agent and other Cilium nodes in seconds | Name Labels Default Description nodeconnectivitystatus sourcecluster, sourcenodename, targetcluster, targetnodename, targetnodetype, type Enabled The last observed status of both ICMP and HTTP connectivity between the current Cilium agent and other Cilium nodes nodeconnectivitylatency_seconds addresstype, protocol, sourcecluster, sourcenodename, targetcluster, targetnodeip, targetnodename, targetnode_type, type Enabled The last observed latency between the current Cilium agent and other Cilium nodes in seconds | Name | Labels | Default | Description | |:--|:-|:-|:| | clustermeshglobalservices | sourcecluster, sourcenode_name | Enabled | The total number of global services in the cluster mesh | | clustermeshremoteclusters | sourcecluster, sourcenode_name | Enabled | The total number of remote clusters meshed with the local cluster | | clustermeshremoteclusterfailures | sourcecluster, sourcenodename, target_cluster | Enabled | The total number of failures related to the remote cluster | | clustermeshremoteclusternodes | sourcecluster, sourcenodename, target_cluster | Enabled | The total number of nodes in the remote cluster | | clustermeshremoteclusterlastfailurets | sourcecluster, sourcenodename, target_cluster | Enabled | The timestamp of the last failure of the remote cluster | | clustermeshremoteclusterreadinessstatus | sourcecluster, sourcenodename, targetcluster | Enabled | The readiness status of the remote cluster | Name Labels Default Description clustermeshglobalservices sourcecluster, sourcenode_name Enabled The total number of global services in the cluster mesh clustermeshremoteclusters sourcecluster, sourcenode_name Enabled The total number of remote clusters meshed with the local cluster clustermeshremotecluster_failures sourcecluster, sourcenodename, targetcluster Enabled The total number of failures related to the remote cluster clustermeshremotecluster_nodes sourcecluster, sourcenodename, targetcluster Enabled The total number of nodes in the remote cluster clustermeshremoteclusterlastfailure_ts sourcecluster, sourcenodename, targetcluster Enabled The timestamp of the last failure of the remote cluster clustermeshremoteclusterreadinessstatus sourcecluster, sourcenodename, targetcluster Enabled 
The readiness status of the remote cluster | Name | Labels | Default | Description | |:|:-|:-|:-| | datapathconntrackdumpresetstotal | area, name, family | Enabled | Number of conntrack dump resets. Happens when a BPF entry gets removed while dumping the map is in progress. | | datapathconntrackgcrunstotal | status | Enabled | Number of times that the conntrack garbage collector process was run | | datapathconntrackgckeyfallbacks_total | nan | Enabled | The number of alive and deleted conntrack entries at the end of a garbage collector run labeled by datapath family | | datapathconntrackgc_entries | family | Enabled | The number of alive and deleted conntrack entries at the end of a garbage collector run | | datapathconntrackgcdurationseconds | status | Enabled | Duration in seconds of the garbage collector process | Name Labels Default Description datapathconntrackdumpresetstotal area, name, family Enabled Number of conntrack dump resets. Happens when a BPF entry gets removed while dumping the map is in" }, { "data": "datapathconntrackgcrunstotal status Enabled Number of times that the conntrack garbage collector process was run datapathconntrackgckeyfallbacks_total Enabled The number of alive and deleted conntrack entries at the end of a garbage collector run labeled by datapath family datapathconntrackgc_entries family Enabled The number of alive and deleted conntrack entries at the end of a garbage collector run datapathconntrackgcdurationseconds status Enabled Duration in seconds of the garbage collector process | Name | Labels | Default | Description | |:--|:|:-|:-| | ipsecxfrmerror | error, type | Enabled | Total number of xfrm errors | | ipsec_keys | nan | Enabled | Number of keys in use | | ipsecxfrmstates | direction | Enabled | Number of XFRM states | | ipsecxfrmpolicies | direction | Enabled | Number of XFRM policies | Name Labels Default Description ipsecxfrmerror error, type Enabled Total number of xfrm errors ipsec_keys Enabled Number of keys in use ipsecxfrmstates direction Enabled Number of XFRM states ipsecxfrmpolicies direction Enabled Number of XFRM policies | Name | Labels | Default | Description | |:--|:|:-|:| | bpfsyscallduration_seconds | operation, outcome | Disabled | Duration of eBPF system call performed | | bpfmapopstotal | mapName (deprecated), mapname, operation, outcome | Enabled | Number of eBPF map operations performed. mapName is deprecated and will be removed in 1.10. Use map_name instead. | | bpfmappressure | map_name | Enabled | Map pressure is defined as a ratio of the required map size compared to its configured size. Values < 1.0 indicate the maps utilization, while values >= 1.0 indicate that the map is full. Policy map metrics are only reported when the ratio is over 0.1, ie 10% full. | | bpfmapcapacity | map_group | Enabled | Maximum size of eBPF maps by group of maps (type of map that have the same max capacity size). Map types with size of 65536 are not emitted, missing map types can be assumed to be 65536. | | bpfmapsvirtualmemorymax_bytes | nan | Enabled | Max memory used by eBPF maps installed in the system | | bpfprogsvirtualmemorymax_bytes | nan | Enabled | Max memory used by eBPF programs installed in the system | Name Labels Default Description bpfsyscallduration_seconds operation, outcome Disabled Duration of eBPF system call performed bpfmapops_total mapName (deprecated), map_name, operation, outcome Enabled Number of eBPF map operations performed. mapName is deprecated and will be removed in 1.10. Use map_name instead. 
bpfmappressure map_name Enabled Map pressure is defined as a ratio of the required map size compared to its configured size. Values < 1.0 indicate the maps utilization, while values >= 1.0 indicate that the map is full. Policy map metrics are only reported when the ratio is over 0.1, ie 10% full. bpfmapcapacity map_group Enabled Maximum size of eBPF maps by group of maps (type of map that have the same max capacity size). Map types with size of 65536 are not emitted, missing map types can be assumed to be 65536. bpfmapsvirtualmemorymax_bytes Enabled Max memory used by eBPF maps installed in the system bpfprogsvirtualmemorymax_bytes Enabled Max memory used by eBPF programs installed in the system Both bpfmapsvirtualmemorymaxbytes and bpfprogsvirtualmemorymaxbytes are currently reporting the system-wide memory usage of eBPF that is directly and not directly managed by Cilium. This might change in the future and only report the eBPF memory usage directly managed by" }, { "data": "" }, { "data": "| Name | Labels | Default | Description | |:--|:|:-|:| | dropcounttotal | reason, direction | Enabled | Total dropped packets | | dropbytestotal | reason, direction | Enabled | Total dropped bytes | | forwardcounttotal | direction | Enabled | Total forwarded packets | | forwardbytestotal | direction | Enabled | Total forwarded bytes | Name Labels Default Description dropcounttotal reason, direction Enabled Total dropped packets dropbytestotal reason, direction Enabled Total dropped bytes forwardcounttotal direction Enabled Total forwarded packets forwardbytestotal direction Enabled Total forwarded bytes | Name | Labels | Default | Description | |:|:|:-|:-| | policy | nan | Enabled | Number of policies currently loaded | | policyregenerationtotal | nan | Enabled | Total number of policies regenerated successfully | | policyregenerationtimestatsseconds | scope | Enabled | Policy regeneration time stats labeled by the scope | | policymaxrevision | nan | Enabled | Highest policy revision number in the agent | | policychangetotal | nan | Enabled | Number of policy changes by outcome | | policyendpointenforcement_status | nan | Enabled | Number of endpoints labeled by policy enforcement status | | policyimplementationdelay | source | Enabled | Time in seconds between a policy change and it being fully deployed into the datapath, labeled by the policys source | Name Labels Default Description policy Enabled Number of policies currently loaded policyregenerationtotal Enabled Total number of policies regenerated successfully policyregenerationtimestatsseconds scope Enabled Policy regeneration time stats labeled by the scope policymaxrevision Enabled Highest policy revision number in the agent policychangetotal Enabled Number of policy changes by outcome policyendpointenforcement_status Enabled Number of endpoints labeled by policy enforcement status policyimplementationdelay source Enabled Time in seconds between a policy change and it being fully deployed into the datapath, labeled by the policys source | Name | Labels | Default | Description | |:|:--|:-|:-| | proxy_redirects | protocol | Enabled | Number of redirects installed for endpoints | | proxyupstreamreplyseconds | error, protocoll7, scope | Enabled | Seconds waited for upstream server to reply to a request | | proxydatapathupdatetimeouttotal | nan | Disabled | Number of total datapath update timeouts due to FQDN IP updates | | policyl7total | rule, proxy_type | Enabled | Number of total L7 requests/responses | Name Labels Default Description 
proxy_redirects protocol Enabled Number of redirects installed for endpoints proxyupstreamreply_seconds error, protocol_l7, scope Enabled Seconds waited for upstream server to reply to a request proxydatapathupdatetimeouttotal Disabled Number of total datapath update timeouts due to FQDN IP updates policyl7total rule, proxy_type Enabled Number of total L7 requests/responses | Name | Labels | Default | Description | |:|:|:-|:-| | identity | type | Enabled | Number of identities currently allocated | | ipcacheerrorstotal | type, error | Enabled | Number of errors interacting with the ipcache | | ipcacheeventstotal | type | Enabled | Number of events interacting with the ipcache | Name Labels Default Description identity type Enabled Number of identities currently allocated ipcacheerrorstotal type, error Enabled Number of errors interacting with the ipcache ipcacheeventstotal type Enabled Number of events interacting with the ipcache | Name | Labels | Default | Description | |:-|:|:-|:-| | event_ts | source | Enabled | Last timestamp when Cilium received an event from a control plane source, per resource and per action | | k8seventlag_seconds | source | Disabled | Lag for Kubernetes events - computed value between receiving a CNI ADD event from kubelet and a Pod event received from kube-api-server | Name Labels Default Description event_ts source Enabled Last timestamp when Cilium received an event from a control plane source, per resource and per action k8seventlag_seconds source Disabled Lag for Kubernetes events - computed value between receiving a CNI ADD event from kubelet and a Pod event received from kube-api-server | Name | Labels | Default | Description | |:-|:-|:-|:| | controllersrunstotal | status | Enabled | Number of times that a controller process was run | | controllersrunsduration_seconds | status | Enabled | Duration in seconds of the controller process | | controllersgrouprunstotal | status, groupname | Enabled | Number of times that a controller process was run, labeled by controller group name | | controllers_failing | nan | Enabled | Number of failing controllers | Name Labels Default Description controllersrunstotal status Enabled Number of times that a controller process was run controllersrunsduration_seconds status Enabled Duration in seconds of the controller process controllersgroupruns_total status, group_name Enabled Number of times that a controller process was run, labeled by controller group name controllers_failing Enabled Number of failing controllers The controllersgroupruns_total metric reports the success and failure count of each controller within the system, labeled by controller group name and completion status. Due to the large number of controllers, enabling this metric is on a per-controller basis. This is configured using an allow-list which is passed as the controller-group-metrics configuration flag, or the prometheus.controllerGroupMetrics helm value. The current recommended default set of group names can be found in the values file of the Cilium Helm chart. The special names all and none are supported. 
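As an illustration of that allow-list in Helm values form, a sketch might look like the following; the group names shown are examples only, the authoritative default list lives in the chart's values file, and the special names all and none behave as described above:

```
# values.yaml sketch: emit controller group run metrics only for selected groups.
prometheus:
  enabled: true
  controllerGroupMetrics:
    # Example group names for illustration; replace with the groups you care about,
    # or use "all" / "none".
    - write-cni-file
    - sync-host-ips
```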
| Name | Labels | Default | Description | |:--|:-|:-|:--| | subprocessstarttotal | subsystem | Enabled | Number of times that Cilium has started a subprocess | Name Labels Default Description subprocessstarttotal subsystem Enabled Number of times that Cilium has started a subprocess | Name | Labels | Default | Description | |:|:-|:-|:| | kuberneteseventsreceived_total | scope, action, validity, equal | Enabled | Number of Kubernetes events received | | kuberneteseventstotal | scope, action, outcome | Enabled | Number of Kubernetes events processed | | k8scnpstatuscompletionseconds | attempts, outcome | Enabled | Duration in seconds in how long it took to complete a CNP status update | | k8sterminatingendpointseventstotal | nan | Enabled | Number of terminating endpoint events received from Kubernetes | Name Labels Default Description kuberneteseventsreceived_total scope, action, validity, equal Enabled Number of Kubernetes events received kuberneteseventstotal scope, action, outcome Enabled Number of Kubernetes events processed k8scnpstatuscompletionseconds attempts, outcome Enabled Duration in seconds in how long it took to complete a CNP status update k8sterminatingendpointseventstotal Enabled Number of terminating endpoint events received from Kubernetes | Name | Labels | Default | Description | |:--|:--|:-|:--| | k8sclientapilatencytime_seconds | path, method | Enabled | Duration of processed API calls labeled by path and method | | k8sclientratelimiterduration_seconds | path, method | Enabled | Kubernetes client rate limiter latency in seconds. Broken down by path and method | | k8sclientapicallstotal | host, method, return_code | Enabled | Number of API calls made to kube-apiserver labeled by host, method and return code | Name Labels Default Description k8sclientapilatencytime_seconds path, method Enabled Duration of processed API calls labeled by path and method k8sclientratelimiterduration_seconds path, method Enabled Kubernetes client rate limiter latency in seconds. Broken down by path and method k8sclientapicallstotal host, method, return_code Enabled Number of API calls made to kube-apiserver labeled by host, method and return code | Name | Labels | Default | Description | |:|:|:-|:-| | k8sworkqueuedepth | name | Enabled | Current depth of workqueue | | k8sworkqueueadds_total | name | Enabled | Total number of adds handled by workqueue | | k8sworkqueuequeuedurationseconds | name | Enabled | Duration in seconds an item stays in workqueue prior to request | | k8sworkqueueworkdurationseconds | name | Enabled | Duration in seconds to process an item from workqueue | | k8sworkqueueunfinishedworkseconds | name | Enabled | Duration in seconds of work in progress that hasnt been observed by work_duration. Large values indicate stuck threads. 
You can deduce the number of stuck threads by observing the rate at which this value" }, { "data": "| | k8sworkqueuelongestrunningprocessor_seconds | name | Enabled | Duration in seconds of the longest running processor for workqueue | | k8sworkqueueretries_total | name | Enabled | Total number of retries handled by workqueue | Name Labels Default Description k8sworkqueuedepth name Enabled Current depth of workqueue k8sworkqueueadds_total name Enabled Total number of adds handled by workqueue k8sworkqueuequeuedurationseconds name Enabled Duration in seconds an item stays in workqueue prior to request k8sworkqueueworkdurationseconds name Enabled Duration in seconds to process an item from workqueue k8sworkqueueunfinishedworkseconds name Enabled Duration in seconds of work in progress that hasnt been observed by work_duration. Large values indicate stuck threads. You can deduce the number of stuck threads by observing the rate at which this value increases. k8sworkqueuelongestrunningprocessor_seconds name Enabled Duration in seconds of the longest running processor for workqueue k8sworkqueueretries_total name Enabled Total number of retries handled by workqueue | Name | Labels | Default | Description | |:|:|:-|:--| | ipam_capacity | family | Enabled | Total number of IPs in the IPAM pool labeled by family | | ipameventstotal | nan | Enabled | Number of IPAM events received labeled by action and datapath family type | | ip_addresses | family | Enabled | Number of allocated IP addresses | Name Labels Default Description ipam_capacity family Enabled Total number of IPs in the IPAM pool labeled by family ipameventstotal Enabled Number of IPAM events received labeled by action and datapath family type ip_addresses family Enabled Number of allocated IP addresses | Name | Labels | Default | Description | |:|:|:-|:-| | kvstoreoperationsduration_seconds | action, kind, outcome, scope | Enabled | Duration of kvstore operation | | kvstoreeventsqueue_seconds | action, scope | Enabled | Seconds waited before a received event was queued | | kvstorequorumerrors_total | error | Enabled | Number of quorum errors | | kvstoresyncerrorstotal | scope, sourcecluster | Enabled | Number of times synchronization to the kvstore failed | | kvstoresyncqueuesize | scope, sourcecluster | Enabled | Number of elements queued for synchronization in the kvstore | | kvstoreinitialsynccompleted | scope, sourcecluster, action | Enabled | Whether the initial synchronization from/to the kvstore has completed | Name Labels Default Description kvstoreoperationsduration_seconds action, kind, outcome, scope Enabled Duration of kvstore operation kvstoreeventsqueue_seconds action, scope Enabled Seconds waited before a received event was queued kvstorequorumerrors_total error Enabled Number of quorum errors kvstoresyncerrors_total scope, source_cluster Enabled Number of times synchronization to the kvstore failed kvstoresyncqueue_size scope, source_cluster Enabled Number of elements queued for synchronization in the kvstore kvstoreinitialsync_completed scope, source_cluster, action Enabled Whether the initial synchronization from/to the kvstore has completed | Name | Labels | Default | Description | |:-|:|:-|:--| | agentbootstrapseconds | scope, outcome | Enabled | Duration of various bootstrap phases | | apiprocesstime_seconds | nan | Enabled | Processing time of all the API calls made to the cilium-agent, labeled by API method, API path and returned HTTP code. 
| Name Labels Default Description agentbootstrapseconds scope, outcome Enabled Duration of various bootstrap phases apiprocesstime_seconds Enabled Processing time of all the API calls made to the cilium-agent, labeled by API method, API path and returned HTTP" }, { "data": "| Name | Labels | Default | Description | |:|:|:-|:-| | fqdngcdeletions_total | nan | Enabled | Number of FQDNs that have been cleaned on FQDN garbage collector job | | fqdnactivenames | endpoint | Disabled | Number of domains inside the DNS cache that have not expired (by TTL), per endpoint | | fqdnactiveips | endpoint | Disabled | Number of IPs inside the DNS cache associated with a domain that has not expired (by TTL), per endpoint | | fqdnalivezombie_connections | endpoint | Disabled | Number of IPs associated with domains that have expired (by TTL) yet still associated with an active connection (aka zombie), per endpoint | Name Labels Default Description fqdngcdeletions_total Enabled Number of FQDNs that have been cleaned on FQDN garbage collector job fqdnactivenames endpoint Disabled Number of domains inside the DNS cache that have not expired (by TTL), per endpoint fqdnactiveips endpoint Disabled Number of IPs inside the DNS cache associated with a domain that has not expired (by TTL), per endpoint fqdnalivezombie_connections endpoint Disabled Number of IPs associated with domains that have expired (by TTL) yet still associated with an active connection (aka zombie), per endpoint | Name | Labels | Default | Description | |:--|:|:-|:-| | jobserrorstotal | job | Enabled | Number of jobs runs that returned an error | | jobsoneshotrunseconds | job | Enabled | Histogram of one shot job run duration | | jobstimerrun_seconds | job | Enabled | Histogram of timer job run duration | | jobsobserverrun_seconds | job | Enabled | Histogram of observer job run duration | Name Labels Default Description jobserrorstotal job Enabled Number of jobs runs that returned an error jobsoneshotrunseconds job Enabled Histogram of one shot job run duration jobstimerrun_seconds job Enabled Histogram of timer job run duration jobsobserverrun_seconds job Enabled Histogram of observer job run duration | Name | Labels | Default Description | |:--|:|:--| | cidrgroups_referenced | nan | Enabled Number of CNPs and CCNPs referencing at least one CiliumCIDRGroup. CNPs with empty or non-existing CIDRGroupRefs are not considered | | cidrgrouptranslationtimestatsseconds | nan | Disabled CIDRGroup translation time stats | Name Labels Default Description cidrgroups_referenced Enabled Number of CNPs and CCNPs referencing at least one CiliumCIDRGroup. 
CNPs with empty or non-existing CIDRGroupRefs are not considered cidrgrouptranslationtimestatsseconds Disabled CIDRGroup translation time stats | Name | Labels | Default | Description | |:|:-|:-|:| | apilimiteradjustmentfactor | apicall | Enabled | Most recent adjustment factor for automatic adjustment | | apilimiterprocessedrequeststotal | apicall, outcome, returncode | Enabled | Total number of API requests processed | | apilimiterprocessingdurationseconds | api_call, value | Enabled | Mean and estimated processing duration in seconds | | apilimiterratelimit | apicall, value | Enabled | Current rate limiting configuration (limit and burst) | | apilimiterrequestsinflight | api_call value | Enabled | Current and maximum allowed number of requests in flight | | apilimiterwaitdurationseconds | api_call, value | Enabled | Mean, min, and max wait duration | | apilimiterwaithistorydurationseconds | apicall | Disabled | Histogram of wait duration per API call processed | Name Labels Default Description apilimiteradjustment_factor api_call Enabled Most recent adjustment factor for automatic adjustment apilimiterprocessedrequeststotal apicall, outcome, returncode Enabled Total number of API requests processed apilimiterprocessingdurationseconds api_call, value Enabled Mean and estimated processing duration in seconds apilimiterrate_limit api_call, value Enabled Current rate limiting configuration (limit and burst) apilimiterrequestsinflight api_call value Enabled Current and maximum allowed number of requests in flight apilimiterwaitdurationseconds api_call, value Enabled Mean, min, and max wait duration apilimiterwaithistoryduration_seconds api_call Disabled Histogram of wait duration per API call processed cilium-operator can be configured to serve metrics by running with the option --enable-metrics. By default, the operator will expose metrics on port 9963, the port can be changed with the option --operator-prometheus-serve-addr. All metrics are exported under the ciliumoperator Prometheus namespace. Note IPAM metrics are all Enabled only if using the AWS, Alibabacloud or Azure IPAM plugins. | Name | Labels | Default | Description | |:-|:-|:-|:--| | ipam_ips | type | Enabled | Number of IPs allocated | | ipamipallocationops | subnetid | Enabled | Number of IP allocation operations. | | ipamipreleaseops | subnetid | Enabled | Number of IP release" }, { "data": "| | ipaminterfacecreationops | subnetid | Enabled | Number of interfaces creation operations. | | ipamreleasedurationseconds | type, status, subnetid | Enabled | Release ip or interface latency in seconds | | ipamallocationdurationseconds | type, status, subnetid | Enabled | Allocation ip or interface latency in seconds | | ipamavailableinterfaces | nan | Enabled | Number of interfaces with addresses available | | ipam_nodes | category | Enabled | Number of nodes by category { total | in-deficit | at-capacity } | | ipamresynctotal | nan | Enabled | Number of synchronization operations with external IPAM API | | ipamapidurationseconds | operation, responsecode | Enabled | Duration of interactions with external IPAM API. | | ipamapiratelimitduration_seconds | operation | Enabled | Duration of rate limiting while accessing external IPAM API | | ipamavailableips | target_node | Enabled | Number of available IPs on a node (taking into account plugin specific NIC/Address limits). | | ipamusedips | target_node | Enabled | Number of currently used IPs on a node. 
| | ipamneededips | target_node | Enabled | Number of IPs needed to satisfy allocation on a node. | Name Labels Default Description ipam_ips type Enabled Number of IPs allocated ipamipallocation_ops subnet_id Enabled Number of IP allocation operations. ipamiprelease_ops subnet_id Enabled Number of IP release operations. ipaminterfacecreation_ops subnet_id Enabled Number of interfaces creation operations. ipamreleaseduration_seconds type, status, subnet_id Enabled Release ip or interface latency in seconds ipamallocationduration_seconds type, status, subnet_id Enabled Allocation ip or interface latency in seconds ipamavailableinterfaces Enabled Number of interfaces with addresses available ipam_nodes category Enabled Number of nodes by category { total | in-deficit | at-capacity } ipamresynctotal Enabled Number of synchronization operations with external IPAM API ipamapiduration_seconds operation, response_code Enabled Duration of interactions with external IPAM API. ipamapiratelimitduration_seconds operation Enabled Duration of rate limiting while accessing external IPAM API ipamavailableips target_node Enabled Number of available IPs on a node (taking into account plugin specific NIC/Address limits). ipamusedips target_node Enabled Number of currently used IPs on a node. ipamneededips target_node Enabled Number of IPs needed to satisfy allocation on a node. | Name | Labels | Default | Description | |:-|:|:-|:| | lbipamconflictingpools_total | nan | Enabled | Number of conflicting pools | | lbipamipsavailable_total | pool | Enabled | Number of available IPs per pool | | lbipamipsused_total | pool | Enabled | Number of used IPs per pool | | lbipamservicesmatching_total | nan | Enabled | Number of matching services | | lbipamservicesunsatisfied_total | nan | Enabled | Number of services which did not get requested IPs | Name Labels Default Description lbipamconflictingpools_total Enabled Number of conflicting pools lbipamipsavailable_total pool Enabled Number of available IPs per pool lbipamipsused_total pool Enabled Number of used IPs per pool lbipamservicesmatching_total Enabled Number of matching services lbipamservicesunsatisfied_total Enabled Number of services which did not get requested IPs | Name | Labels | Default | Description | |:--|:-|:-|:| | controllersgrouprunstotal | status, groupname | Enabled | Number of times that a controller process was run, labeled by controller group name | Name Labels Default Description controllersgroupruns_total status, group_name Enabled Number of times that a controller process was run, labeled by controller group name The controllersgroupruns_total metric reports the success and failure count of each controller within the system, labeled by controller group name and completion status. Due to the large number of controllers, enabling this metric is on a per-controller basis. This is configured using an allow-list which is passed as the controller-group-metrics configuration flag, or the prometheus.controllerGroupMetrics helm value. The current recommended default set of group names can be found in the values file of the Cilium Helm chart. The special names all and none are" }, { "data": "Hubble metrics are served by a Hubble instance running inside cilium-agent. The command-line options to configure them are --enable-hubble, --hubble-metrics-server, and --hubble-metrics. --hubble-metrics-server takes an IP:Port pair, but passing an empty IP (e.g. :9965) will bind the server to all available interfaces. 
--hubble-metrics takes a comma-separated list of metrics. Some metrics can take additional semicolon-separated options per metric, e.g. --hubble-metrics=\"dns:query;ignoreAAAA,http:destinationContext=workload-name\" will enable the dns metric with the query and ignoreAAAA options, and the http metric with the destinationContext=workload-name option. Hubble metrics support configuration via context options. Supported context options for all metrics: sourceContext - Configures the source label on metrics for both egress and ingress traffic. sourceEgressContext - Configures the source label on metrics for egress traffic (takes precedence over sourceContext). sourceIngressContext - Configures the source label on metrics for ingress traffic (takes precedence over sourceContext). destinationContext - Configures the destination label on metrics for both egress and ingress traffic. destinationEgressContext - Configures the destination label on metrics for egress traffic (takes precedence over destinationContext). destinationIngressContext - Configures the destination label on metrics for ingress traffic (takes precedence over destinationContext). labelsContext - Configures a list of labels to be enabled on metrics. There are also some context options that are specific to certain metrics. See the documentation for the individual metrics to see what options are available for each. See below for details on each of the different context options. Most Hubble metrics can be configured to add the source and/or destination context as a label using the sourceContext and destinationContext options. The possible values are: | Option Value | Description | |:|:| | identity | All Cilium security identity labels | | namespace | Kubernetes namespace name | | pod | Kubernetes pod name and namespace name in the form of namespace/pod. | | pod-name | Kubernetes pod name. | | dns | All known DNS names of the source or destination (comma-separated) | | ip | The IPv4 or IPv6 address | | reserved-identity | Reserved identity label. | | workload | Kubernetes pods workload name and namespace in the form of namespace/workload-name. | | workload-name | Kubernetes pods workload name (workloads are: Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift), etc). | | app | Kubernetes pods app name, derived from pod labels (app.kubernetes.io/name, k8s-app, or app). | Option Value Description identity All Cilium security identity labels namespace Kubernetes namespace name pod Kubernetes pod name and namespace name in the form of namespace/pod. pod-name Kubernetes pod name. dns All known DNS names of the source or destination (comma-separated) ip The IPv4 or IPv6 address reserved-identity Reserved identity label. workload Kubernetes pods workload name and namespace in the form of namespace/workload-name. workload-name Kubernetes pods workload name (workloads are: Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift), etc). app Kubernetes pods app name, derived from pod labels (app.kubernetes.io/name, k8s-app, or app). When specifying the source and/or destination context, multiple contexts can be specified by separating them via the | symbol. When multiple are specified, then the first non-empty value is added to the metric as a label. For example, a metric configuration of flow:destinationContext=dns|ip will first try to use the DNS name of the target for the label. 
If no DNS name is known for the target, it will fall back and use the IP address of the target instead. Note There are 3 cases in which the identity label list contains multiple reserved labels: reserved:kube-apiserver and reserved:host reserved:kube-apiserver and reserved:remote-node reserved:kube-apiserver and reserved:world In all of these 3 cases, reserved-identity context returns" }, { "data": "Hubble metrics can also be configured with a labelsContext which allows providing a list of labels that should be added to the metric. Unlike sourceContext and destinationContext, instead of different values being put into the same metric label, the labelsContext puts them into different label values. | Option Value | Description | |:--|:-| | source_ip | The source IP of the flow. | | source_namespace | The namespace of the pod if the flow source is from a Kubernetes pod. | | source_pod | The pod name if the flow source is from a Kubernetes pod. | | source_workload | The name of the source pods workload (Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift)). | | sourceworkloadkind | The kind of the source pods workload, for example, Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift). | | source_app | The app name of the source pod, derived from pod labels (app.kubernetes.io/name, k8s-app, or app). | | destination_ip | The destination IP of the flow. | | destination_namespace | The namespace of the pod if the flow destination is from a Kubernetes pod. | | destination_pod | The pod name if the flow destination is from a Kubernetes pod. | | destination_workload | The name of the destination pods workload (Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift)). | | destinationworkloadkind | The kind of the destination pods workload, for example, Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift). | | destination_app | The app name of the source pod, derived from pod labels (app.kubernetes.io/name, k8s-app, or app). | | traffic_direction | Identifies the traffic direction of the flow. Possible values are ingress, egress and unknown. | Option Value Description source_ip The source IP of the flow. source_namespace The namespace of the pod if the flow source is from a Kubernetes pod. source_pod The pod name if the flow source is from a Kubernetes pod. source_workload The name of the source pods workload (Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift)). sourceworkloadkind The kind of the source pods workload, for example, Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift). source_app The app name of the source pod, derived from pod labels (app.kubernetes.io/name, k8s-app, or app). destination_ip The destination IP of the flow. destination_namespace The namespace of the pod if the flow destination is from a Kubernetes pod. destination_pod The pod name if the flow destination is from a Kubernetes pod. destination_workload The name of the destination pods workload (Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift)). destinationworkloadkind The kind of the destination pods workload, for example, Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift). 
destination_app The app name of the source pod, derived from pod labels (app.kubernetes.io/name, k8s-app, or app). traffic_direction Identifies the traffic direction of the flow. Possible values are ingress, egress and unknown. When specifying the flow context, multiple values can be specified by separating them via the , symbol. All labels listed are included in the metric, even if empty. For example, a metric configuration of http:labelsContext=sourcenamespace,sourcepod will add the sourcenamespace and sourcepod labels to all Hubble HTTP metrics. Note To limit metrics cardinality hubble will remove data series bound to specific pod after one minute from pod" }, { "data": "Metric is considered to be bound to a specific pod when at least one of the following conditions is met: sourceContext is set to pod and metric series has source label matching <podnamespace>/<podname> destinationContext is set to pod and metric series has destination label matching <podnamespace>/<podname> labelsContext contains both sourcenamespace and sourcepod and metric series labels match namespace and name of deleted pod labelsContext contains both destinationnamespace and destinationpod and metric series labels match namespace and name of deleted pod Hubble metrics are exported under the hubble_ Prometheus namespace. This metric, unlike other ones, is not directly tied to network flows. Its enabled if any of the other metrics is enabled. | Name | Labels | Default | Description | |:|:|:-|:-| | losteventstotal | source | Enabled | Number of lost events | Name Labels Default Description losteventstotal source Enabled Number of lost events perfeventring_buffer observereventsqueue hubbleringbuffer | Name | Labels | Default | Description | |:-|:-|:-|:| | dnsqueriestotal | rcode, qtypes, ips_returned | Disabled | Number of DNS queries observed | | dnsresponsestotal | rcode, qtypes, ips_returned | Disabled | Number of DNS responses observed | | dnsresponsetypes_total | type, qtypes | Disabled | Number of DNS response types | Name Labels Default Description dnsqueriestotal rcode, qtypes, ips_returned Disabled Number of DNS queries observed dnsresponsestotal rcode, qtypes, ips_returned Disabled Number of DNS responses observed dnsresponsetypes_total type, qtypes Disabled Number of DNS response types | Option Key | Option Value | Description | |:-|:|:--| | query | nan | Include the query as label query | | ignoreAAAA | nan | Ignore any AAAA requests/responses | Option Key Option Value Description query N/A Include the query as label query ignoreAAAA N/A Ignore any AAAA requests/responses This metric supports Context Options. | Name | Labels | Default | Description | |:--|:--|:-|:-| | drop_total | reason, protocol | Disabled | Number of drops | Name Labels Default Description drop_total reason, protocol Disabled Number of drops This metric supports Context Options. | Name | Labels | Default | Description | |:-|:--|:-|:--| | flowsprocessedtotal | type, subtype, verdict | Disabled | Total number of flows processed | Name Labels Default Description flowsprocessedtotal type, subtype, verdict Disabled Total number of flows processed This metric supports Context Options. This metric counts all non-reply flows containing the reserved:world label in their destination identity. By default, dropped flows are counted if and only if the drop reason is Policy denied. Set any-drop option to count all dropped flows. 
| Name | Labels | Default | Description | |:|:|:-|:--| | flowstoworld_total | protocol, verdict | Disabled | Total number of flows to reserved:world. | Name Labels Default Description flowstoworld_total protocol, verdict Disabled Total number of flows to reserved:world. | Option Key | Option Value | Description | |:-|:|:-| | any-drop | nan | Count any dropped flows regardless of the drop reason. | | port | nan | Include the destination port as label port. | | syn-only | nan | Only count non-reply SYNs for TCP flows. | Option Key Option Value Description any-drop N/A Count any dropped flows regardless of the drop reason. port N/A Include the destination port as label port. syn-only N/A Only count non-reply SYNs for TCP flows. This metric supports Context Options. Deprecated, use httpV2 instead. These metrics can not be enabled at the same time as httpV2. | Name | Labels | Default | Description | |:|:|:-|:-| | httprequeststotal | method, protocol, reporter | Disabled | Count of HTTP requests | | httpresponsestotal | method, status, reporter | Disabled | Count of HTTP responses | | httprequestduration_seconds | method, reporter | Disabled | Histogram of HTTP request duration in seconds | Name Labels Default Description httprequeststotal method, protocol, reporter Disabled Count of HTTP requests httpresponsestotal method, status, reporter Disabled Count of HTTP responses httprequestduration_seconds method, reporter Disabled Histogram of HTTP request duration in seconds method is the HTTP method of the request/response. protocol is the HTTP protocol of the request, (For example: HTTP/1.1, HTTP/2). status is the HTTP status code of the" }, { "data": "reporter identifies the origin of the request/response. It is set to client if it originated from the client, server if it originated from the server, or unknown if its origin is unknown. This metric supports Context Options. httpV2 is an updated version of the existing http metrics. These metrics can not be enabled at the same time as http. The main difference is that httprequeststotal and httpresponsestotal have been consolidated, and use the response flow data. Additionally, the httprequestduration_seconds metric source/destination related labels now are from the perspective of the request. In the http metrics, the source/destination were swapped, because the metric uses the response flow data, where the source/destination are swapped, but in httpV2 we correctly account for this. | Name | Labels | Default | Description | |:|:--|:-|:-| | httprequeststotal | method, protocol, status, reporter | Disabled | Count of HTTP requests | | httprequestduration_seconds | method, reporter | Disabled | Histogram of HTTP request duration in seconds | Name Labels Default Description httprequeststotal method, protocol, status, reporter Disabled Count of HTTP requests httprequestduration_seconds method, reporter Disabled Histogram of HTTP request duration in seconds method is the HTTP method of the request/response. protocol is the HTTP protocol of the request, (For example: HTTP/1.1, HTTP/2). status is the HTTP status code of the response. reporter identifies the origin of the request/response. It is set to client if it originated from the client, server if it originated from the server, or unknown if its origin is unknown. | Option Key | Option Value | Description | |:-|:|:| | exemplars | True | Include extracted trace IDs in HTTP metrics. Requires OpenMetrics to be enabled. 
| Option Key Option Value Description exemplars true Include extracted trace IDs in HTTP metrics. Requires OpenMetrics to be enabled. This metric supports Context Options. | Name | Labels | Default | Description | |:--|:-|:-|:| | icmp_total | family, type | Disabled | Number of ICMP messages | Name Labels Default Description icmp_total family, type Disabled Number of ICMP messages This metric supports Context Options. | Name | Labels | Default | Description | |:-|:-|:-|:| | kafkarequeststotal | topic, apikey, errorcode, reporter | Disabled | Count of Kafka requests by topic | | kafkarequestdurationseconds | topic, apikey, reporter | Disabled | Histogram of Kafka request duration by topic | Name Labels Default Description kafkarequeststotal topic, apikey, errorcode, reporter Disabled Count of Kafka requests by topic kafkarequestduration_seconds topic, api_key, reporter Disabled Histogram of Kafka request duration by topic This metric supports Context Options. | Name | Labels | Default | Description | |:|:|:-|:| | portdistributiontotal | protocol, port | Disabled | Numbers of packets distributed by destination port | Name Labels Default Description portdistributiontotal protocol, port Disabled Numbers of packets distributed by destination port This metric supports Context Options. | Name | Labels | Default | Description | |:-|:-|:-|:| | tcpflagstotal | flag, family | Disabled | TCP flag occurrences | Name Labels Default Description tcpflagstotal flag, family Disabled TCP flag occurrences This metric supports Context Options. This is dynamic hubble exporter metric. | Name | Labels | Default | Description | |:|:|:-|:--| | dynamicexporterexporters_total | source | Enabled | Number of configured hubble exporters | Name Labels Default Description dynamicexporterexporters_total source Enabled Number of configured hubble exporters active inactive This is dynamic hubble exporter" }, { "data": "| Name | Labels | Default | Description | |:--|:|:-|:-| | dynamicexporterup | source | Enabled | Status of exporter (1 - active, 0 - inactive) | Name Labels Default Description dynamicexporterup source Enabled Status of exporter (1 - active, 0 - inactive) name identifies exporter name This is dynamic hubble exporter metric. | Name | Labels | Default | Description | |:-|:|:-|:| | dynamicexporterreconfigurations_total | op | Enabled | Number of dynamic exporters reconfigurations | Name Labels Default Description dynamicexporterreconfigurations_total op Enabled Number of dynamic exporters reconfigurations add update remove This is dynamic hubble exporter metric. | Name | Labels | Default | Description | |:--|:|:-|:-| | dynamicexporterconfig_hash | nan | Enabled | Hash of last applied config | Name Labels Default Description dynamicexporterconfig_hash Enabled Hash of last applied config This is dynamic hubble exporter metric. | Name | Labels | Default | Description | |:-|:|:-|:| | dynamicexporterconfiglastapplied | nan | Enabled | Timestamp of last applied config | Name Labels Default Description dynamicexporterconfiglastapplied Enabled Timestamp of last applied config To expose any metrics, invoke clustermesh-apiserver with the --prometheus-serve-addr option. This option takes a IP:Port pair but passing an empty IP (e.g. :9962) will bind the server to all available interfaces (there is usually only one in a container). All metrics are exported under the ciliumclustermeshapiserver_ Prometheus namespace. 
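If you scrape this endpoint directly instead of relying on the Helm-managed service and annotations, a minimal Prometheus scrape sketch could look like this; the target address is a placeholder, so substitute the real address of your clustermesh-apiserver and the port you passed to --prometheus-serve-addr:

```
# Sketch: scrape the clustermesh-apiserver metrics endpoint directly.
scrape_configs:
  - job_name: 'clustermesh-apiserver'
    static_configs:
      # Placeholder target; point this at your clustermesh-apiserver and the
      # port given to --prometheus-serve-addr (":9962" in the example above).
      - targets: ['clustermesh-apiserver.kube-system.svc:9962']
```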
| Name | Labels | Description | |:|:|:-| | kvstoreoperationsduration_seconds | action, kind, outcome, scope | Duration of kvstore operation | | kvstoreeventsqueue_seconds | action, scope | Seconds waited before a received event was queued | | kvstorequorumerrors_total | error | Number of quorum errors | | kvstoresyncerrorstotal | scope, sourcecluster | Number of times synchronization to the kvstore failed | | kvstoresyncqueuesize | scope, sourcecluster | Number of elements queued for synchronization in the kvstore | | kvstoreinitialsynccompleted | scope, sourcecluster, action | Whether the initial synchronization from/to the kvstore has completed | Name Labels Description kvstoreoperationsduration_seconds action, kind, outcome, scope Duration of kvstore operation kvstoreeventsqueue_seconds action, scope Seconds waited before a received event was queued kvstorequorumerrors_total error Number of quorum errors kvstoresyncerrors_total scope, source_cluster Number of times synchronization to the kvstore failed kvstoresyncqueue_size scope, source_cluster Number of elements queued for synchronization in the kvstore kvstoreinitialsync_completed scope, source_cluster, action Whether the initial synchronization from/to the kvstore has completed | Name | Labels | Description | |:-|:-|:| | apilimiterprocessedrequeststotal | apicall, outcome, returncode | Total number of API requests processed | | apilimiterprocessingdurationseconds | api_call, value | Mean and estimated processing duration in seconds | | apilimiterratelimit | apicall, value | Current rate limiting configuration (limit and burst) | | apilimiterrequestsinflight | api_call value | Current and maximum allowed number of requests in flight | | apilimiterwaitdurationseconds | api_call, value | Mean, min, and max wait duration | Name Labels Description apilimiterprocessedrequeststotal apicall, outcome, returncode Total number of API requests processed apilimiterprocessingdurationseconds api_call, value Mean and estimated processing duration in seconds apilimiterrate_limit api_call, value Current rate limiting configuration (limit and burst) apilimiterrequestsinflight api_call value Current and maximum allowed number of requests in flight apilimiterwaitdurationseconds api_call, value Mean, min, and max wait duration | Name | Labels | Default | Description | |:--|:-|:-|:| | controllersgrouprunstotal | status, groupname | Enabled | Number of times that a controller process was run, labeled by controller group name | Name Labels Default Description controllersgroupruns_total status, group_name Enabled Number of times that a controller process was run, labeled by controller group name The controllersgroupruns_total metric reports the success and failure count of each controller within the system, labeled by controller group name and completion status. Enabling this metric is on a per-controller basis. This is configured using an allow-list which is passed as the controller-group-metrics configuration flag. The current default set for clustermesh-apiserver found in the Cilium Helm chart is the special name all, which enables the metric for all controller groups. The special name none is also" }, { "data": "To expose any metrics, invoke kvstoremesh with the --prometheus-serve-addr option. This option takes a IP:Port pair but passing an empty IP (e.g. :9964) binds the server to all available interfaces (there is usually only one interface in a container). All metrics are exported under the ciliumkvstoremesh Prometheus namespace. 
| Name | Labels | Description |
|:--|:--|:--|
| remote_clusters | source_cluster | The total number of remote clusters meshed with the local cluster |
| remote_cluster_failures | source_cluster, target_cluster | The total number of failures related to the remote cluster |
| remote_cluster_last_failure_ts | source_cluster, target_cluster | The timestamp of the last failure of the remote cluster |
| remote_cluster_readiness_status | source_cluster, target_cluster | The readiness status of the remote cluster |

| Name | Labels | Description |
|:--|:--|:--|
| kvstore_operations_duration_seconds | action, kind, outcome, scope | Duration of kvstore operation |
| kvstore_events_queue_seconds | action, scope | Seconds waited before a received event was queued |
| kvstore_quorum_errors_total | error | Number of quorum errors |
| kvstore_sync_errors_total | scope, source_cluster | Number of times synchronization to the kvstore failed |
| kvstore_sync_queue_size | scope, source_cluster | Number of elements queued for synchronization in the kvstore |
| kvstore_initial_sync_completed | scope, source_cluster, action | Whether the initial synchronization from/to the kvstore has completed |

| Name | Labels | Description |
|:--|:--|:--|
| api_limiter_processed_requests_total | api_call, outcome, return_code | Total number of API requests processed |
| api_limiter_processing_duration_seconds | api_call, value | Mean and estimated processing duration in seconds |
| api_limiter_rate_limit | api_call, value | Current rate limiting configuration (limit and burst) |
| api_limiter_requests_in_flight | api_call, value | Current and maximum allowed number of requests in flight |
| api_limiter_wait_duration_seconds | api_call, value | Mean, min, and max wait duration |

| Name | Labels | Default | Description |
|:--|:--|:--|:--|
| controllers_group_runs_total | status, group_name | Enabled | Number of times that a controller process was run, labeled by controller group name |

The controllers_group_runs_total metric reports the success and failure count of each controller within the system, labeled by controller group name and completion status. The metric is enabled on a per-controller basis, configured through an allow-list passed as the controller-group-metrics configuration flag. The current default set for kvstoremesh in the Cilium Helm chart is the special name all, which enables the metric for all controller groups. The special name none is also supported. Copyright Cilium Authors." } ]
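The section above notes that kvstoremesh serves its metrics over HTTP at the address given by --prometheus-serve-addr, under the cilium_kvstoremesh namespace. Below is a minimal sketch (Python, standard library only) that scrapes such an endpoint and prints only the kvstoremesh series; the host, port, and /metrics path are assumptions based on the example address and the usual Prometheus conventions, not values fixed by the product.

```python
import urllib.request

# Assumes kvstoremesh was started with --prometheus-serve-addr=":9964"
# and that this script can reach that port (for example via a port-forward).
METRICS_URL = "http://localhost:9964/metrics"

def fetch_kvstoremesh_metrics(url: str = METRICS_URL) -> list[str]:
    """Return the Prometheus text-format lines for cilium_kvstoremesh_* series."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode("utf-8")
    return [
        line
        for line in body.splitlines()
        if line.startswith("cilium_kvstoremesh_") and not line.startswith("#")
    ]

if __name__ == "__main__":
    for line in fetch_kvstoremesh_metrics():
        print(line)
```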
{ "category": "Observability and Analysis", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Hubble", "subcategory": "Observability" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for ) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
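Because quoted strings, backslash escapes, and qualifiers all interact, it can help to assemble queries programmatically. The sketch below applies the escaping rules described above (backslash-escaping " and \ inside an exact-match term) and joins qualifiers into one query string; the helper names are hypothetical, and the output is simply a string you can paste into the GitHub search box.

```python
def quote_term(term: str) -> str:
    """Wrap a term in quotes for an exact match, escaping \\ and \" as the syntax requires."""
    escaped = term.replace("\\", "\\\\").replace('"', '\\"')
    return f'"{escaped}"'

def build_query(exact: str, **qualifiers: str) -> str:
    """Combine an exact-match term with qualifiers such as repo=..., path=..., language=...."""
    parts = [quote_term(exact)]
    parts += [f"{key}:{value}" for key, value in qualifiers.items()]
    return " ".join(parts)

if __name__ == "__main__":
    # Searches for the literal string `name = "tensorflow"` in Python files of one repository.
    print(build_query('name = "tensorflow"', repo="github-linguist/linguist", language="python"))
    # -> "name = \"tensorflow\"" repo:github-linguist/linguist language:python
```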
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "InfluxData", "subcategory": "Observability" }
[ { "data": "InfluxDB Cloud Dedicated InfluxDB client libraries are language-specific packages that integrate with InfluxDB APIs. Flight clients are language-specific drivers that can interact with Flight servers using the Arrow in-memory format and the Flight RPC framework. View the list of available clients. InfluxDB v3 client libraries use InfluxDB HTTP APIs to write data and use Flight clients to execute SQL and InfluxQL queries. View the list of available client libraries. InfluxDB v2 client libraries use InfluxDB /api/v2 endpoints and work with InfluxDB 2.0 API compatibility endpoints. View the list of available client libraries. InfluxDB v1 client libraries use the InfluxDB 1.7 API and should be fully compatible with InfluxDB 1.5+. View the list of available client libraries. Was this page helpful? Thank you for your feedback! Thank you for being part of our community! We welcome and encourage your feedback and bug reports for InfluxDB and this documentation. To find support, use the following resources: Customers with an annual or support contract can contact InfluxData Support. Enter your InfluxDB Cloud Dedicated cluster URL and well update code examples for you. Let us know what we can do better: Thank you! Flux is going into maintenance mode. You can continue using it as you currently are without any changes to your code. Read more" } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Inspektor Gadget", "subcategory": "Observability" }
[ { "data": "The advise gadgets suggest different system configurations by capturing and analyzing data from the host. This audit gadgets help to audit specific functionalities or security settings. The profile gadgets provide a way to measure the performance of a sub-system. These gadgets capture system events for a period and then print a report. The snapshot gadgets capture and print the status of a system at a specific point in time. The top gadgets show the current activity sorted by the highest to the lowest in the resource being observed, generating the output every few seconds. The trace gadgets capture and print system events. Get strace-like logs of a container from the past. Copyright 2024 The Inspektor Gadget Contributors The Linux Foundation (TLF) has registered trademarks and uses trademarks.For a list of TLF trademarks, see Trademark Usage" } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Jaeger", "subcategory": "Observability" }
[ { "data": "We stand with our friends and colleagues in Ukraine. To support Ukraine in their time of need visit this page. Securing Jaeger Installation This page documents the existing security mechanisms in Jaeger, organized by the pairwise connections between Jaeger components. We ask for community help with implementing additional security measures (see issue-1718external link ). Deployments that involve jaeger-agent are meant for trusted environments where the agent is run as a sidecar within the containers network namespace, or as a host agent. Therefore, there is currently no support for traffic encryption between clients and agents. OpenTelemetry SDKs can be configured to communicate directly with jaeger-collector via gRPC or HTTP, with optional TLS enabled. 2024 The Jaeger Authors. Documentation distributed under CC-BY-4.0. 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Observability and Analysis", "file_name": "frontend-ui.md", "project_name": "Jaeger", "subcategory": "Observability" }
[ { "data": "We stand with our friends and colleagues in Ukraine. To support Ukraine in their time of need visit this page. Architecture See also: Jaeger represents tracing data in a data model inspired by the OpenTracing Specificationexternal link . The data model is logically very similar to OpenTelemetry Tracesexternal link , with some naming differences: | Jaeger | OpenTelemetry | Notes | |:-|:-|:-| | Tags | Attributes | Both support typed values, but nested tags are not supported in Jaeger. | | Span Logs | Span Events | Point-in-time events on the span recorded in a structured form. | | Span References | Span Links | Jaegers Span References have a required type (child-of or follows-from) and always refer to predecessor spans; OpenTelemetrys Span Links have no type, but allow attributes. | | Process | Resource | A struct describing the entity that produces the telemetry. | A span represents a logical unit of work that has an operation name, the start time of the operation, and the duration. Spans may be nested and ordered to model causal relationships. A trace represents the data or execution path through the system. It can be thought of as a directed acyclic graph of spans. Baggage is arbitrary user-defined metadata (key-value pairs) that can be attached to distributed context and propagated by the tracing SDKs. See W3C Baggageexternal link for more information. Jaeger can be deployed either as an all-in-one binary, where all Jaeger backend components run in a single process, or as a scalable distributed system. There are two main deployment options discussed below. In this deployment the collectors receive the data from traced applications and write it directly to storage. The storage must be able to handle both the average and peak traffic. Collectors use an in-memory queue to smooth short-term traffic peaks, but a sustained traffic spike may result in dropped data if the storage is not able to keep up. Collectors are able to centrally serve sampling configuration to the SDKs, known as remote sampling mode . They can also enable automatic sampling configuration calculation, known as adaptive sampling . To prevent data loss between collectors and storage, Kafka can be used as an intermediary, persistent queue. An additional component, jaeger-ingester, needs to be deployed to read data from Kafka and save to the database. Multiple jaeger-ingesters can be deployed to scale up ingestion; they will automatically partition the load across them. You do not need to use OpenTelemetry Collector, because jaeger-collector can receive OpenTelemetry data directly from the OpenTelemetry SDKs (using OTLP exporters). However, if you already use the OpenTelemetry Collectors, such as for gathering other types of telemetry or for pre-processing / enriching the tracing data, it can be placed between the SDKs and" }, { "data": "The OpenTelemetry Collectors can be run as an application sidecar, as a host agent / daemon, or as a central cluster. The OpenTelemetry Collector supports Jaegers Remote Sampling protocol and can either serve static configurations from config files directly, or proxy the requests to the Jaeger backend (e.g., when using adaptive sampling). Benefits: Downsides: Benefits: Downsides: This section details the constituent parts of Jaeger and how they relate to each other. It is arranged by the order in which spans from your application interact with them. In order to generate tracing data, the applications must be instrumented. 
An instrumented application creates spans when receiving new requests and attaches context information (trace id, span id, and baggage) to outgoing requests. Only the ids and baggage are propagated with requests; all other profiling data, like operation name, timing, tags and logs, is not propagated. Instead, it is exported out of process to the Jaeger backend asynchronously, in the background. There are many ways to instrument an application: Instrumentation typically should not depend on specific tracing SDKs, but only on abstract tracing APIs like the OpenTelemetry API. The tracing SDKs implement the tracing APIs and take care of data export. The instrumentation is designed to be always on in production. To minimize the overhead, the SDKs employ various sampling strategies. When a trace is sampled, the profiling span data is captured and transmitted to the Jaeger backend. When a trace is not sampled, no profiling data is collected at all, and the calls to the tracing API are short-circuited to incur the minimal amount of overhead. For more information, please refer to the Sampling page. jaeger-agent is a network daemon that listens for spans sent over UDP, which are batched and sent to the collector. It is designed to be deployed to all hosts as an infrastructure component. The agent abstracts the routing and discovery of the collectors away from the client. jaeger-agent is not a required component. jaeger-collector receives traces, runs them through a processing pipeline for validation and clean-up/enrichment, and stores them in a storage backend. Jaeger comes with built-in support for several storage backends (see Deployment ), as well as extensible plugin framework for implementing custom storage plugins. jaeger-query is a service that exposes the APIs for retrieving traces from storage and hosts a Web UI for searching and analyzing traces. jaeger-ingester is a service that reads traces from Kafka and writes them to a storage backend. Effectively, it is a stripped-down version of the Jaeger collector that supports Kafka as the only input protocol. 2024 The Jaeger Authors. Documentation distributed under CC-BY-4.0. 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
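To make the terminology mapping above concrete, the sketch below creates a parent and a child span with the OpenTelemetry SDK: the attribute set here is what Jaeger displays as a tag, the event is what Jaeger displays as a span log, and the service.name resource becomes the Jaeger process. Exporter setup is omitted and the names are arbitrary examples, not part of Jaeger itself.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

# The resource describes the entity producing telemetry (Jaeger's "process").
trace.set_tracer_provider(
    TracerProvider(resource=Resource.create({"service.name": "checkout"}))
)
tracer = trace.get_tracer("checkout-example")

with tracer.start_as_current_span("handle-request") as parent:
    parent.set_attribute("http.method", "GET")       # shown as a tag in Jaeger
    with tracer.start_as_current_span("charge-card") as child:
        child.add_event("retrying", {"attempt": 2})   # shown as a span log in Jaeger
```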
{ "category": "Observability and Analysis", "file_name": "operator.md", "project_name": "Jaeger", "subcategory": "Observability" }
[ { "data": "We stand with our friends and colleagues in Ukraine. To support Ukraine in their time of need visit this page. External Guides Guides hosted outside of the main Jaeger documentation. 2024 The Jaeger Authors. Documentation distributed under CC-BY-4.0. 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Observability and Analysis", "file_name": "sampling.md", "project_name": "Jaeger", "subcategory": "Observability" }
[ { "data": "We stand with our friends and colleagues in Ukraine. To support Ukraine in their time of need visit this page. Frontend/UI Configuration Several aspects of the UI can be configured: These options can be configured by a JSON configuration file. The --query.ui-config command line parameter of the query service must then be set to the path to the JSON file when the query service is started. An example configuration file (see complete schema hereexternal link ): ``` { \"dependencies\": { \"dagMaxNumServices\": 200, \"menuEnabled\": true }, \"monitor\": { \"menuEnabled\": true }, \"archiveEnabled\": true, \"tracking\": { \"gaID\": \"UA-000000-2\", \"trackErrors\": true }, \"menu\": [ { \"label\": \"About Jaeger\", \"items\": [ { \"label\": \"GitHub\", \"url\": \"https://github.com/jaegertracing/jaeger\" }, { \"label\": \"Docs\", \"url\": \"http://jaeger.readthedocs.io/en/latest/\" } ] } ], \"search\": { \"maxLookback\": { \"label\": \"2 Days\", \"value\": \"2d\" }, \"maxLimit\": 1500 }, \"linkPatterns\": [{ \"type\": \"process\", \"key\": \"jaeger.version\", \"url\": \"https://github.com/jaegertracing/jaeger-client-java/releases/tag/#{jaeger.version}\", \"text\": \"Information about Jaeger release #{jaeger.version}\" }, { \"type\": \"tags\", \"key\": \"uniqueId\", \"url\": \"https://mykibana.com/uniqueId=#{uniqueId}&traceId=#{trace.traceID}\", \"text\": \"Redirect to kibana to view log\" }] } ``` dependencies.dagMaxNumServices defines the maximum number of services allowed before the DAG dependency view is disabled. Default: 200. dependencies.menuEnabled enables (true) or disables (false) the dependencies menu button. Default: true. monitor.menuEnabled enables (true) or disables (false) the Monitor menu button. Default: false. archiveEnabled enables (true) or disables (false) the archive traces button. Default: false. It requires a configuration of an archive storage in Query service. Archived traces are only accessible directly by ID, they are not searchable. tracking.gaID defines the Google Analytics tracking ID. This is required for Google Analytics tracking, and setting it to a non-null value enables Google Analytics tracking. Default: null. tracking.customWebAnalytics defines a factory function for a custom tracking plugin (only when using Javascript-form of UI configuration). tracking.trackErrors enables (true) or disables (false) error tracking. Errors can only be tracked if a valid analytics tracker is configured. Default: true. For additional details on app analytics see the tracking READMEexternal link in the UI repo. menu allows additional links to be added to the global nav. The additional links are right-aligned. In the sample JSON config above, the configured menu will have a dropdown labeled About Jaeger with sub-options for GitHub and Docs. The format for a link in the top right menu is as follows: ``` { \"label\": \"Some text here\", \"url\": \"https://example.com\" } ``` Links can either be members of the menu Array, directly, or they can be grouped into a dropdown menu option. The format for a group of links is: ``` { \"label\": \"Dropdown button\", \"items\": [ ] } ``` The items Array should contain one or more link configurations. The search.maxLimit configures the maximum results that the input let you search. The search.maxLookback configures the maximum time before the present users can query for traces. 
The options in the Lookback dropdown greater than this value will not be" }, { "data": "| Field | Description | |:--|:--| | label | The text displayed in the search form dropdown | | value | The value submitted in the search query if the label is selected | The linkPatterns node can be used to create links from fields displayed in the Jaeger UI. | Field | Description | |:--|:-| | type | The metadata section in which your link will be added: process, tags, logs, traces | | key | The name of tag/process/log attribute which value will be displayed as a link, this field is not necessary for type traces. | | url | The URL where the link should point to, it can be an external site or relative path in Jaeger UI | | text | The text displayed in the tooltip for the link | Both url and text can be defined as templates (i.e. using #{field-name}) where Jaeger UI will dynamically substitute values based on tags/logs/traces data. For traces, the supported template fields are: duration, endTime, startTime, traceName and traceID. Further, the trace template fields are available for substitution in process/logs/tags type when the trace template fields are prefixed with trace.. For example: trace.traceID, trace.startTime. Starting with version 1.9, Jaeger UI provides an embedded layout mode which is intended to support integrating Jaeger UI into other applications. Currently (as of v0), the approach taken is to remove various UI elements from the page to make the UI better suited for space-constrained layouts. The embedded mode is induced and configured via URL query parameters. To enter embedded mode, the uiEmbed=v0 query parameter and value must be added to the URL. For example, the following URL will show the trace with ID abc123 in embedded mode: ``` http://localhost:16686/trace/abc123?uiEmbed=v0 ``` uiEmbed=v0 is required. Further, each page supported has an button added that will open the non-embedded page in a new tab. The following pages support embedded mode: To integrate the Search Trace Page to our application we have to indicate to the Jaeger UI that we want to use the embed mode with uiEmbed=v0. For example: ``` http://localhost:16686/search? service=my-service& start=1543917759557000& end=1543921359557000& limit=20& lookback=1h& maxDuration& minDuration& uiEmbed=v0 ``` The following query parameter can be used to configure the layout of the search page : ``` http://localhost:16686/search? service=my-service& start=1543917759557000& end=1543921359557000& limit=20& lookback=1h& maxDuration& minDuration& uiEmbed=v0& uiSearchHideGraph=1 ``` To integrate the Trace Page to our application we have to indicate to the Jaeger UI that we want to use the embed mode with uiEmbed=v0. For example: ``` http://localhost:16686/trace/{trace-id}?uiEmbed=v0 ``` If we have navigated to this view from the search traces page well have a button to go back to the results page. The following query parameters can be used to configure the layout of the trace page : ``` http://localhost:16686/trace/{trace-id}? uiEmbed=v0& uiTimelineCollapseTitle=1 ``` ``` http://localhost:16686/trace/{trace-id}? uiEmbed=v0& uiTimelineHideMinimap=1 ``` ``` http://localhost:16686/trace/{trace-id}? uiEmbed=v0& uiTimelineHideSummary=1 ``` We can also combine the options: ``` http://localhost:16686/trace/{trace-id}? uiEmbed=v0& uiTimelineHideMinimap=1& uiTimelineHideSummary=1 ``` 2024 The Jaeger Authors. Documentation distributed under CC-BY-4.0. 2024 The Linux Foundation. All rights reserved. 
The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
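Since embedded mode is driven entirely by URL query parameters, a small helper that assembles those URLs can avoid typos. This is a hypothetical convenience function; the base URL assumes the default jaeger-query address used in the examples above.

```python
from urllib.parse import urlencode

# Assumes the default jaeger-query address; adjust for your deployment.
BASE_URL = "http://localhost:16686"

def embedded_trace_url(trace_id: str, hide_minimap: bool = False, hide_summary: bool = False) -> str:
    """Build an embedded-mode (uiEmbed=v0) URL for a single trace."""
    params = {"uiEmbed": "v0"}
    if hide_minimap:
        params["uiTimelineHideMinimap"] = "1"
    if hide_summary:
        params["uiTimelineHideSummary"] = "1"
    return f"{BASE_URL}/trace/{trace_id}?{urlencode(params)}"

print(embedded_trace_url("abc123", hide_minimap=True, hide_summary=True))
# -> http://localhost:16686/trace/abc123?uiEmbed=v0&uiTimelineHideMinimap=1&uiTimelineHideSummary=1
```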
{ "category": "Observability and Analysis", "file_name": "docs.keephq.dev.md", "project_name": "Keep", "subcategory": "Observability" }
[ { "data": "Enrichments Supported Providers Syntax Conditions Functions Throttles Examples Providers Healthcheck Alerts Webhook settings Workflows Commands Keep is an open-source alert management and automation tool that provides everything you need to create and manage alerts effectively. An alert is an event that is triggered when something undesirable occurs or is about to occur. It is usually triggered by monitoring tools such as Prometheus, Grafana, or CloudWatch, and in some cases, proprietary tools. Alerts usually categorized into three different groups: Keep helps with every step of the alert lifecycle: Alerts can either be pulled by Keep or pushed into it. Keep also offers zero-click alert instrumentation through webhook installation." } ]
{ "category": "Observability and Analysis", "file_name": "github-privacy-statement.md", "project_name": "Kuberhealthy", "subcategory": "Observability" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Observability and Analysis", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Kuberhealthy", "subcategory": "Observability" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for ) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
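To show how these qualifiers compose, here is a hypothetical query (the organization is the one used in the examples above; the symbol name is a placeholder) that combines an organization filter, a language filter, a regular-expression symbol search, a path glob, and a negated is: qualifier, all of which are described above:

```
org:github language:go symbol:/^NewClient/ path:*.go NOT is:archived
```

Because adjacent terms are implicitly ANDed, this matches Go files in non-archived repositories of the github organization whose paths end in .go and that define a symbol starting with NewClient.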
{ "category": "Observability and Analysis", "file_name": "docs.github.com.md", "project_name": "Kuberhealthy", "subcategory": "Observability" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Last9", "subcategory": "Observability" }
[ { "data": "Understand what Levitate is and how to quickly setup your first cluster Step-by-step notes on some common Levitate workflows Send data from Prometheus, Cloudwatch, OTel Collector, and more Tiering stored data as per usage and inflight cardinality controls Goverance for reading data and using Levitate as a datasource Set up alerts, pattern matching, receive notifications, an IaC tool for alerting APM toolkit for Node.js, Ruby, Golang, and Python applications Common how-tos for Prometheus, Kubernetes, VictoriaMetrics, etc. Frequently asked questions about Levitate what, why, how Cloud Native Monitoring 2024 Last9, Inc All rights reserved. SOC2 Type 2 certified. Contact us for the report." } ]
{ "category": "Observability and Analysis", "file_name": "docs.md", "project_name": "Kuberhealthy", "subcategory": "Observability" }
[ { "data": "Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components. Operators follow Kubernetes principles, notably the control loop. The operator pattern aims to capture the key aim of a human operator who is managing a service or set of services. Human operators who look after specific applications and services have deep knowledge of how the system ought to behave, how to deploy it, and how to react if there are problems. People who run workloads on Kubernetes often like to use automation to take care of repeatable tasks. The operator pattern captures how you can write code to automate a task beyond what Kubernetes itself provides. Kubernetes is designed for automation. Out of the box, you get lots of built-in automation from the core of Kubernetes. You can use Kubernetes to automate deploying and running workloads, and you can automate how Kubernetes does that. Kubernetes' operator pattern concept lets you extend the cluster's behaviour without modifying the code of Kubernetes itself by linking controllers to one or more custom resources. Operators are clients of the Kubernetes API that act as controllers for a Custom Resource. Some of the things that you can use an operator to automate include: What might an operator look like in more detail? Here's an example: The most common way to deploy an operator is to add the Custom Resource Definition and its associated Controller to your cluster. The Controller will normally run outside of the control plane, much as you would run any containerized application. For example, you can run the controller in your cluster as a Deployment. Once you have an operator deployed, you'd use it by adding, modifying or deleting the kind of resource that the operator uses. Following the above example, you would set up a Deployment for the operator itself, and then: ``` kubectl get SampleDB # find configured databases kubectl edit SampleDB/example-database # manually change some settings ``` and that's it! The operator will take care of applying the changes as well as keeping the existing service in good shape. If there isn't an operator in the ecosystem that implements the behavior you want, you can code your own. You also implement an operator (that is, a Controller) using any language / runtime that can act as a client for the Kubernetes API. Following are a few libraries and tools you can use to write your own cloud native operator. Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or projects. See the CNCF website guidelines for more details. You should read the content guide before proposing a change that adds an extra third-party link. Was this page helpful? Thanks for the feedback. If you have a specific, answerable question about how to use Kubernetes, ask it on Stack Overflow. Open an issue in the GitHub Repository if you want to report a problem or suggest an improvement." } ]
{ "category": "Observability and Analysis", "file_name": "quick-start.md", "project_name": "KubeSkoop", "subcategory": "Observability" }
[ { "data": "KubeSkoop is a network diagnosis and monitoring suite for Kubernetes, and supports multiple CNI plugins and IaaS cloud providers. It is designed to help users quickly diagnose and troubleshoot container network problems, and provide the ability to locate and track network events. For different network plugins and IaaS providers, KubeSkoop automatically builds network links in Kubernetes clusters, combined with eBPF's in-depth monitoring of the kernel's critical paths, to analyze common Kubernetes cluster network problems. By analyzing link configurations and backtracking historical network anomalies, it can significantly simplify the difficulty and time consuming of diagnosing Kubernetes network problems. You can view Quick Start to get started with KubeSkoop, or Installation to deploy a production-ready KubeSkoop instance. Feel free to open issues and pull requests. Any feedback is much appreciated! Most source code in KubeSkoop which running on userspace are licensed under the Apache License, Version 2.0. The BPF code in /bpf directory are licensed under the GPL v2.0 to compact with Linux kernel helper functions." } ]
{ "category": "Observability and Analysis", "file_name": "github-privacy-statement.md", "project_name": "Loggie", "subcategory": "Observability" }
[ { "data": "file source is used for log collection. Example ``` sources: type: file name: accesslog ``` Tips If you use logconfig/clusterlogconfig to collect container logs, additional fields are added to the file source, please refer to here. | field | type | required | default | description | |:--|:-|:--|:-|:-| | paths | string array | True | none | The collected paths are matched using glob expressions. Support glob expansion expressions Brace Expansion and Glob Star | Example Object files to be collected: ``` /tmp/loggie/service/order/access.log /tmp/loggie/service/order/access.log.2022-04-11 /tmp/loggie/service/pay/access.log /tmp/loggie/service/pay/access.log.2022-04-11 ``` Corresponding configuration: ``` sources: type: file paths: /tmp/loggie//access.log{,.--} ``` | field | type | required | default | description | |:-|:-|:--|:-|:-| | excludeFiles | string array | False | none | Exclude collected files regular expression | Example ``` sources: type: file paths: /tmp/*.log excludeFiles: \\.gz$ ``` | field | type | required | default | description | |:|:--|:--|:-|:| | ignoreOlder | time.Duration | False | none | for example, 48h, which means to ignore files whose update time is 2 days ago | | field | type | required | default | description | |:--|:-|:--|:-|:-| | ignoreSymlink | bool | False | False | whether to ignore symbolic links (soft links) files | | field | type | required | default | description | |:-|:-|:--|:-|:--| | addonMeta | bool | False | False | whether to add the default log collection state meta information | event example ``` { \"body\": \"this is test\", \"state\": { \"pipeline\": \"local\", \"source\": \"demo\", \"filename\": \"/var/log/a.log\", \"timestamp\": \"2006-01-02T15:04:05.000Z\", \"offset\": 1024, \"bytes\": 4096, \"hostname\": \"node-1\" } } ``` state explanation | field | type | required | default | description | |:|:-|:--|-:|:| | workerCount | int | False | 1 | The number of worker threads (goroutines) that read the contents of the file. Consider increasing it when there are more than 100 files on a single node | | field | type | required | default | description | |:|:-|:--|-:|:-| | readBufferSize | int | False | 65536 | The amount of data to read from the file at a time. Default 64K=65536 | | field | type | required | default | description | |:-|:-|:--|-:|:--| | maxContinueRead | int | False | 16 | The number of times the content of the same file is read continuously. Reaching this number of times cause forced switch to the next file to read. The main function is to prevent active files from occupying reading resources all the time, in which case inactive files cannot be read and collected for a long time. | | field | type | required | default | description | |:--|:--|:--|:-|:--| | maxContinueReadTimeout | time.Duration | False | 3s | The maximum reading time of the same file. If this time is exceeded, the next file will be forced to be read. Similar to maxContinueRead | | field | type | required | default | description | |:-|:--|:--|:-|:| | inactiveTimeout |" }, { "data": "| False | 3s | If the file has exceeded inactiveTimeout from the last collection, it is considered that the file has entered an inactive state (that is, the last log has been written), and that the last line of log can be collected safely. | | field | type | required | default | description | |:-|:-|:--|-:|:| | firstNBytesForIdentifier | int | False | 128 | Use the first n characters of the collected target file to generate the file unique code. 
If the size of the file is less than n, the file will not be collected temporarily. The main purpose is to accurately identify a file in combination with file inode information and to determine whether the file is deleted or renamed. | Encoding conversion, used to convert different encodings to utf8. Example ``` sources: type: file name: demo paths: /tmp/log/*.log fields: topic: \"loggie\" charset: \"gbk\" ``` | field | type | required | default | description | |:--|:-|:--|:-|:| | charset | string | False | utf-8 | Matching model for extracted fields | The currently supported encoding formats for converting to utf-8 are: Newline symbol configuration Example ``` sources: type: file name: demo lineDelimiter: type: carriagereturnline_feed value: \"\\r\\n\" charset: gbk ``` | field | type | required | default | description | |:--|:-|:--|:-|:-| | type | bool | False | auto | value is only valid when type is custom | Currently supported types are: The corresponding newline symbols are: ``` ``` auto: {'\\u000A'}, line_feed: {'\\u000A'}, vertical_tab: {'\\u000B'}, form_feed: {'\\u000C'}, carriage_return: {'\\u000D'}, carriagereturnline_feed: []byte(\"\\u000D\\u000A\"), next_line: {'\\u0085'}, line_separator: []byte(\"\\u2028\"), paragraph_separator: []byte(\"\\u2029\"), null_terminator: {'\\u0000'}, ``` ``` | field | type | required | default | description | |:--|:-|:--|:-|:| | value | string | False | \\n | newline symbol | | field | type | required | default | description | |:--|:-|:--|:-|:| | charset | string | False | utf-8 | newline symbol encoding | Multi-line collection configuration Example ``` sources: type: file name: accesslog multi: active: true ``` | field | type | required | default | description | |:--|:-|:--|:-|:--| | active | bool | False | False | whether to enable multi-line | | field | type | required | default | description | |:--|:-|:--|:-|:| | pattern | string | required when multi.active=true | False | A regular expression that is used to judge whether a line is a brand new log. For example, if it is configured as '^[', it is considered that a line beginning with [ is a new log, otherwise the content of this line is merged into the previous log as part of the previous log. | | field | type | required | default | description | |:|:-|:--|-:|:-| | maxLines | int | False | 500 | Number of lines a log can contains at most. The default is 500 lines. If the upper limit is exceeded, the current log will be forced to be sent, and the excess will be used as a new log. | | field | type | required | default | description | |:|:-|:--|-:|:--| | maxBytes | int64 | False | 131072 | Number of bytes a log can contains at most. The default is" }, { "data": "If the upper limit is exceeded, the current log will be forced to be sent, and the excess will be used as a new log. | | field | type | required | default | description | |:--|:--|:--|:-|:--| | timeout | time.Duration | False | 5s | How long to wait for a log to be collected as a complete log. The default is 5s. If the upper limit is exceeded, the current log will be sent, and the excess will be used as a new log. | Configuration related to the confirmation of the source. If you need to make sure at least once, you need to turn on the ack mechanism, but there will be a certain performance loss. 
Caution This configuration can only be configured in defaults Example ``` defaults: sources: type: file ack: enable: true ``` | field | type | required | default | description | |:--|:-|:--|:-|:-| | enable | bool | False | True | Whether to enable confirmation | | field | type | required | default | description | |:--|:--|:--|:-|:--| | maintenanceInterval | time.Duration | False | 20h | maintenance cycle. Used to regularly clean up expired confirmation data (such as the ack information of files that are no longer collected) | Use sqlite3 as database. Save the file name, file inode, offset of file collection and other information during the collection process. Used to restore the last collection progress after logie reload or restart. Caution This configuration can only be configured in defaults. Example ``` defaults: sources: type: file db: file: \"./data/loggie.db\" ``` | field | type | required | default | description | |:--|:-|:--|:--|:-| | file | string | False | ./data/loggie.db | database file path | | field | type | required | default | description | |:-|:-|:--|:-|:--| | tableName | string | False | registry | database table name | | field | type | required | default | description | |:-|:--|:--|:-|:-| | flushTimeout | time.Duration | False | 2s | write the collected information to the database regularly | | field | type | required | default | description | |:--|:-|:--|-:|:| | bufferSize | int | False | 2048 | The buffer size of the collection information written into the database | | field | type | required | default | description | |:|:--|:--|:-|:| | cleanInactiveTimeout | time.Duration | False | 504h | Clean up outdated data in the database. If the update time of the data exceeds the configured value, the data will be deleted. 21 days by default. | | field | type | required | default | description | |:|:--|:--|:-|:| | cleanScanInterval | time.Duration | False | 1h | Periodically check the database for outdated data. Check every 1 hour by default | Configuration for monitoring file changes Caution This configuration can only be configured in defaults Example ``` defaults: sources: type: file watcher: enableOsWatch: true ``` | field | type | required | default | description | |:--|:-|:--|:-|:-| | enableOsWatch | bool | False | True | Whether to enable the monitoring notification mechanism of the" }, { "data": "For example, inotify of linux | | field | type | required | default | description | |:--|:--|:--|:-|:--| | scanTimeInterval | time.Duration | False | 10s | Periodically check file status changes (such as file creation, deletion, etc.). Check every 10s by default | | field | type | required | default | description | |:--|:--|:--|:-|:-| | maintenanceInterval | time.Duration | False | 5m | Periodic maintenance work (such as reporting and collecting statistics, cleaning files, etc.) | | field | type | required | default | description | |:--|:--|:--|:-|:-| | fdHoldTimeoutWhenInactive | time.Duration | False | 5m | When the time from the last collection of the file to the present exceeds the limit (the file has not been written for a long time, it is considered that there is a high probability that the content will not be written again), the handle of the file will be released to release system resources | | field | type | required | default | description | |:|:--|:--|:-|:-| | fdHoldTimeoutWhenRemove | time.Duration | False | 5m | When the file is deleted and the collection is not completed, it will wait for the maximum time to complete the collection. 
If the limit is exceeded, no matter whether the file is finally collected or not, the handle will be released directly and no longer collected. | | field | type | required | default | description | |:--|:-|:--|-:|:| | maxOpenFds | int | False | 512 | The maximum number of open file handles. If the limit is exceeded, the files will not be collected temporarily | | field | type | required | default | description | |:|:-|:--|-:|:| | maxEofCount | int | False | 3 | The maximum number of times EoF is encountered in consecutive reads of a file. If the limit is exceeded, it is considered that the file is temporarily inactive and will enter the \"zombie\" queue to wait for the update event to be activated. | | field | type | required | default | description | |:--|:-|:--|:-|:--| | cleanWhenRemoved | bool | False | True | When the file is deleted, whether to delete the collection-related information in the db synchronously. | | field | type | required | default | description | |:-|:-|:--|:-|:-| | readFromTail | bool | False | False | Whether to start collecting from the latest line of the file, regardless of writing history. It is suitable for scenarios such as migration of collection systems. | | field | type | required | default | description | |:-|:--|:--|:-|:| | taskStopTimeout | time.Duration | False | 30s | The timeout period for the collection task to exit. It is a bottom-up solution when Loggie cannot be reloaded. | File clearing related configuration. Expired and collected files will be deleted directly from the disk to free up disk space. | field | type | required | default | description | |:|:-|:--|:-|:--| | maxHistoryDays | int | False | none | Maximum number of days to keep files (after collection). If the limit is exceeded, the file will be deleted directly from the disk. If not configured, the file will never be deleted |" } ]
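As a consolidated sketch of the options documented above, the fragment below combines a file source with multi-line collection and addonMeta, plus the defaults-level ack and db settings. The source name, paths, and pattern are placeholders, and in a real deployment the defaults block and the pipeline sources live in their respective configuration files.

```yaml
# Sketch only: combines fragments documented above; names and paths are placeholders.
defaults:
  sources:
    - type: file
      ack:
        enable: true               # at-least-once delivery, with some performance cost
      db:
        file: "./data/loggie.db"   # persist collection offsets across reload/restart

sources:
  - type: file
    name: accesslog
    paths:
      - /tmp/loggie/*/access.log
    excludeFiles:
      - \.gz$
    addonMeta: true                # add filename/offset/hostname state meta to each event
    multi:
      active: true
      pattern: '^\['               # a line starting with "[" begins a new log entry
      timeout: 5s
```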
{ "category": "Observability and Analysis", "file_name": "getting-started.md", "project_name": "Jaeger", "subcategory": "Observability" }
[ { "data": "We stand with our friends and colleagues in Ukraine. To support Ukraine in their time of need visit this page. CLI flags This is auto-generated documentation for CLI flags supported by Jaeger binaries. The CLI flags for the following binaries are documented below: Jaeger all-in-one distribution with agent, collector and query. Use with caution this version by default uses only in-memory database. jaeger-all-in-one can be used with these storage backends: (Experimental) jaeger-all-in-one can be used with these metrics storage types: | Flag | Default Value | |:-|:-| | --admin.http.host-port | :14269 | | The host:port (e.g. 127.0.0.1:14269 or :14269) for the admin server, including health check, /metrics, etc. | The host:port (e.g. 127.0.0.1:14269 or :14269) for the admin server, including health check, /metrics, etc. | | --admin.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --admin.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --admin.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --admin.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --admin.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --admin.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --admin.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --cassandra-archive.connect-timeout | 0s | | Timeout used for connections to Cassandra Servers | Timeout used for connections to Cassandra Servers | | --cassandra-archive.connections-per-host | 0 | | The number of Cassandra connections from a single backend instance | The number of Cassandra connections from a single backend instance | | --cassandra-archive.consistency | nan | | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | | --cassandra-archive.disable-compression | false | | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. 
This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | | --cassandra-archive.enabled | false | | Enable extra storage | Enable extra storage | | --cassandra-archive.keyspace | nan | | The Cassandra keyspace for Jaeger data | The Cassandra keyspace for Jaeger data | | --cassandra-archive.local-dc | nan | | The name of the Cassandra local data center for DC Aware host selection | The name of the Cassandra local data center for DC Aware host selection | |" }, { "data": "| 0 | | The number of attempts when reading from Cassandra | The number of attempts when reading from Cassandra | | --cassandra-archive.password | nan | | Password for password authentication for Cassandra | Password for password authentication for Cassandra | | --cassandra-archive.port | 0 | | The port for cassandra | The port for cassandra | | --cassandra-archive.proto-version | 0 | | The Cassandra protocol version | The Cassandra protocol version | | --cassandra-archive.reconnect-interval | 0s | | Reconnect interval to retry connecting to downed hosts | Reconnect interval to retry connecting to downed hosts | | --cassandra-archive.servers | nan | | The comma-separated list of Cassandra servers | The comma-separated list of Cassandra servers | | --cassandra-archive.socket-keep-alive | 0s | | Cassandra's keepalive period to use, enabled if > 0 | Cassandra's keepalive period to use, enabled if > 0 | | --cassandra-archive.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. A Timeout of zero means no timeout | | --cassandra-archive.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --cassandra-archive.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --cassandra-archive.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --cassandra-archive.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --cassandra-archive.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --cassandra-archive.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --cassandra-archive.username | nan | | Username for password authentication for Cassandra | Username for password authentication for Cassandra | | --cassandra.connect-timeout | 0s | | Timeout used for connections to Cassandra Servers | Timeout used for connections to Cassandra Servers | | --cassandra.connections-per-host | 2 | | The number of Cassandra connections from a single backend instance | The number of Cassandra connections from a single backend instance | | --cassandra.consistency | nan | | The Cassandra consistency level, e.g. 
ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | | --cassandra.disable-compression | false | | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | | --cassandra.index.logs | true | | Controls log field indexing. Set to false to" }, { "data": "| Controls log field indexing. Set to false to disable. | | --cassandra.index.process-tags | true | | Controls process tag indexing. Set to false to disable. | Controls process tag indexing. Set to false to disable. | | --cassandra.index.tag-blacklist | nan | | The comma-separated list of span tags to blacklist from being indexed. All other tags will be indexed. Mutually exclusive with the whitelist option. | The comma-separated list of span tags to blacklist from being indexed. All other tags will be indexed. Mutually exclusive with the whitelist option. | | --cassandra.index.tag-whitelist | nan | | The comma-separated list of span tags to whitelist for being indexed. All other tags will not be indexed. Mutually exclusive with the blacklist option. | The comma-separated list of span tags to whitelist for being indexed. All other tags will not be indexed. Mutually exclusive with the blacklist option. | | --cassandra.index.tags | true | | Controls tag indexing. Set to false to disable. | Controls tag indexing. Set to false to disable. | | --cassandra.keyspace | jaegerv1test | | The Cassandra keyspace for Jaeger data | The Cassandra keyspace for Jaeger data | | --cassandra.local-dc | nan | | The name of the Cassandra local data center for DC Aware host selection | The name of the Cassandra local data center for DC Aware host selection | | --cassandra.max-retry-attempts | 3 | | The number of attempts when reading from Cassandra | The number of attempts when reading from Cassandra | | --cassandra.password | nan | | Password for password authentication for Cassandra | Password for password authentication for Cassandra | | --cassandra.port | 0 | | The port for cassandra | The port for cassandra | | --cassandra.proto-version | 4 | | The Cassandra protocol version | The Cassandra protocol version | | --cassandra.reconnect-interval | 1m0s | | Reconnect interval to retry connecting to downed hosts | Reconnect interval to retry connecting to downed hosts | | --cassandra.servers | 127.0.0.1 | | The comma-separated list of Cassandra servers | The comma-separated list of Cassandra servers | | --cassandra.socket-keep-alive | 0s | | Cassandra's keepalive period to use, enabled if > 0 | Cassandra's keepalive period to use, enabled if > 0 | | --cassandra.span-store-write-cache-ttl | 12h0m0s | | The duration to wait before rewriting an existing service or operation name | The duration to wait before rewriting an existing service or operation name | | --cassandra.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. 
A Timeout of zero means no timeout | | --cassandra.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --cassandra.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --cassandra.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --cassandra.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | |" }, { "data": "| nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --cassandra.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --cassandra.username | nan | | Username for password authentication for Cassandra | Username for password authentication for Cassandra | | --collector.enable-span-size-metrics | false | | Enables metrics based on processed span size, which are more expensive to calculate. | Enables metrics based on processed span size, which are more expensive to calculate. | | --collector.grpc-server.host-port | :14250 | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | | --collector.grpc-server.max-connection-age | 0s | | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-connection-age-grace | 0s | | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-message-size | 4194304 | | The maximum receivable message size for the collector's gRPC server | The maximum receivable message size for the collector's gRPC server | | --collector.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.grpc.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). 
| | --collector.grpc.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.grpc.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.grpc.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.grpc.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.http-server.host-port | :14268 | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | | --collector.http-server.idle-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http-server.read-header-timeout | 2s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http-server.read-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | |" }, { "data": "| nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.num-workers | 50 | | The number of workers pulling items from the queue | The number of workers pulling items from the queue | | --collector.otlp.enabled | true | | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports | | --collector.otlp.grpc.host-port | nan | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | The host:port (e.g. 
127.0.0.1:12345 or :12345) of the collector's gRPC server |
| --collector.otlp.grpc.max-connection-age | 0s | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.otlp.grpc.max-connection-age-grace | 0s | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.otlp.grpc.max-message-size | 4194304 | The maximum receivable message size for the collector's gRPC server |
| --collector.otlp.grpc.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.otlp.grpc.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.otlp.grpc.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.otlp.grpc.tls.enabled | false | Enable TLS on the server |
| --collector.otlp.grpc.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.otlp.grpc.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.grpc.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.grpc.tls.reload-interval | 0s | The duration after which the certificate will be reloaded (0s means will not be reloaded) |
| --collector.otlp.http.cors.allowed-headers | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers |
| --collector.otlp.http.cors.allowed-origins | | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin |
| --collector.otlp.http.host-port | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server |
| --collector.otlp.http.idle-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-header-timeout | 2s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.otlp.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.otlp.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.otlp.http.tls.enabled | false | Enable TLS on the server |
| --collector.otlp.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.otlp.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.reload-interval | 0s | The duration after which the certificate will be reloaded (0s means will not be reloaded) |
| --collector.queue-size | 2000 | The queue size of the collector |
| --collector.queue-size-memory | 0 | (experimental) The max memory size in MiB to use for the dynamic queue. |
| --collector.tags | | One or more tags to be added to the Process tags of all spans passing through this collector. Ex: key1=value1,key2=${envVar:defaultValue} |
| --collector.zipkin.cors.allowed-headers | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers |
| --collector.zipkin.cors.allowed-origins | | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin |
| --collector.zipkin.host-port | | The host:port (e.g. 127.0.0.1:9411 or :9411) of the collector's Zipkin server (disabled by default) |
| --collector.zipkin.keep-alive | true | KeepAlive configures allow Keep-Alive for Zipkin HTTP server (enabled by default) |
| --collector.zipkin.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.zipkin.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.zipkin.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.zipkin.tls.enabled | false | Enable TLS on the server |
| --collector.zipkin.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.zipkin.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.zipkin.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --config-file | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. |
| --downsampling.hashsalt | | Salt used when hashing trace id for downsampling. |
| --downsampling.ratio | 1 | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. |
| --help | false | help for jaeger-all-in-one |
| --http-server.host-port | :5778 | host:port of the http server (e.g. for /sampling point and /baggageRestrictions endpoint) |
| --log-level | info | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap |
| --metrics-backend | prometheus | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) |
| --metrics-http-route | /metrics | Defines the route of HTTP endpoint for metrics backends that support scraping |
| --multi-tenancy.enabled | false | Enable tenancy header when receiving or querying |
| --multi-tenancy.header | x-tenant | HTTP header carrying tenant |
| --multi-tenancy.tenants | | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) |
| --processor.jaeger-binary.server-host-port | :6832 | host:port for the UDP server |
| --processor.jaeger-binary.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.jaeger-binary.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.jaeger-binary.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.jaeger-binary.workers | 10 | how many workers the processor should run |
| --processor.jaeger-compact.server-host-port | :6831 | host:port for the UDP server |
| --processor.jaeger-compact.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.jaeger-compact.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.jaeger-compact.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.jaeger-compact.workers | 10 | how many workers the processor should run |
| --processor.zipkin-compact.server-host-port | :5775 | host:port for the UDP server |
| --processor.zipkin-compact.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.zipkin-compact.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.zipkin-compact.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.zipkin-compact.workers | 10 | how many workers the processor should run |
| --query.additional-headers | [] | Additional HTTP response headers. Can be specified multiple times. Format: "Key: Value" |
| --query.base-path | / | The base path for all HTTP routes, e.g. /jaeger; useful when running behind a reverse proxy. See https://github.com/jaegertracing/jaeger/blob/main/examples/reverse-proxy/README.md |
| --query.bearer-token-propagation | false | Allow propagation of bearer token to be used by storage plugins |
| --query.enable-tracing | false | Enables emitting jaeger-query traces |
| --query.grpc-server.host-port | :16685 | The host:port (e.g. 127.0.0.1:14250 or :14250) of the query's gRPC server |
| --query.grpc.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --query.grpc.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --query.grpc.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --query.grpc.tls.enabled | false | Enable TLS on the server |
| --query.grpc.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --query.grpc.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.grpc.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.http-server.host-port | :16686 | The host:port (e.g. 127.0.0.1:14268 or :14268) of the query's HTTP server |
| --query.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --query.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --query.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --query.http.tls.enabled | false | Enable TLS on the server |
| --query.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --query.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.log-static-assets-access | false | Log when static assets are accessed (for debugging) |
| --query.max-clock-skew-adjustment | 0s | The maximum delta by which span timestamps may be adjusted in the UI due to clock skew; set to 0s to disable clock skew adjustments |
| --query.static-files | | The directory path override for the static assets for the UI |
| --query.ui-config | | The path to the UI configuration file in JSON format |
| --reporter.grpc.discovery.min-peers | 3 | Max number of collectors to which the agent will try to connect at any given time |
| --reporter.grpc.host-port | | Comma-separated string representing host:port of a static list of collectors to connect to directly |
| --reporter.grpc.retry.max | 3 | Sets the maximum number of retries for a call |
| --reporter.grpc.tls.ca | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --reporter.grpc.tls.cert | | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --reporter.grpc.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --reporter.grpc.tls.key | | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --reporter.grpc.tls.server-name | | Override the TLS server name we expect in the certificate of the remote server(s) |
| --reporter.grpc.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --reporter.type | grpc | Reporter type to use e.g. grpc |
| --sampling.strategies-file | | The path for the sampling strategies file in JSON format. See sampling documentation to see format of the file |
| --sampling.strategies-reload-interval | 0s | Reload interval to check and reload sampling strategies file. Zero value means no reloading |
| --sampling.strategies.bugfix-5270 | false | Include default operation level strategies for Ratesampling type service level strategy. Cf. https://github.com/jaegertracing/jaeger/issues/5270 |
| --span-storage.type | | (deprecated) please use SPAN_STORAGE_TYPE environment variable. Run this binary with the 'env' command for help. |
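
As a quick illustration of how the options above compose, the sketch below starts the all-in-one binary via Docker with the Zipkin endpoint enabled, a sampling strategies file, and a more verbose log level. The flag names come from the table above; the image name, published ports, and file path are placeholder assumptions rather than recommended values.

```
docker run --rm \
  -p 16686:16686 \
  -p 9411:9411 \
  -v $(pwd)/strategies.json:/etc/jaeger/strategies.json \
  jaegertracing/all-in-one \
  --collector.zipkin.host-port=:9411 \
  --sampling.strategies-file=/etc/jaeger/strategies.json \
  --log-level=debug
```

Port 16686 matches the default of --query.http-server.host-port, while setting --collector.zipkin.host-port to :9411 turns on the Zipkin endpoint that is otherwise disabled by default.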

| Flag | Default Value | Description |
|:-|:-|:-|
| --admin.http.host-port | :14269 | The host:port (e.g. 127.0.0.1:14269 or :14269) for the admin server, including health check, /metrics, etc. |
| --admin.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --admin.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --admin.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --admin.http.tls.enabled | false | Enable TLS on the server |
| --admin.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --admin.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --admin.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.enable-span-size-metrics | false | Enables metrics based on processed span size, which are more expensive to calculate. |
| --collector.grpc-server.host-port | :14250 | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server |
| --collector.grpc-server.max-connection-age | 0s | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.grpc-server.max-connection-age-grace | 0s | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.grpc-server.max-message-size | 4194304 | The maximum receivable message size for the collector's gRPC server |
| --collector.grpc.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.grpc.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.grpc.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.grpc.tls.enabled | false | Enable TLS on the server |
| --collector.grpc.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.grpc.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.grpc.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.http-server.host-port | :14268 | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server |
| --collector.http-server.idle-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.http-server.read-header-timeout | 2s | See https://pkg.go.dev/net/http#Server |
| --collector.http-server.read-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.http.tls.enabled | false | Enable TLS on the server |
| --collector.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.num-workers | 50 | The number of workers pulling items from the queue |
| --collector.otlp.enabled | true | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports |
| --collector.otlp.grpc.host-port | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server |
| --collector.otlp.grpc.max-connection-age | 0s | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.otlp.grpc.max-connection-age-grace | 0s | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.otlp.grpc.max-message-size | 4194304 | The maximum receivable message size for the collector's gRPC server |
| --collector.otlp.grpc.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.otlp.grpc.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.otlp.grpc.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.otlp.grpc.tls.enabled | false | Enable TLS on the server |
| --collector.otlp.grpc.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.otlp.grpc.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.grpc.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.grpc.tls.reload-interval | 0s | The duration after which the certificate will be reloaded (0s means will not be reloaded) |
| --collector.otlp.http.cors.allowed-headers | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers |
| --collector.otlp.http.cors.allowed-origins | | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin |
| --collector.otlp.http.host-port | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server |
| --collector.otlp.http.idle-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-header-timeout | 2s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.otlp.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.otlp.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.otlp.http.tls.enabled | false | Enable TLS on the server |
| --collector.otlp.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.otlp.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.reload-interval | 0s | The duration after which the certificate will be reloaded (0s means will not be reloaded) |
| --collector.queue-size | 2000 | The queue size of the collector |
| --collector.queue-size-memory | 0 | (experimental) The max memory size in MiB to use for the dynamic queue. |
| --collector.tags | | One or more tags to be added to the Process tags of all spans passing through this collector. Ex: key1=value1,key2=${envVar:defaultValue} |
| --collector.zipkin.cors.allowed-headers | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers |
| --collector.zipkin.cors.allowed-origins | | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin |
| --collector.zipkin.host-port | | The host:port (e.g. 127.0.0.1:9411 or :9411) of the collector's Zipkin server (disabled by default) |
| --collector.zipkin.keep-alive | true | KeepAlive configures allow Keep-Alive for Zipkin HTTP server (enabled by default) |
| --collector.zipkin.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.zipkin.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.zipkin.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.zipkin.tls.enabled | false | Enable TLS on the server |
| --collector.zipkin.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.zipkin.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.zipkin.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --config-file | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. |
| --downsampling.hashsalt | | Salt used when hashing trace id for downsampling. |
| --downsampling.ratio | 1 | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. |
| --es-archive.adaptive-sampling.lookback | 72h0m0s | How far back to look for the latest adaptive sampling probabilities |
| --es-archive.bulk.actions | 1000 | The number of requests that can be enqueued before the bulk processor decides to commit |
| --es-archive.bulk.flush-interval | 200ms | A time.Duration after which bulk requests are committed, regardless of other thresholds. Set to zero to disable. By default, this is disabled. |
| --es-archive.bulk.size | 5000000 | The number of bytes that the bulk requests can take up before the bulk processor decides to commit |
| --es-archive.bulk.workers | 1 | The number of workers that are able to receive bulk requests and eventually commit them to Elasticsearch |
| --es-archive.create-index-templates | true | Create index templates at application startup. Set to false when templates are installed manually. |
| --es-archive.enabled | false | Enable extra storage |
| --es-archive.index-date-separator | - | Optional date separator of Jaeger indices. For example "." creates "jaeger-span-2020.11.20". |
| --es-archive.index-prefix | | Optional prefix of Jaeger indices. For example "production" creates "production-jaeger-". |
| --es-archive.index-rollover-frequency-adaptive-sampling | day | Rotates jaeger-sampling indices over the given period. For example "day" creates "jaeger-sampling-yyyy-MM-dd" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover |
| --es-archive.index-rollover-frequency-services | day | Rotates jaeger-service indices over the given period. For example "day" creates "jaeger-service-yyyy-MM-dd" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover |
| --es-archive.index-rollover-frequency-spans | day | Rotates jaeger-span indices over the given period. For example "day" creates "jaeger-span-yyyy-MM-dd" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover |
| --es-archive.log-level | error | The Elasticsearch client log-level. Valid levels: [debug, info, error] |
| --es-archive.max-doc-count | 10000 | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. |
| --es-archive.num-replicas | 1 | The number of replicas per index in Elasticsearch |
| --es-archive.num-shards | 5 | The number of shards per index in Elasticsearch |
| --es-archive.password | | The password required by Elasticsearch |
| --es-archive.password-file | | Path to a file containing password. This file is watched for changes. |
| --es-archive.prioirity-dependencies-template | 0 | Priority of jaeger-dependecies index template (ESv8 only) |
| --es-archive.prioirity-service-template | 0 | Priority of jaeger-service index template (ESv8 only) |
| --es-archive.prioirity-span-template | 0 | Priority of jaeger-span index template (ESv8 only) |
| --es-archive.remote-read-clusters | | Comma-separated list of Elasticsearch remote cluster names for cross-cluster querying. See Elasticsearch remote clusters and cross-cluster query api. |
| --es-archive.send-get-body-as | | HTTP verb for requests that contain a body [GET, POST]. |
| --es-archive.server-urls | http://127.0.0.1:9200 | The comma-separated list of Elasticsearch servers, must be full url i.e. http://localhost:9200 |
| --es-archive.sniffer | false | The sniffer config for Elasticsearch; client uses sniffing process to find all nodes automatically, disable if not required |
| --es-archive.sniffer-tls-enabled | false | Option to enable TLS when sniffing an Elasticsearch Cluster; client uses sniffing process to find all nodes automatically, disabled by default |
| --es-archive.tags-as-fields.all | false | (experimental) Store all span and process tags as object fields. If true .tags-as-fields.config-file and .tags-as-fields.include is ignored. Binary tags are always stored as nested objects. |
| --es-archive.tags-as-fields.config-file | | (experimental) Optional path to a file containing tag keys which will be stored as object fields. Each key should be on a separate line. Merged with .tags-as-fields.include |
| --es-archive.tags-as-fields.dot-replacement | @ | (experimental) The character used to replace dots (".") in tag keys stored as object fields. |
| --es-archive.tags-as-fields.include | | (experimental) Comma delimited list of tag keys which will be stored as object fields. Merged with the contents of .tags-as-fields.config-file |
| --es-archive.timeout | 0s | Timeout used for queries. A Timeout of zero means no timeout |
| --es-archive.tls.ca | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --es-archive.tls.cert | | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --es-archive.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --es-archive.tls.key | | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --es-archive.tls.server-name | | Override the TLS server name we expect in the certificate of the remote server(s) |
| --es-archive.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --es-archive.token-file | | Path to a file containing bearer token. This flag also loads CA if it is specified. |
| --es-archive.use-aliases | false | Use read and write aliases for indices. Use this option with Elasticsearch rollover API. It requires an external component to create aliases before startup and then performing its management. Note that es.max-span-age will influence trace search window start times. |
| --es-archive.use-ilm | false | (experimental) Option to enable ILM for jaeger span & service indices. Use this option with es-archive.use-aliases. It requires an external component to create aliases before startup and then performing its management. ILM policy must be manually created in ES before startup. Supported only for elasticsearch version 7+. |
| --es-archive.username | | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. |
| --es-archive.version | 0 | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. |
| --es.adaptive-sampling.lookback | 72h0m0s | How far back to look for the latest adaptive sampling probabilities |
| --es.bulk.actions | 1000 | The number of requests that can be enqueued before the bulk processor decides to commit |
| --es.bulk.flush-interval | 200ms | A time.Duration after which bulk requests are committed, regardless of other thresholds. Set to zero to disable. By default, this is disabled. |
| --es.bulk.size | 5000000 | The number of bytes that the bulk requests can take up before the bulk processor decides to commit |
| --es.bulk.workers | 1 | The number of workers that are able to receive bulk requests and eventually commit them to Elasticsearch |
| --es.create-index-templates | true | Create index templates at application startup. Set to false when templates are installed manually. |
| --es.index-date-separator | - | Optional date separator of Jaeger indices. For example "." creates "jaeger-span-2020.11.20". |
| --es.index-prefix | | Optional prefix of Jaeger indices. For example "production" creates "production-jaeger-". |
| --es.index-rollover-frequency-adaptive-sampling | day | Rotates jaeger-sampling indices over the given period. For example "day" creates "jaeger-sampling-yyyy-MM-dd" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover |
| --es.index-rollover-frequency-services | day | Rotates jaeger-service indices over the given period. For example "day" creates "jaeger-service-yyyy-MM-dd" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover |
| --es.index-rollover-frequency-spans | day | Rotates jaeger-span indices over the given period. For example "day" creates "jaeger-span-yyyy-MM-dd" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover |
| --es.log-level | error | The Elasticsearch client log-level. Valid levels: [debug, info, error] |
| --es.max-doc-count | 10000 | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. |
| --es.max-span-age | 72h0m0s | The maximum lookback for spans in Elasticsearch |
| --es.num-replicas | 1 | The number of replicas per index in Elasticsearch |
| --es.num-shards | 5 | The number of shards per index in Elasticsearch |
| --es.password | | The password required by Elasticsearch |
| --es.password-file | | Path to a file containing password. This file is watched for changes. |
| --es.prioirity-dependencies-template | 0 | Priority of jaeger-dependecies index template (ESv8 only) |
| --es.prioirity-service-template | 0 | Priority of jaeger-service index template (ESv8 only) |
| --es.prioirity-span-template | 0 | Priority of jaeger-span index template (ESv8 only) |
| --es.remote-read-clusters | | Comma-separated list of Elasticsearch remote cluster names for cross-cluster querying. See Elasticsearch remote clusters and cross-cluster query api. |
| --es.send-get-body-as | | HTTP verb for requests that contain a body [GET, POST]. |
| --es.server-urls | http://127.0.0.1:9200 | The comma-separated list of Elasticsearch servers, must be full url i.e. http://localhost:9200 |
| --es.sniffer | false | The sniffer config for Elasticsearch; client uses sniffing process to find all nodes automatically, disable if not required |
| --es.sniffer-tls-enabled | false | Option to enable TLS when sniffing an Elasticsearch Cluster; client uses sniffing process to find all nodes automatically, disabled by default |
| --es.tags-as-fields.all | false | (experimental) Store all span and process tags as object fields. If true .tags-as-fields.config-file and .tags-as-fields.include is ignored. Binary tags are always stored as nested objects. |
| --es.tags-as-fields.config-file | | (experimental) Optional path to a file containing tag keys which will be stored as object fields. Each key should be on a separate line. Merged with .tags-as-fields.include |
| --es.tags-as-fields.dot-replacement | @ | (experimental) The character used to replace dots (".") in tag keys stored as object fields. |
| --es.tags-as-fields.include | | (experimental) Comma delimited list of tag keys which will be stored as object fields. Merged with the contents of .tags-as-fields.config-file |
| --es.timeout | 0s | Timeout used for queries. A Timeout of zero means no timeout |
| --es.tls.ca | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --es.tls.cert | | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --es.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --es.tls.key | | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --es.tls.server-name | | Override the TLS server name we expect in the certificate of the remote server(s) |
| --es.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --es.token-file | | Path to a file containing bearer token. This flag also loads CA if it is specified. |
| --es.use-aliases | false | Use read and write aliases for indices. Use this option with Elasticsearch rollover API. It requires an external component to create aliases before startup and then performing its management. Note that es.max-span-age will influence trace search window start times. |
| --es.use-ilm | false | (experimental) Option to enable ILM for jaeger span & service indices. Use this option with es.use-aliases. It requires an external component to create aliases before startup and then performing its management. ILM policy must be manually created in ES before startup. Supported only for elasticsearch version 7+. |
| --es.username | | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. |
| --es.version | 0 | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. |
| --help | false | help for jaeger-all-in-one |
| --http-server.host-port | :5778 | host:port of the http server (e.g. for /sampling point and /baggageRestrictions endpoint) |
| --log-level | info | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap |
| --metrics-backend | prometheus | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) |
| --metrics-http-route | /metrics | Defines the route of HTTP endpoint for metrics backends that support scraping |
| --multi-tenancy.enabled | false | Enable tenancy header when receiving or querying |
| --multi-tenancy.header | x-tenant | HTTP header carrying tenant |
| --multi-tenancy.tenants | | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) |
| --processor.jaeger-binary.server-host-port | :6832 | host:port for the UDP server |
| --processor.jaeger-binary.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.jaeger-binary.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.jaeger-binary.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.jaeger-binary.workers | 10 | how many workers the processor should run |
| --processor.jaeger-compact.server-host-port | :6831 | host:port for the UDP server |
| --processor.jaeger-compact.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.jaeger-compact.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.jaeger-compact.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.jaeger-compact.workers | 10 | how many workers the processor should run |
| --processor.zipkin-compact.server-host-port | :5775 | host:port for the UDP server |
| --processor.zipkin-compact.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.zipkin-compact.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.zipkin-compact.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.zipkin-compact.workers | 10 | how many workers the processor should run |
| --query.additional-headers | [] | Additional HTTP response headers. Can be specified multiple times. Format: "Key: Value" |
| --query.base-path | / | The base path for all HTTP routes, e.g. /jaeger; useful when running behind a reverse proxy. See https://github.com/jaegertracing/jaeger/blob/main/examples/reverse-proxy/README.md |
| --query.bearer-token-propagation | false | Allow propagation of bearer token to be used by storage plugins |
| --query.enable-tracing | false | Enables emitting jaeger-query traces |
| --query.grpc-server.host-port | :16685 | The host:port (e.g. 127.0.0.1:14250 or :14250) of the query's gRPC server |
| --query.grpc.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --query.grpc.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --query.grpc.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --query.grpc.tls.enabled | false | Enable TLS on the server |
| --query.grpc.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --query.grpc.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.grpc.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.http-server.host-port | :16686 | The host:port (e.g. 127.0.0.1:14268 or :14268) of the query's HTTP server |
| --query.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --query.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| | --query.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --query.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --query.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --query.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --query.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --query.log-static-assets-access | false | | Log when static assets are accessed (for debugging) | Log when static assets are accessed (for debugging) | |" }, { "data": "| 0s | | The maximum delta by which span timestamps may be adjusted in the UI due to clock skew; set to 0s to disable clock skew adjustments | The maximum delta by which span timestamps may be adjusted in the UI due to clock skew; set to 0s to disable clock skew adjustments | | --query.static-files | nan | | The directory path override for the static assets for the UI | The directory path override for the static assets for the UI | | --query.ui-config | nan | | The path to the UI configuration file in JSON format | The path to the UI configuration file in JSON format | | --reporter.grpc.discovery.min-peers | 3 | | Max number of collectors to which the agent will try to connect at any given time | Max number of collectors to which the agent will try to connect at any given time | | --reporter.grpc.host-port | nan | | Comma-separated string representing host:port of a static list of collectors to connect to directly | Comma-separated string representing host:port of a static list of collectors to connect to directly | | --reporter.grpc.retry.max | 3 | | Sets the maximum number of retries for a call | Sets the maximum number of retries for a call | | --reporter.grpc.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --reporter.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --reporter.grpc.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --reporter.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --reporter.grpc.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --reporter.grpc.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification 
| | --reporter.type | grpc | | Reporter type to use e.g. grpc | Reporter type to use e.g. grpc | | --sampling.strategies-file | nan | | The path for the sampling strategies file in JSON format. See sampling documentation to see format of the file | The path for the sampling strategies file in JSON format. See sampling documentation to see format of the file | | --sampling.strategies-reload-interval | 0s | | Reload interval to check and reload sampling strategies file. Zero value means no reloading | Reload interval to check and reload sampling strategies file. Zero value means no reloading | | --sampling.strategies.bugfix-5270 | false | | Include default operation level strategies for Ratesampling type service level strategy. Cf. https://github.com/jaegertracing/jaeger/issues/5270 | Include default operation level strategies for Ratesampling type service level strategy. Cf. https://github.com/jaegertracing/jaeger/issues/5270 | | --span-storage.type | nan | | (deprecated) please use SPANSTORAGETYPE environment variable. Run this binary with the 'env' command for help. | (deprecated) please use SPANSTORAGETYPE environment variable. Run this binary with the 'env' command for help. | | Flag | Default Value | |:-|:-| | --admin.http.host-port | :14269 | | The host:port (e.g. 127.0.0.1:14269 or :14269) for the admin server, including health check, /metrics, etc. | The host:port (e.g." }, { "data": "or :14269) for the admin server, including health check, /metrics, etc. | | --admin.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --admin.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --admin.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --admin.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --admin.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --admin.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --admin.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.enable-span-size-metrics | false | | Enables metrics based on processed span size, which are more expensive to calculate. | Enables metrics based on processed span size, which are more expensive to calculate. | | --collector.grpc-server.host-port | :14250 | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | | --collector.grpc-server.max-connection-age | 0s | | The maximum amount of time a connection may exist. 
Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-connection-age-grace | 0s | | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-message-size | 4194304 | | The maximum receivable message size for the collector's gRPC server | The maximum receivable message size for the collector's gRPC server | | --collector.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.grpc.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.grpc.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.grpc.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | |" }, { "data": "| nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.grpc.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.grpc.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.http-server.host-port | :14268 | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | | --collector.http-server.idle-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http-server.read-header-timeout | 2s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http-server.read-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). 
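The flags above belong to the jaeger-all-in-one variant that includes the `--es.*` options, i.e. an Elasticsearch-backed deployment. As a minimal sketch of how a few of these flags combine on the command line, the hypothetical invocation below selects the storage backend through the `SPAN_STORAGE_TYPE` environment variable (referenced by the `--span-storage.type` entry) and enables TLS towards the remote Elasticsearch servers. The file paths, username, and log level are placeholder values, not defaults.

```sh
# Hypothetical example only: paths, username, and values are placeholders.
SPAN_STORAGE_TYPE=elasticsearch ./jaeger-all-in-one \
  --es.tls.enabled=true \
  --es.tls.ca=/etc/jaeger/certs/es-ca.pem \
  --es.username=jaeger \
  --es.use-aliases=true \
  --sampling.strategies-file=/etc/jaeger/sampling.json \
  --query.base-path=/jaeger \
  --log-level=debug
```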
| Flag | Default Value | Description |
|:-|:-|:-|
| --admin.http.host-port | :14269 | The host:port (e.g. 127.0.0.1:14269 or :14269) for the admin server, including health check, /metrics, etc. |
| --admin.http.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --admin.http.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --admin.http.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --admin.http.tls.enabled | false | Enable TLS on the server |
| --admin.http.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --admin.http.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --admin.http.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.enable-span-size-metrics | false | Enables metrics based on processed span size, which are more expensive to calculate. |
| --collector.grpc-server.host-port | :14250 | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server |
| --collector.grpc-server.max-connection-age | 0s | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.grpc-server.max-connection-age-grace | 0s | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.grpc-server.max-message-size | 4194304 | The maximum receivable message size for the collector's gRPC server |
| --collector.grpc.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.grpc.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.grpc.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.grpc.tls.enabled | false | Enable TLS on the server |
| --collector.grpc.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.grpc.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.grpc.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.http-server.host-port | :14268 | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server |
| --collector.http-server.idle-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.http-server.read-header-timeout | 2s | See https://pkg.go.dev/net/http#Server |
| --collector.http-server.read-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.http.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.http.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.http.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.http.tls.enabled | false | Enable TLS on the server |
| --collector.http.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.http.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.http.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.num-workers | 50 | The number of workers pulling items from the queue |
| --collector.otlp.enabled | true | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports |
| --collector.otlp.grpc.host-port |  | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server |
| --collector.otlp.grpc.max-connection-age | 0s | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.otlp.grpc.max-connection-age-grace | 0s | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.otlp.grpc.max-message-size | 4194304 | The maximum receivable message size for the collector's gRPC server |
| --collector.otlp.grpc.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.otlp.grpc.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.otlp.grpc.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.otlp.grpc.tls.enabled | false | Enable TLS on the server |
| --collector.otlp.grpc.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.otlp.grpc.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.grpc.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.grpc.tls.reload-interval | 0s | The duration after which the certificate will be reloaded (0s means will not be reloaded) |
| --collector.otlp.http.cors.allowed-headers |  | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers |
| --collector.otlp.http.cors.allowed-origins |  | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin |
| --collector.otlp.http.host-port |  | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server |
| --collector.otlp.http.idle-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-header-timeout | 2s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.otlp.http.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.otlp.http.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.otlp.http.tls.enabled | false | Enable TLS on the server |
| --collector.otlp.http.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.otlp.http.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.reload-interval | 0s | The duration after which the certificate will be reloaded (0s means will not be reloaded) |
| --collector.queue-size | 2000 | The queue size of the collector |
| --collector.queue-size-memory | 0 | (experimental) The max memory size in MiB to use for the dynamic queue. |
| --collector.tags |  | One or more tags to be added to the Process tags of all spans passing through this collector. Ex: key1=value1,key2=${envVar:defaultValue} |
| --collector.zipkin.cors.allowed-headers |  | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers |
| --collector.zipkin.cors.allowed-origins |  | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin |
| --collector.zipkin.host-port |  | The host:port (e.g. 127.0.0.1:9411 or :9411) of the collector's Zipkin server (disabled by default) |
| --collector.zipkin.keep-alive | true | KeepAlive configures allow Keep-Alive for Zipkin HTTP server (enabled by default) |
| --collector.zipkin.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.zipkin.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.zipkin.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.zipkin.tls.enabled | false | Enable TLS on the server |
| --collector.zipkin.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.zipkin.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.zipkin.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --config-file |  | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. |
| --downsampling.hashsalt |  | Salt used when hashing trace id for downsampling. |
| --downsampling.ratio | 1 | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g. ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. |
| --help | false | help for jaeger-all-in-one |
| --http-server.host-port | :5778 | host:port of the http server (e.g. for /sampling point and /baggageRestrictions endpoint) |
| --log-level | info | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap |
| --memory.max-traces | 0 | The maximum amount of traces to store in memory. The default number of traces is unbounded. |
| --metrics-backend | prometheus | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) |
| --metrics-http-route | /metrics | Defines the route of HTTP endpoint for metrics backends that support scraping |
| --multi-tenancy.enabled | false | Enable tenancy header when receiving or querying |
| --multi-tenancy.header | x-tenant | HTTP header carrying tenant |
| --multi-tenancy.tenants |  | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) |
| --processor.jaeger-binary.server-host-port | :6832 | host:port for the UDP server |
| --processor.jaeger-binary.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.jaeger-binary.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.jaeger-binary.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.jaeger-binary.workers | 10 | how many workers the processor should run |
| --processor.jaeger-compact.server-host-port | :6831 | host:port for the UDP server |
| --processor.jaeger-compact.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.jaeger-compact.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.jaeger-compact.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.jaeger-compact.workers | 10 | how many workers the processor should run |
| --processor.zipkin-compact.server-host-port | :5775 | host:port for the UDP server |
| --processor.zipkin-compact.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.zipkin-compact.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.zipkin-compact.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.zipkin-compact.workers | 10 | how many workers the processor should run |
| --query.additional-headers | [] | Additional HTTP response headers. Can be specified multiple times. Format: "Key: Value" |
| --query.base-path | / | The base path for all HTTP routes, e.g. /jaeger; useful when running behind a reverse proxy. See https://github.com/jaegertracing/jaeger/blob/main/examples/reverse-proxy/README.md |
| --query.bearer-token-propagation | false | Allow propagation of bearer token to be used by storage plugins |
| --query.enable-tracing | false | Enables emitting jaeger-query traces |
| --query.grpc-server.host-port | :16685 | The host:port (e.g. 127.0.0.1:14250 or :14250) of the query's gRPC server |
| --query.grpc.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --query.grpc.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --query.grpc.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --query.grpc.tls.enabled | false | Enable TLS on the server |
| --query.grpc.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --query.grpc.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.grpc.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.http-server.host-port | :16686 | The host:port (e.g. 127.0.0.1:14268 or :14268) of the query's HTTP server |
| --query.http.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --query.http.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --query.http.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --query.http.tls.enabled | false | Enable TLS on the server |
| --query.http.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --query.http.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.http.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.log-static-assets-access | false | Log when static assets are accessed (for debugging) |
| --query.max-clock-skew-adjustment | 0s | The maximum delta by which span timestamps may be adjusted in the UI due to clock skew; set to 0s to disable clock skew adjustments |
| --query.static-files |  | The directory path override for the static assets for the UI |
| --query.ui-config |  | The path to the UI configuration file in JSON format |
| --reporter.grpc.discovery.min-peers | 3 | Max number of collectors to which the agent will try to connect at any given time |
| --reporter.grpc.host-port |  | Comma-separated string representing host:port of a static list of collectors to connect to directly |
| --reporter.grpc.retry.max | 3 | Sets the maximum number of retries for a call |
| --reporter.grpc.tls.ca |  | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --reporter.grpc.tls.cert |  | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --reporter.grpc.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --reporter.grpc.tls.key |  | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --reporter.grpc.tls.server-name |  | Override the TLS server name we expect in the certificate of the remote server(s) |
| --reporter.grpc.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --reporter.type | grpc | Reporter type to use e.g. grpc |
| --sampling.strategies-file |  | The path for the sampling strategies file in JSON format. See sampling documentation to see format of the file |
| --sampling.strategies-reload-interval | 0s | Reload interval to check and reload sampling strategies file. Zero value means no reloading |
| --sampling.strategies.bugfix-5270 | false | Include default operation level strategies for Ratesampling type service level strategy. Cf. https://github.com/jaegertracing/jaeger/issues/5270 |
| --span-storage.type |  | (deprecated) please use SPAN_STORAGE_TYPE environment variable. Run this binary with the 'env' command for help. |
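The table above lists the flags for the variant that stores traces in memory (note `--memory.max-traces`). A minimal, hypothetical invocation combining a few of these flags is sketched below; the trace limit, tenant names, and port are illustrative placeholders rather than recommended values.

```sh
# Hypothetical example only: limits and tenant names are placeholders.
SPAN_STORAGE_TYPE=memory ./jaeger-all-in-one \
  --memory.max-traces=50000 \
  --collector.otlp.enabled=true \
  --multi-tenancy.enabled=true \
  --multi-tenancy.header=x-tenant \
  --multi-tenancy.tenants=acme,globex \
  --query.http-server.host-port=:16686
```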
| Flag | Default Value | Description |
|:-|:-|:-|
| --admin.http.host-port | :14269 | The host:port (e.g. 127.0.0.1:14269 or :14269) for the admin server, including health check, /metrics, etc. |
| --admin.http.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --admin.http.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --admin.http.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --admin.http.tls.enabled | false | Enable TLS on the server |
| --admin.http.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --admin.http.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --admin.http.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --badger.consistency | false | If all writes should be synced immediately to physical disk. This will impact write performance. |
| --badger.directory-key | /go/bin/data/keys | Path to store the keys (indexes), this directory should reside in SSD disk. Set ephemeral to false if you want to define this setting. |
| --badger.directory-value | /go/bin/data/values | Path to store the values (spans). Set ephemeral to false if you want to define this setting. |
| --badger.ephemeral | true | Mark this storage ephemeral, data is stored in tmpfs. |
| --badger.maintenance-interval | 5m0s | How often the maintenance thread for values is ran. Format is time.Duration (https://golang.org/pkg/time/#Duration) |
| --badger.metrics-update-interval | 10s | How often the badger metrics are collected by Jaeger. Format is time.Duration (https://golang.org/pkg/time/#Duration) |
| --badger.read-only | false | Allows to open badger database in read only mode. Multiple instances can open same database in read-only mode. Values still in the write-ahead-log must be replayed before opening. |
| --badger.span-store-ttl | 72h0m0s | How long to store the data. Format is time.Duration (https://golang.org/pkg/time/#Duration) |
| --collector.enable-span-size-metrics | false | Enables metrics based on processed span size, which are more expensive to calculate. |
| --collector.grpc-server.host-port | :14250 | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server |
| --collector.grpc-server.max-connection-age | 0s | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.grpc-server.max-connection-age-grace | 0s | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.grpc-server.max-message-size | 4194304 | The maximum receivable message size for the collector's gRPC server |
| --collector.grpc.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.grpc.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.grpc.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.grpc.tls.enabled | false | Enable TLS on the server |
| --collector.grpc.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.grpc.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.grpc.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.http-server.host-port | :14268 | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server |
| --collector.http-server.idle-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.http-server.read-header-timeout | 2s | See https://pkg.go.dev/net/http#Server |
| --collector.http-server.read-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.http.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.http.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.http.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.http.tls.enabled | false | Enable TLS on the server |
| --collector.http.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.http.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.http.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.num-workers | 50 | The number of workers pulling items from the queue |
| --collector.otlp.enabled | true | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports |
| --collector.otlp.grpc.host-port |  | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server |
| --collector.otlp.grpc.max-connection-age | 0s | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.otlp.grpc.max-connection-age-grace | 0s | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.otlp.grpc.max-message-size | 4194304 | The maximum receivable message size for the collector's gRPC server |
| --collector.otlp.grpc.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.otlp.grpc.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.otlp.grpc.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.otlp.grpc.tls.enabled | false | Enable TLS on the server |
| --collector.otlp.grpc.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.otlp.grpc.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.grpc.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.grpc.tls.reload-interval | 0s | The duration after which the certificate will be reloaded (0s means will not be reloaded) |
| --collector.otlp.http.cors.allowed-headers |  | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers |
| --collector.otlp.http.cors.allowed-origins |  | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin |
| --collector.otlp.http.host-port |  | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server |
| --collector.otlp.http.idle-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-header-timeout | 2s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.otlp.http.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.otlp.http.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.otlp.http.tls.enabled | false | Enable TLS on the server |
| --collector.otlp.http.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.otlp.http.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.reload-interval | 0s | The duration after which the certificate will be reloaded (0s means will not be reloaded) |
| --collector.queue-size | 2000 | The queue size of the collector |
| --collector.queue-size-memory | 0 | (experimental) The max memory size in MiB to use for the dynamic queue. |
| --collector.tags |  | One or more tags to be added to the Process tags of all spans passing through this collector. Ex: key1=value1,key2=${envVar:defaultValue} |
| --collector.zipkin.cors.allowed-headers |  | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers |
| --collector.zipkin.cors.allowed-origins |  | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin |
| --collector.zipkin.host-port |  | The host:port (e.g. 127.0.0.1:9411 or :9411) of the collector's Zipkin server (disabled by default) |
| --collector.zipkin.keep-alive | true | KeepAlive configures allow Keep-Alive for Zipkin HTTP server (enabled by default) |
| --collector.zipkin.tls.cert |  | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.zipkin.tls.cipher-suites |  | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.zipkin.tls.client-ca |  | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.zipkin.tls.enabled | false | Enable TLS on the server |
| --collector.zipkin.tls.key |  | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.zipkin.tls.max-version |  | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.zipkin.tls.min-version |  | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --config-file |  | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. |
| --downsampling.hashsalt |  | Salt used when hashing trace id for downsampling. |
| --downsampling.ratio | 1 | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g. ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. |
| --help | false | help for jaeger-all-in-one |
| --http-server.host-port | :5778 | host:port of the http server (e.g. for /sampling point and /baggageRestrictions endpoint) |
| --log-level | info | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap |
| --metrics-backend | prometheus | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) |
| --metrics-http-route | /metrics | Defines the route of HTTP endpoint for metrics backends that support scraping |
| --multi-tenancy.enabled | false | Enable tenancy header when receiving or querying |
| --multi-tenancy.header | x-tenant | HTTP header carrying tenant |
| --multi-tenancy.tenants |  | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) |
| --processor.jaeger-binary.server-host-port | :6832 | host:port for the UDP server |
| --processor.jaeger-binary.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.jaeger-binary.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.jaeger-binary.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.jaeger-binary.workers | 10 | how many workers the processor should run |
| --processor.jaeger-compact.server-host-port | :6831 | host:port for the UDP server |
| --processor.jaeger-compact.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.jaeger-compact.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.jaeger-compact.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.jaeger-compact.workers | 10 | how many workers the processor should run |
| --processor.zipkin-compact.server-host-port | :5775 | host:port for the UDP server |
| --processor.zipkin-compact.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.zipkin-compact.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.zipkin-compact.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.zipkin-compact.workers | 10 | how many workers the processor should run |
| --query.additional-headers | [] | Additional HTTP response headers. Can be specified multiple times. Format: "Key: Value" |
| --query.base-path | / | The base path for all HTTP routes, e.g. /jaeger; useful when running behind a reverse proxy. See https://github.com/jaegertracing/jaeger/blob/main/examples/reverse-proxy/README.md |
| --query.bearer-token-propagation | false | Allow propagation of bearer token to be used by storage plugins |
| --query.enable-tracing | false | Enables emitting jaeger-query traces |
| --query.grpc-server.host-port | :16685 | The host:port (e.g.
127.0.0.1:14250 or :14250) of the query's gRPC server | | --query.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --query.grpc.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --query.grpc.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --query.grpc.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --query.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --query.grpc.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --query.grpc.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --query.http-server.host-port | :16686 | | The host:port (e.g. 127.0.0.1:14268 or :14268) of the query's HTTP server | The host:port (e.g. 127.0.0.1:14268 or :14268) of the query's HTTP server | | --query.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --query.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). 
| | --query.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --query.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --query.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --query.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --query.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --query.log-static-assets-access | false | | Log when static assets are accessed (for debugging) | Log when static assets are accessed (for debugging) | |" }, { "data": "| 0s | | The maximum delta by which span timestamps may be adjusted in the UI due to clock skew; set to 0s to disable clock skew adjustments | The maximum delta by which span timestamps may be adjusted in the UI due to clock skew; set to 0s to disable clock skew adjustments | | --query.static-files | nan | | The directory path override for the static assets for the UI | The directory path override for the static assets for the UI | | --query.ui-config | nan | | The path to the UI configuration file in JSON format | The path to the UI configuration file in JSON format | | --reporter.grpc.discovery.min-peers | 3 | | Max number of collectors to which the agent will try to connect at any given time | Max number of collectors to which the agent will try to connect at any given time | | --reporter.grpc.host-port | nan | | Comma-separated string representing host:port of a static list of collectors to connect to directly | Comma-separated string representing host:port of a static list of collectors to connect to directly | | --reporter.grpc.retry.max | 3 | | Sets the maximum number of retries for a call | Sets the maximum number of retries for a call | | --reporter.grpc.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --reporter.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --reporter.grpc.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --reporter.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --reporter.grpc.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --reporter.grpc.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification 
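To show how the server-side TLS flags above combine, here is a minimal sketch that enables TLS on the collector's OTLP gRPC endpoint and pins the minimum TLS version; the certificate, key, and CA paths are placeholders, not defaults shipped with Jaeger.

```
# Sketch only: certificate, key, and CA paths are placeholders.
jaeger-all-in-one \
  --collector.otlp.enabled=true \
  --collector.otlp.grpc.tls.enabled=true \
  --collector.otlp.grpc.tls.cert=/etc/jaeger/tls/server.crt \
  --collector.otlp.grpc.tls.key=/etc/jaeger/tls/server.key \
  --collector.otlp.grpc.tls.client-ca=/etc/jaeger/tls/ca.crt \
  --collector.otlp.grpc.tls.min-version=1.2
```

The same pattern applies to the other endpoints (for example `--query.http.tls.*` or `--collector.zipkin.tls.*`), since each server exposes the same set of TLS flags under its own prefix.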
When the gRPC remote-storage backend is selected (`SPAN_STORAGE_TYPE=grpc`), the following storage-specific flags are also available:

| Flag | Default Value | Description |
|:--|:--|:--|
| --grpc-storage-plugin.binary | | (deprecated, will be removed after 2024-03-01) The location of the plugin binary |
| --grpc-storage-plugin.configuration-file | | (deprecated, will be removed after 2024-03-01) A path pointing to the plugin's configuration file, made available to the plugin with the --config arg |
| --grpc-storage-plugin.log-level | warn | Set the log level of the plugin's logger |
| --grpc-storage.connection-timeout | 5s | The remote storage gRPC server connection timeout |
| --grpc-storage.server | | The remote storage gRPC server address as host:port |
| --grpc-storage.tls.ca | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --grpc-storage.tls.cert | | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --grpc-storage.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --grpc-storage.tls.key | | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --grpc-storage.tls.server-name | | Override the TLS server name we expect in the certificate of the remote server(s) |
| --grpc-storage.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
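As an illustration of these storage-specific flags, the sketch below points jaeger-all-in-one at a hypothetical remote storage service over TLS; the address `storage.example.com:17271` and the CA path are placeholder values for this example.

```
# Sketch only: remote storage address and CA path are placeholders.
SPAN_STORAGE_TYPE=grpc jaeger-all-in-one \
  --grpc-storage.server=storage.example.com:17271 \
  --grpc-storage.connection-timeout=10s \
  --grpc-storage.tls.enabled=true \
  --grpc-storage.tls.ca=/etc/jaeger/tls/storage-ca.pem
```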
When the in-memory storage backend is selected (`SPAN_STORAGE_TYPE=memory`), the following storage-specific flag is also available:

| Flag | Default Value | Description |
|:--|:--|:--|
| --memory.max-traces | 0 | The maximum number of traces to store in memory. The default (0) is unbounded. |
(If not supplied, tenants are not restricted) | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) | | --processor.jaeger-binary.server-host-port | :6832 | | host:port for the UDP server | host:port for the UDP server | | --processor.jaeger-binary.server-max-packet-size | 65000 | | max packet size for the UDP server | max packet size for the UDP server | | --processor.jaeger-binary.server-queue-size | 1000 | | length of the queue for the UDP server | length of the queue for the UDP server | | --processor.jaeger-binary.server-socket-buffer-size | 0 | | socket buffer size for UDP packets in bytes | socket buffer size for UDP packets in bytes | | --processor.jaeger-binary.workers | 10 | | how many workers the processor should run | how many workers the processor should run | | --processor.jaeger-compact.server-host-port | :6831 | | host:port for the UDP server | host:port for the UDP server | | --processor.jaeger-compact.server-max-packet-size | 65000 | | max packet size for the UDP server | max packet size for the UDP server | | --processor.jaeger-compact.server-queue-size | 1000 | | length of the queue for the UDP server | length of the queue for the UDP server | | --processor.jaeger-compact.server-socket-buffer-size | 0 | | socket buffer size for UDP packets in bytes | socket buffer size for UDP packets in bytes | | --processor.jaeger-compact.workers | 10 | | how many workers the processor should run | how many workers the processor should run | | --processor.zipkin-compact.server-host-port | :5775 | | host:port for the UDP server | host:port for the UDP server | | --processor.zipkin-compact.server-max-packet-size | 65000 | | max packet size for the UDP server | max packet size for the UDP server | | --processor.zipkin-compact.server-queue-size | 1000 | | length of the queue for the UDP server | length of the queue for the UDP server | | --processor.zipkin-compact.server-socket-buffer-size | 0 | | socket buffer size for UDP packets in bytes | socket buffer size for UDP packets in bytes | | --processor.zipkin-compact.workers | 10 | | how many workers the processor should run | how many workers the processor should run | | --prometheus.connect-timeout | 30s | | The period to wait for a connection to Prometheus when executing queries. | The period to wait for a connection to Prometheus when executing queries. | | --prometheus.query.duration-unit | ms | | The units used for the \"latency\" histogram. It can be either \"ms\" or \"s\" and should be consistent with the histogram unit value set in the spanmetrics connector" }, { "data": "https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/connector/spanmetricsconnector#configurations). This also helps jaeger-query determine the metric name when querying for \"latency\" metrics. | The units used for the \"latency\" histogram. It can be either \"ms\" or \"s\" and should be consistent with the histogram unit value set in the spanmetrics connector (see: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/connector/spanmetricsconnector#configurations). This also helps jaeger-query determine the metric name when querying for \"latency\" metrics. | | --prometheus.query.namespace | nan | | The metric namespace that is prefixed to the metric name. A '.' separator will be added between the namespace and the metric name. | The metric namespace that is prefixed to the metric name. A '.' 
separator will be added between the namespace and the metric name. | | --prometheus.query.normalize-calls | false | | Whether to normalize the \"calls\" metric name according to https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/translator/prometheus/README.md. For example: \"calls\" (not normalized) -> \"callstotal\" (normalized), | Whether to normalize the \"calls\" metric name according to https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/translator/prometheus/README.md. For example: \"calls\" (not normalized) -> \"callstotal\" (normalized), | | --prometheus.query.normalize-duration | false | | Whether to normalize the \"duration\" metric name according to https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/translator/prometheus/README.md. For example: \"durationbucket\" (not normalized) -> \"durationmillisecondsbucket (normalized)\" | Whether to normalize the \"duration\" metric name according to https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/translator/prometheus/README.md. For example: \"durationbucket\" (not normalized) -> \"durationmillisecondsbucket (normalized)\" | | --prometheus.query.support-spanmetrics-connector | true | | (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) Controls whether the metrics queries should match the OpenTelemetry Collector's spanmetrics connector naming (when true) or spanmetrics processor naming (when false). | (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) Controls whether the metrics queries should match the OpenTelemetry Collector's spanmetrics connector naming (when true) or spanmetrics processor naming (when false). | | --prometheus.server-url | http://localhost:9090 | | The Prometheus server's URL, must include the protocol scheme e.g. http://localhost:9090 | The Prometheus server's URL, must include the protocol scheme e.g. http://localhost:9090 | | --prometheus.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --prometheus.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --prometheus.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --prometheus.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --prometheus.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --prometheus.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --prometheus.token-file | nan | | The path to a file containing the bearer token which will be included when executing queries against the Prometheus API. 
| The path to a file containing the bearer token which will be included when executing queries against the Prometheus API. | | --prometheus.token-override-from-context | true | | Whether the bearer token should be overridden from context (incoming request) | Whether the bearer token should be overridden from context (incoming request) | |" }, { "data": "| [] | | Additional HTTP response headers. Can be specified multiple times. Format: \"Key: Value\" | Additional HTTP response headers. Can be specified multiple times. Format: \"Key: Value\" | | --query.base-path | / | | The base path for all HTTP routes, e.g. /jaeger; useful when running behind a reverse proxy. See https://github.com/jaegertracing/jaeger/blob/main/examples/reverse-proxy/README.md | The base path for all HTTP routes, e.g. /jaeger; useful when running behind a reverse proxy. See https://github.com/jaegertracing/jaeger/blob/main/examples/reverse-proxy/README.md | | --query.bearer-token-propagation | false | | Allow propagation of bearer token to be used by storage plugins | Allow propagation of bearer token to be used by storage plugins | | --query.enable-tracing | false | | Enables emitting jaeger-query traces | Enables emitting jaeger-query traces | | --query.grpc-server.host-port | :16685 | | The host:port (e.g. 127.0.0.1:14250 or :14250) of the query's gRPC server | The host:port (e.g. 127.0.0.1:14250 or :14250) of the query's gRPC server | | --query.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --query.grpc.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --query.grpc.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --query.grpc.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --query.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --query.grpc.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --query.grpc.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --query.http-server.host-port | :16686 | | The host:port (e.g. 127.0.0.1:14268 or :14268) of the query's HTTP server | The host:port (e.g. 127.0.0.1:14268 or :14268) of the query's HTTP server | | --query.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --query.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). 
| Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --query.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --query.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --query.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --query.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2," }, { "data": "| Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --query.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --query.log-static-assets-access | false | | Log when static assets are accessed (for debugging) | Log when static assets are accessed (for debugging) | | --query.max-clock-skew-adjustment | 0s | | The maximum delta by which span timestamps may be adjusted in the UI due to clock skew; set to 0s to disable clock skew adjustments | The maximum delta by which span timestamps may be adjusted in the UI due to clock skew; set to 0s to disable clock skew adjustments | | --query.static-files | nan | | The directory path override for the static assets for the UI | The directory path override for the static assets for the UI | | --query.ui-config | nan | | The path to the UI configuration file in JSON format | The path to the UI configuration file in JSON format | | --reporter.grpc.discovery.min-peers | 3 | | Max number of collectors to which the agent will try to connect at any given time | Max number of collectors to which the agent will try to connect at any given time | | --reporter.grpc.host-port | nan | | Comma-separated string representing host:port of a static list of collectors to connect to directly | Comma-separated string representing host:port of a static list of collectors to connect to directly | | --reporter.grpc.retry.max | 3 | | Sets the maximum number of retries for a call | Sets the maximum number of retries for a call | | --reporter.grpc.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --reporter.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --reporter.grpc.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --reporter.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --reporter.grpc.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | 
| --reporter.grpc.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --reporter.type | grpc | Reporter type to use e.g. grpc |
| --sampling.strategies-file | nan | The path for the sampling strategies file in JSON format. See sampling documentation to see format of the file |
| --sampling.strategies-reload-interval | 0s | Reload interval to check and reload sampling strategies file. Zero value means no reloading |
| --sampling.strategies.bugfix-5270 | false | Include default operation level strategies for Ratesampling type service level strategy. Cf. https://github.com/jaegertracing/jaeger/issues/5270 |
| --span-storage.type | nan | (deprecated) please use SPAN_STORAGE_TYPE environment variable. Run this binary with the 'env' command for help. |
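As a quick illustration of how the sampling flags above fit together, the sketch below writes a strategies file and points jaeger-all-in-one at it. The file path, service names, and sampling parameters are illustrative assumptions, not defaults; the JSON layout follows the sampling documentation referenced by --sampling.strategies-file.

```
cat > /tmp/strategies.json <<'EOF'
{
  "default_strategy": { "type": "probabilistic", "param": 0.1 },
  "service_strategies": [
    { "service": "frontend", "type": "probabilistic", "param": 0.5 },
    { "service": "checkout", "type": "ratelimiting", "param": 100 }
  ]
}
EOF

jaeger-all-in-one \
  --sampling.strategies-file=/tmp/strategies.json \
  --sampling.strategies-reload-interval=30s \
  --memory.max-traces=50000
```

With --sampling.strategies-reload-interval set, edits to the file are picked up periodically without restarting the process.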
(deprecated) Jaeger agent is a daemon program that runs on every host and receives tracing data submitted by Jaeger client libraries.

| Flag | Default Value | Description |
|:--|:--|:--|
| --admin.http.host-port | :14271 | The host:port (e.g. 127.0.0.1:14271 or :14271) for the admin server, including health check, /metrics, etc. |
| --admin.http.tls.cert | nan | Path to a TLS Certificate file, used to identify this server to clients |
| --admin.http.tls.cipher-suites | nan | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --admin.http.tls.client-ca | nan | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --admin.http.tls.enabled | false | Enable TLS on the server |
| --admin.http.tls.key | nan | Path to a TLS Private Key file, used to identify this server to clients |
| --admin.http.tls.max-version | nan | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --admin.http.tls.min-version | nan | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --agent.tags | nan | One or more tags to be added to the Process tags of all spans passing through this agent. Ex: key1=value1,key2=${envVar:defaultValue} |
| --config-file | nan | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. |
| --help | false | help for jaeger-agent |
| --http-server.host-port | :5778 | host:port of the http server (e.g. for /sampling point and /baggageRestrictions endpoint) |
| --log-level | info | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap |
| --metrics-backend | prometheus | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) |
| --metrics-http-route | /metrics | Defines the route of HTTP endpoint for metrics backends that support scraping |
| --processor.jaeger-binary.server-host-port | :6832 | host:port for the UDP server |
| --processor.jaeger-binary.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.jaeger-binary.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.jaeger-binary.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.jaeger-binary.workers | 10 | how many workers the processor should run |
| --processor.jaeger-compact.server-host-port | :6831 | host:port for the UDP server |
| --processor.jaeger-compact.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.jaeger-compact.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.jaeger-compact.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.jaeger-compact.workers | 10 | how many workers the processor should run |
| --processor.zipkin-compact.server-host-port | :5775 | host:port for the UDP server |
| --processor.zipkin-compact.server-max-packet-size | 65000 | max packet size for the UDP server |
| --processor.zipkin-compact.server-queue-size | 1000 | length of the queue for the UDP server |
| --processor.zipkin-compact.server-socket-buffer-size | 0 | socket buffer size for UDP packets in bytes |
| --processor.zipkin-compact.workers | 10 | how many workers the processor should run |
| --reporter.grpc.discovery.min-peers | 3 | Max number of collectors to which the agent will try to connect at any given time |
| --reporter.grpc.host-port | nan | Comma-separated string representing host:port of a static list of collectors to connect to directly |
| --reporter.grpc.retry.max | 3 | Sets the maximum number of retries for a call |
| --reporter.grpc.tls.ca | nan | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --reporter.grpc.tls.cert | nan | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --reporter.grpc.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --reporter.grpc.tls.key | nan | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --reporter.grpc.tls.server-name | nan | Override the TLS server name we expect in the certificate of the remote server(s) |
| --reporter.grpc.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --reporter.type | grpc | Reporter type to use e.g. grpc |
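To show how the reporter and tag flags above are typically combined, here is a sketch of an agent invocation; the collector hostnames, CA path, and tag values are assumptions, not defaults.

```
# Report to a static list of two collectors over TLS and stamp every span with tags.
# Quote --agent.tags so the shell does not expand ${...}; the agent resolves
# ${envVar:defaultValue} itself, as described in the flag help above.
jaeger-agent \
  --reporter.grpc.host-port=collector-1.example.com:14250,collector-2.example.com:14250 \
  --reporter.grpc.tls.enabled=true \
  --reporter.grpc.tls.ca=/etc/jaeger/ca.pem \
  --agent.tags='region=${REGION:us-east-1},cluster=prod'
```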
Jaeger collector receives traces from Jaeger agents and runs them through a processing pipeline. jaeger-collector can be used with several storage backends and with several sampling types.
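Before the flag reference, a minimal sketch of a collector invocation against Cassandra; the server names, keyspace, and strategies path are illustrative assumptions, and the storage backend itself is selected via the SPAN_STORAGE_TYPE environment variable rather than a flag.

```
# Store spans in Cassandra, accept Zipkin traffic (OTLP is on by default),
# and serve static sampling strategies from a file.
SPAN_STORAGE_TYPE=cassandra jaeger-collector \
  --cassandra.servers=cassandra-1.example.com,cassandra-2.example.com \
  --cassandra.keyspace=jaeger_v1_dc1 \
  --collector.otlp.enabled=true \
  --collector.zipkin.host-port=:9411 \
  --sampling.strategies-file=/etc/jaeger/strategies.json
```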
| Flag | Default Value | Description |
|:--|:--|:--|
| --admin.http.host-port | :14269 | The host:port (e.g. 127.0.0.1:14269 or :14269) for the admin server, including health check, /metrics, etc. |
| --admin.http.tls.cert | nan | Path to a TLS Certificate file, used to identify this server to clients |
| --admin.http.tls.cipher-suites | nan | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --admin.http.tls.client-ca | nan | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --admin.http.tls.enabled | false | Enable TLS on the server |
| --admin.http.tls.key | nan | Path to a TLS Private Key file, used to identify this server to clients |
| --admin.http.tls.max-version | nan | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --admin.http.tls.min-version | nan | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --cassandra-archive.connect-timeout | 0s | Timeout used for connections to Cassandra Servers |
| --cassandra-archive.connections-per-host | 0 | The number of Cassandra connections from a single backend instance |
| --cassandra-archive.consistency | nan | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, LOCAL_ONE (default LOCAL_ONE) |
| --cassandra-archive.disable-compression | false | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters (like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression |
| --cassandra-archive.enabled | false | Enable extra storage |
| --cassandra-archive.keyspace | nan | The Cassandra keyspace for Jaeger data |
| --cassandra-archive.local-dc | nan | The name of the Cassandra local data center for DC Aware host selection |
| --cassandra-archive.max-retry-attempts | 0 | The number of attempts when reading from Cassandra |
| --cassandra-archive.password | nan | Password for password authentication for Cassandra |
| --cassandra-archive.port | 0 | The port for cassandra |
| --cassandra-archive.proto-version | 0 | The Cassandra protocol version |
| --cassandra-archive.reconnect-interval | 0s | Reconnect interval to retry connecting to downed hosts |
| --cassandra-archive.servers | nan | The comma-separated list of Cassandra servers |
| --cassandra-archive.socket-keep-alive | 0s | Cassandra's keepalive period to use, enabled if > 0 |
| --cassandra-archive.timeout | 0s | | Timeout used for queries.
A Timeout of zero means no timeout | Timeout used for queries. A Timeout of zero means no timeout | | --cassandra-archive.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --cassandra-archive.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --cassandra-archive.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --cassandra-archive.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --cassandra-archive.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --cassandra-archive.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --cassandra-archive.username | nan | | Username for password authentication for Cassandra | Username for password authentication for Cassandra | | --cassandra.connect-timeout | 0s | | Timeout used for connections to Cassandra Servers | Timeout used for connections to Cassandra Servers | | --cassandra.connections-per-host | 2 | | The number of Cassandra connections from a single backend instance | The number of Cassandra connections from a single backend instance | | --cassandra.consistency | nan | | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | | --cassandra.disable-compression | false | | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to" }, { "data": "This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | | --cassandra.index.logs | true | | Controls log field indexing. Set to false to disable. | Controls log field indexing. Set to false to disable. | | --cassandra.index.process-tags | true | | Controls process tag indexing. Set to false to disable. | Controls process tag indexing. Set to false to disable. | | --cassandra.index.tag-blacklist | nan | | The comma-separated list of span tags to blacklist from being indexed. All other tags will be indexed. Mutually exclusive with the whitelist option. | The comma-separated list of span tags to blacklist from being indexed. All other tags will be indexed. Mutually exclusive with the whitelist option. | | --cassandra.index.tag-whitelist | nan | | The comma-separated list of span tags to whitelist for being indexed. All other tags will not be indexed. Mutually exclusive with the blacklist option. 
| The comma-separated list of span tags to whitelist for being indexed. All other tags will not be indexed. Mutually exclusive with the blacklist option. | | --cassandra.index.tags | true | | Controls tag indexing. Set to false to disable. | Controls tag indexing. Set to false to disable. | | --cassandra.keyspace | jaegerv1test | | The Cassandra keyspace for Jaeger data | The Cassandra keyspace for Jaeger data | | --cassandra.local-dc | nan | | The name of the Cassandra local data center for DC Aware host selection | The name of the Cassandra local data center for DC Aware host selection | | --cassandra.max-retry-attempts | 3 | | The number of attempts when reading from Cassandra | The number of attempts when reading from Cassandra | | --cassandra.password | nan | | Password for password authentication for Cassandra | Password for password authentication for Cassandra | | --cassandra.port | 0 | | The port for cassandra | The port for cassandra | | --cassandra.proto-version | 4 | | The Cassandra protocol version | The Cassandra protocol version | | --cassandra.reconnect-interval | 1m0s | | Reconnect interval to retry connecting to downed hosts | Reconnect interval to retry connecting to downed hosts | | --cassandra.servers | 127.0.0.1 | | The comma-separated list of Cassandra servers | The comma-separated list of Cassandra servers | | --cassandra.socket-keep-alive | 0s | | Cassandra's keepalive period to use, enabled if > 0 | Cassandra's keepalive period to use, enabled if > 0 | | --cassandra.span-store-write-cache-ttl | 12h0m0s | | The duration to wait before rewriting an existing service or operation name | The duration to wait before rewriting an existing service or operation name | | --cassandra.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. A Timeout of zero means no timeout | | --cassandra.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --cassandra.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --cassandra.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | |" }, { "data": "| nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --cassandra.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --cassandra.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --cassandra.username | nan | | Username for password authentication for Cassandra | Username for password authentication for Cassandra | | --collector.enable-span-size-metrics | false | | Enables metrics based on processed span size, which are more expensive to calculate. | Enables metrics based on processed span size, which are more expensive to calculate. | | --collector.grpc-server.host-port | :14250 | | The host:port (e.g. 
127.0.0.1:12345 or :12345) of the collector's gRPC server | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | | --collector.grpc-server.max-connection-age | 0s | | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-connection-age-grace | 0s | | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-message-size | 4194304 | | The maximum receivable message size for the collector's gRPC server | The maximum receivable message size for the collector's gRPC server | | --collector.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.grpc.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.grpc.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.grpc.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.grpc.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.grpc.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.http-server.host-port | :14268 | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | The host:port (e.g. 
127.0.0.1:12345 or :12345) of the collector's HTTP server | | --collector.http-server.idle-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http-server.read-header-timeout | 2s | | See https://pkg.go.dev/net/http#Server |" }, { "data": "https://pkg.go.dev/net/http#Server | | --collector.http-server.read-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.num-workers | 50 | | The number of workers pulling items from the queue | The number of workers pulling items from the queue | | --collector.otlp.enabled | true | | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports | | --collector.otlp.grpc.host-port | nan | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | | --collector.otlp.grpc.max-connection-age | 0s | | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.otlp.grpc.max-connection-age-grace | 0s | | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The additive period after MaxConnectionAge after which the connection will be forcibly closed. 
See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.otlp.grpc.max-message-size | 4194304 | | The maximum receivable message size for the collector's gRPC server | The maximum receivable message size for the collector's gRPC server | | --collector.otlp.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.otlp.grpc.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.otlp.grpc.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | |" }, { "data": "| false | | Enable TLS on the server | Enable TLS on the server | | --collector.otlp.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.otlp.grpc.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.grpc.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.grpc.tls.reload-interval | 0s | | The duration after which the certificate will be reloaded (0s means will not be reloaded) | The duration after which the certificate will be reloaded (0s means will not be reloaded) | | --collector.otlp.http.cors.allowed-headers | nan | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | | --collector.otlp.http.cors.allowed-origins | nan | | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin | | --collector.otlp.http.host-port | nan | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | The host:port (e.g. 
127.0.0.1:12345 or :12345) of the collector's HTTP server |
| --collector.otlp.http.idle-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-header-timeout | 2s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.tls.cert | nan | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.otlp.http.tls.cipher-suites | nan | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.otlp.http.tls.client-ca | nan | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.otlp.http.tls.enabled | false | Enable TLS on the server |
| --collector.otlp.http.tls.key | nan | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.otlp.http.tls.max-version | nan | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.min-version | nan | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.reload-interval | 0s | The duration after which the certificate will be reloaded (0s means will not be reloaded) |
| --collector.queue-size | 2000 | The queue size of the collector |
| --collector.queue-size-memory | 0 | (experimental) The max memory size in MiB to use for the dynamic queue. |
| --collector.tags | nan | One or more tags to be added to the Process tags of all spans passing through this collector. Ex: key1=value1,key2=${envVar:defaultValue} |
| --collector.zipkin.cors.allowed-headers | nan | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers |
| --collector.zipkin.cors.allowed-origins | nan | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin |
| --collector.zipkin.host-port | nan | The host:port (e.g. 127.0.0.1:9411 or :9411) of the collector's Zipkin server (disabled by default) |
| --collector.zipkin.keep-alive | true | KeepAlive configures allow Keep-Alive for Zipkin HTTP server (enabled by default) |
| --collector.zipkin.tls.cert | nan | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.zipkin.tls.cipher-suites | nan | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.zipkin.tls.client-ca | nan | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.zipkin.tls.enabled | false | Enable TLS on the server |
| --collector.zipkin.tls.key | nan | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.zipkin.tls.max-version | nan | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.zipkin.tls.min-version | nan | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --config-file | nan | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. |
| --downsampling.hashsalt | nan | Salt used when hashing trace id for downsampling. |
| --downsampling.ratio | 1 | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g. ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. |
| --help | false | help for jaeger-collector |
| --log-level | info | Minimal allowed log level. For more levels see https://github.com/uber-go/zap |
| --metrics-backend | prometheus | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) |
| --metrics-http-route | /metrics | Defines the route of HTTP endpoint for metrics backends that support scraping |
| --multi-tenancy.enabled | false | Enable tenancy header when receiving or querying |
| --multi-tenancy.header | x-tenant | HTTP header carrying tenant |
| --multi-tenancy.tenants | nan | Comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) |
| --sampling.strategies-file | nan | The path for the sampling strategies file in JSON format. See sampling documentation to see format of the file |
| --sampling.strategies-reload-interval | 0s | Reload interval to check and reload sampling strategies file. Zero value means no reloading |
| --sampling.strategies.bugfix-5270 | false | Include default operation level strategies for Ratesampling type service level strategy. Cf. https://github.com/jaegertracing/jaeger/issues/5270 |
| --span-storage.type | nan | (deprecated) please use SPAN_STORAGE_TYPE environment variable. Run this binary with the 'env' command for help. |

| Flag | Default Value | Description |
|:-|:-|:-|
| --admin.http.host-port | :14269 | The host:port (e.g. 127.0.0.1:14269 or :14269) for the admin server, including health check, /metrics, etc. |
| --admin.http.tls.cert | nan | Path to a TLS Certificate file, used to identify this server to clients |
| --admin.http.tls.cipher-suites | nan | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
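For --sampling.strategies-file, the collector expects a JSON document describing per-service sampling strategies. The snippet below is a minimal sketch of that format; the service names and parameter values are placeholders, and the sampling documentation remains the authoritative reference for the full schema (including per-operation strategies).

```
{
  "service_strategies": [
    { "service": "frontend", "type": "probabilistic", "param": 0.8 },
    { "service": "checkout", "type": "ratelimiting", "param": 5 }
  ],
  "default_strategy": { "type": "probabilistic", "param": 0.5 }
}
```

Pass the file path with --sampling.strategies-file; if --sampling.strategies-reload-interval is set to a non-zero duration, the collector re-reads the file at that interval, and a zero value disables reloading.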
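As a usage illustration, the command below starts a collector with a few of the flags from these tables: the OTLP receiver enabled with TLS on its gRPC endpoint, a bounded queue, extra process tags, and a sampling strategies file. The binary path, certificate paths, Elasticsearch URL, and tag values are placeholders, and the --es.* flags only apply when the collector is configured for Elasticsearch storage (SPAN_STORAGE_TYPE=elasticsearch).

```
SPAN_STORAGE_TYPE=elasticsearch ./jaeger-collector \
  --es.server-urls=http://elasticsearch.example.internal:9200 \
  --collector.otlp.enabled=true \
  --collector.otlp.grpc.tls.enabled=true \
  --collector.otlp.grpc.tls.cert=/etc/jaeger/tls/collector.crt \
  --collector.otlp.grpc.tls.key=/etc/jaeger/tls/collector.key \
  --collector.queue-size=2000 \
  --collector.tags=deployment=staging,zone=eu-west-1 \
  --sampling.strategies-file=/etc/jaeger/sampling.json \
  --log-level=debug
```

The same options can also be supplied through --config-file (JSON, TOML, YAML, HCL, or Java properties), assuming the usual spf13/viper convention of using the dotted flag names as configuration keys.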
| | --admin.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --admin.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --admin.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --admin.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --admin.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.enable-span-size-metrics | false | | Enables metrics based on processed span size, which are more expensive to calculate. | Enables metrics based on processed span size, which are more expensive to calculate. | | --collector.grpc-server.host-port | :14250 | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | The host:port (e.g." }, { "data": "or :12345) of the collector's gRPC server | | --collector.grpc-server.max-connection-age | 0s | | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-connection-age-grace | 0s | | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-message-size | 4194304 | | The maximum receivable message size for the collector's gRPC server | The maximum receivable message size for the collector's gRPC server | | --collector.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.grpc.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). 
| | --collector.grpc.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.grpc.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.grpc.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.grpc.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.http-server.host-port | :14268 | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | | --collector.http-server.idle-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http-server.read-header-timeout | 2s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http-server.read-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | |" }, { "data": "| nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.num-workers | 50 | | The number of workers pulling items from the queue | The number of workers pulling items from the queue | | --collector.otlp.enabled | true | | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports | | --collector.otlp.grpc.host-port | nan | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | The host:port (e.g. 
127.0.0.1:12345 or :12345) of the collector's gRPC server | | --collector.otlp.grpc.max-connection-age | 0s | | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.otlp.grpc.max-connection-age-grace | 0s | | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.otlp.grpc.max-message-size | 4194304 | | The maximum receivable message size for the collector's gRPC server | The maximum receivable message size for the collector's gRPC server | | --collector.otlp.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.otlp.grpc.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.otlp.grpc.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.otlp.grpc.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.otlp.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.otlp.grpc.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.grpc.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.grpc.tls.reload-interval | 0s | | The duration after which the certificate will be reloaded (0s means will not be reloaded) | The duration after which the certificate will be reloaded (0s means will not be reloaded) | | --collector.otlp.http.cors.allowed-headers | nan | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | | --collector.otlp.http.cors.allowed-origins | nan | | Comma-separated CORS allowed origins. See" }, { "data": "| Comma-separated CORS allowed origins. 
See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin | | --collector.otlp.http.host-port | nan | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | | --collector.otlp.http.idle-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.otlp.http.read-header-timeout | 2s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.otlp.http.read-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.otlp.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.otlp.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.otlp.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.otlp.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.otlp.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.otlp.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.http.tls.reload-interval | 0s | | The duration after which the certificate will be reloaded (0s means will not be reloaded) | The duration after which the certificate will be reloaded (0s means will not be reloaded) | | --collector.queue-size | 2000 | | The queue size of the collector | The queue size of the collector | | --collector.queue-size-memory | 0 | | (experimental) The max memory size in MiB to use for the dynamic queue. | (experimental) The max memory size in MiB to use for the dynamic queue. | | --collector.tags | nan | | One or more tags to be added to the Process tags of all spans passing through this collector. Ex: key1=value1,key2=${envVar:defaultValue} | One or more tags to be added to the Process tags of all spans passing through this collector. Ex: key1=value1,key2=${envVar:defaultValue} | | --collector.zipkin.cors.allowed-headers | nan | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | | --collector.zipkin.cors.allowed-origins | nan | | Comma-separated CORS allowed origins. 
See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin | | --collector.zipkin.host-port | nan | | The host:port (e.g. 127.0.0.1:9411 or :9411) of the collector's Zipkin server (disabled by default) | The host:port (e.g. 127.0.0.1:9411 or :9411) of the collector's Zipkin server (disabled by default) | | --collector.zipkin.keep-alive | true | | KeepAlive configures allow Keep-Alive for Zipkin HTTP server (enabled by default) | KeepAlive configures allow Keep-Alive for Zipkin HTTP server (enabled by default) | | --collector.zipkin.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | |" }, { "data": "| nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.zipkin.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.zipkin.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.zipkin.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.zipkin.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.zipkin.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --config-file | nan | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. | | --downsampling.hashsalt | nan | | Salt used when hashing trace id for downsampling. | Salt used when hashing trace id for downsampling. | | --downsampling.ratio | 1 | | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. | | --es-archive.adaptive-sampling.lookback | 72h0m0s | | How far back to look for the latest adaptive sampling probabilities | How far back to look for the latest adaptive sampling probabilities | | --es-archive.bulk.actions | 1000 | | The number of requests that can be enqueued before the bulk processor decides to commit | The number of requests that can be enqueued before the bulk processor decides to commit | | --es-archive.bulk.flush-interval | 200ms | | A time.Duration after which bulk requests are committed, regardless of other thresholds. Set to zero to disable. 
By default, this is disabled. | A time.Duration after which bulk requests are committed, regardless of other thresholds. Set to zero to disable. By default, this is disabled. | | --es-archive.bulk.size | 5000000 | | The number of bytes that the bulk requests can take up before the bulk processor decides to commit | The number of bytes that the bulk requests can take up before the bulk processor decides to commit | | --es-archive.bulk.workers | 1 | | The number of workers that are able to receive bulk requests and eventually commit them to Elasticsearch | The number of workers that are able to receive bulk requests and eventually commit them to Elasticsearch | | --es-archive.create-index-templates | true | | Create index templates at application startup. Set to false when templates are installed manually. | Create index templates at application startup. Set to false when templates are installed manually. | | --es-archive.enabled | false | | Enable extra storage | Enable extra storage | | --es-archive.index-date-separator | - | | Optional date separator of Jaeger indices. For example \".\" creates" }, { "data": "| Optional date separator of Jaeger indices. For example \".\" creates \"jaeger-span-2020.11.20\". | | --es-archive.index-prefix | nan | | Optional prefix of Jaeger indices. For example \"production\" creates \"production-jaeger-\". | Optional prefix of Jaeger indices. For example \"production\" creates \"production-jaeger-\". | | --es-archive.index-rollover-frequency-adaptive-sampling | day | | Rotates jaeger-sampling indices over the given period. For example \"day\" creates \"jaeger-sampling-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | Rotates jaeger-sampling indices over the given period. For example \"day\" creates \"jaeger-sampling-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | | --es-archive.index-rollover-frequency-services | day | | Rotates jaeger-service indices over the given period. For example \"day\" creates \"jaeger-service-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | Rotates jaeger-service indices over the given period. For example \"day\" creates \"jaeger-service-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | | --es-archive.index-rollover-frequency-spans | day | | Rotates jaeger-span indices over the given period. For example \"day\" creates \"jaeger-span-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | Rotates jaeger-span indices over the given period. For example \"day\" creates \"jaeger-span-yyyy-MM-dd\" every day after UTC 12AM. 
Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | | --es-archive.log-level | error | | The Elasticsearch client log-level. Valid levels: [debug, info, error] | The Elasticsearch client log-level. Valid levels: [debug, info, error] | | --es-archive.max-doc-count | 10000 | | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. | | --es-archive.num-replicas | 1 | | The number of replicas per index in Elasticsearch | The number of replicas per index in Elasticsearch | | --es-archive.num-shards | 5 | | The number of shards per index in Elasticsearch | The number of shards per index in Elasticsearch | | --es-archive.password | nan | | The password required by Elasticsearch | The password required by Elasticsearch | | --es-archive.password-file | nan | | Path to a file containing password. This file is watched for changes. | Path to a file containing password. This file is watched for changes. | | --es-archive.prioirity-dependencies-template | 0 | | Priority of jaeger-dependecies index template (ESv8 only) | Priority of jaeger-dependecies index template (ESv8 only) | | --es-archive.prioirity-service-template | 0 | | Priority of jaeger-service index template (ESv8 only) | Priority of jaeger-service index template (ESv8 only) | | --es-archive.prioirity-span-template | 0 | | Priority of jaeger-span index template (ESv8 only) | Priority of jaeger-span index template (ESv8 only) | | --es-archive.remote-read-clusters | nan | | Comma-separated list of Elasticsearch remote cluster names for cross-cluster querying.See Elasticsearch remote clusters and cross-cluster query api. | Comma-separated list of Elasticsearch remote cluster names for cross-cluster querying.See Elasticsearch remote clusters and cross-cluster query api. | | --es-archive.send-get-body-as | nan | | HTTP verb for requests that contain a body [GET," }, { "data": "| HTTP verb for requests that contain a body [GET, POST]. | | --es-archive.server-urls | http://127.0.0.1:9200 | | The comma-separated list of Elasticsearch servers, must be full url i.e. http://localhost:9200 | The comma-separated list of Elasticsearch servers, must be full url i.e. http://localhost:9200 | | --es-archive.sniffer | false | | The sniffer config for Elasticsearch; client uses sniffing process to find all nodes automatically, disable if not required | The sniffer config for Elasticsearch; client uses sniffing process to find all nodes automatically, disable if not required | | --es-archive.sniffer-tls-enabled | false | | Option to enable TLS when sniffing an Elasticsearch Cluster ; client uses sniffing process to find all nodes automatically, disabled by default | Option to enable TLS when sniffing an Elasticsearch Cluster ; client uses sniffing process to find all nodes automatically, disabled by default | | --es-archive.tags-as-fields.all | false | | (experimental) Store all span and process tags as object fields. If true .tags-as-fields.config-file and .tags-as-fields.include is ignored. Binary tags are always stored as nested objects. | (experimental) Store all span and process tags as object fields. If true .tags-as-fields.config-file and .tags-as-fields.include is ignored. Binary tags are always stored as nested objects. 
| | --es-archive.tags-as-fields.config-file | nan | | (experimental) Optional path to a file containing tag keys which will be stored as object fields. Each key should be on a separate line. Merged with .tags-as-fields.include | (experimental) Optional path to a file containing tag keys which will be stored as object fields. Each key should be on a separate line. Merged with .tags-as-fields.include | | --es-archive.tags-as-fields.dot-replacement | @ | | (experimental) The character used to replace dots (\".\") in tag keys stored as object fields. | (experimental) The character used to replace dots (\".\") in tag keys stored as object fields. | | --es-archive.tags-as-fields.include | nan | | (experimental) Comma delimited list of tag keys which will be stored as object fields. Merged with the contents of .tags-as-fields.config-file | (experimental) Comma delimited list of tag keys which will be stored as object fields. Merged with the contents of .tags-as-fields.config-file | | --es-archive.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. A Timeout of zero means no timeout | | --es-archive.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --es-archive.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --es-archive.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --es-archive.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --es-archive.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --es-archive.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --es-archive.token-file | nan | | Path to a file containing bearer" }, { "data": "This flag also loads CA if it is specified. | Path to a file containing bearer token. This flag also loads CA if it is specified. | | --es-archive.use-aliases | false | | Use read and write aliases for indices. Use this option with Elasticsearch rollover API. It requires an external component to create aliases before startup and then performing its management. Note that es.max-span-age will influence trace search window start times. | Use read and write aliases for indices. Use this option with Elasticsearch rollover API. It requires an external component to create aliases before startup and then performing its management. Note that es.max-span-age will influence trace search window start times. | | --es-archive.use-ilm | false | | (experimental) Option to enable ILM for jaeger span & service indices. Use this option with es-archive.use-aliases. It requires an external component to create aliases before startup and then performing its management. ILM policy must be manually created in ES before startup. Supported only for elasticsearch version 7+. 
| (experimental) Option to enable ILM for jaeger span & service indices. Use this option with es-archive.use-aliases. It requires an external component to create aliases before startup and then performing its management. ILM policy must be manually created in ES before startup. Supported only for elasticsearch version 7+. | | --es-archive.username | nan | | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. | | --es-archive.version | 0 | | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. | | --es.adaptive-sampling.lookback | 72h0m0s | | How far back to look for the latest adaptive sampling probabilities | How far back to look for the latest adaptive sampling probabilities | | --es.bulk.actions | 1000 | | The number of requests that can be enqueued before the bulk processor decides to commit | The number of requests that can be enqueued before the bulk processor decides to commit | | --es.bulk.flush-interval | 200ms | | A time.Duration after which bulk requests are committed, regardless of other thresholds. Set to zero to disable. By default, this is disabled. | A time.Duration after which bulk requests are committed, regardless of other thresholds. Set to zero to disable. By default, this is disabled. | | --es.bulk.size | 5000000 | | The number of bytes that the bulk requests can take up before the bulk processor decides to commit | The number of bytes that the bulk requests can take up before the bulk processor decides to commit | | --es.bulk.workers | 1 | | The number of workers that are able to receive bulk requests and eventually commit them to Elasticsearch | The number of workers that are able to receive bulk requests and eventually commit them to Elasticsearch | | --es.create-index-templates | true | | Create index templates at application startup. Set to false when templates are installed manually. | Create index templates at application startup. Set to false when templates are installed manually. | | --es.index-date-separator | - | | Optional date separator of Jaeger indices. For example \".\" creates \"jaeger-span-2020.11.20\". | Optional date separator of Jaeger indices. For example \".\" creates \"jaeger-span-2020.11.20\". | | --es.index-prefix | nan | | Optional prefix of Jaeger indices. For example \"production\" creates \"production-jaeger-\". | Optional prefix of Jaeger indices. For example \"production\" creates \"production-jaeger-\". | |" }, { "data": "| day | | Rotates jaeger-sampling indices over the given period. For example \"day\" creates \"jaeger-sampling-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | Rotates jaeger-sampling indices over the given period. For example \"day\" creates \"jaeger-sampling-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | | --es.index-rollover-frequency-services | day | | Rotates jaeger-service indices over the given period. 
For example \"day\" creates \"jaeger-service-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | Rotates jaeger-service indices over the given period. For example \"day\" creates \"jaeger-service-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | | --es.index-rollover-frequency-spans | day | | Rotates jaeger-span indices over the given period. For example \"day\" creates \"jaeger-span-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | Rotates jaeger-span indices over the given period. For example \"day\" creates \"jaeger-span-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | | --es.log-level | error | | The Elasticsearch client log-level. Valid levels: [debug, info, error] | The Elasticsearch client log-level. Valid levels: [debug, info, error] | | --es.max-doc-count | 10000 | | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. | | --es.max-span-age | 72h0m0s | | The maximum lookback for spans in Elasticsearch | The maximum lookback for spans in Elasticsearch | | --es.num-replicas | 1 | | The number of replicas per index in Elasticsearch | The number of replicas per index in Elasticsearch | | --es.num-shards | 5 | | The number of shards per index in Elasticsearch | The number of shards per index in Elasticsearch | | --es.password | nan | | The password required by Elasticsearch | The password required by Elasticsearch | | --es.password-file | nan | | Path to a file containing password. This file is watched for changes. | Path to a file containing password. This file is watched for changes. | | --es.prioirity-dependencies-template | 0 | | Priority of jaeger-dependecies index template (ESv8 only) | Priority of jaeger-dependecies index template (ESv8 only) | | --es.prioirity-service-template | 0 | | Priority of jaeger-service index template (ESv8 only) | Priority of jaeger-service index template (ESv8 only) | | --es.prioirity-span-template | 0 | | Priority of jaeger-span index template (ESv8 only) | Priority of jaeger-span index template (ESv8 only) | | --es.remote-read-clusters | nan | | Comma-separated list of Elasticsearch remote cluster names for cross-cluster querying.See Elasticsearch remote clusters and cross-cluster query api. | Comma-separated list of Elasticsearch remote cluster names for cross-cluster querying.See Elasticsearch remote clusters and cross-cluster query api. | | --es.send-get-body-as | nan | | HTTP verb for requests that contain a body [GET, POST]. | HTTP verb for requests that contain a body [GET, POST]. 
| | --es.server-urls | http://127.0.0.1:9200 | | The comma-separated list of Elasticsearch servers, must be full url" }, { "data": "http://localhost:9200 | The comma-separated list of Elasticsearch servers, must be full url i.e. http://localhost:9200 | | --es.sniffer | false | | The sniffer config for Elasticsearch; client uses sniffing process to find all nodes automatically, disable if not required | The sniffer config for Elasticsearch; client uses sniffing process to find all nodes automatically, disable if not required | | --es.sniffer-tls-enabled | false | | Option to enable TLS when sniffing an Elasticsearch Cluster ; client uses sniffing process to find all nodes automatically, disabled by default | Option to enable TLS when sniffing an Elasticsearch Cluster ; client uses sniffing process to find all nodes automatically, disabled by default | | --es.tags-as-fields.all | false | | (experimental) Store all span and process tags as object fields. If true .tags-as-fields.config-file and .tags-as-fields.include is ignored. Binary tags are always stored as nested objects. | (experimental) Store all span and process tags as object fields. If true .tags-as-fields.config-file and .tags-as-fields.include is ignored. Binary tags are always stored as nested objects. | | --es.tags-as-fields.config-file | nan | | (experimental) Optional path to a file containing tag keys which will be stored as object fields. Each key should be on a separate line. Merged with .tags-as-fields.include | (experimental) Optional path to a file containing tag keys which will be stored as object fields. Each key should be on a separate line. Merged with .tags-as-fields.include | | --es.tags-as-fields.dot-replacement | @ | | (experimental) The character used to replace dots (\".\") in tag keys stored as object fields. | (experimental) The character used to replace dots (\".\") in tag keys stored as object fields. | | --es.tags-as-fields.include | nan | | (experimental) Comma delimited list of tag keys which will be stored as object fields. Merged with the contents of .tags-as-fields.config-file | (experimental) Comma delimited list of tag keys which will be stored as object fields. Merged with the contents of .tags-as-fields.config-file | | --es.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. 
A Timeout of zero means no timeout | | --es.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --es.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --es.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --es.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --es.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --es.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --es.token-file | nan | | Path to a file containing bearer token. This flag also loads CA if it is specified. | Path to a file containing bearer token. This flag also loads CA if it is" }, { "data": "| | --es.use-aliases | false | | Use read and write aliases for indices. Use this option with Elasticsearch rollover API. It requires an external component to create aliases before startup and then performing its management. Note that es.max-span-age will influence trace search window start times. | Use read and write aliases for indices. Use this option with Elasticsearch rollover API. It requires an external component to create aliases before startup and then performing its management. Note that es.max-span-age will influence trace search window start times. | | --es.use-ilm | false | | (experimental) Option to enable ILM for jaeger span & service indices. Use this option with es.use-aliases. It requires an external component to create aliases before startup and then performing its management. ILM policy must be manually created in ES before startup. Supported only for elasticsearch version 7+. | (experimental) Option to enable ILM for jaeger span & service indices. Use this option with es.use-aliases. It requires an external component to create aliases before startup and then performing its management. ILM policy must be manually created in ES before startup. Supported only for elasticsearch version 7+. | | --es.username | nan | | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. | | --es.version | 0 | | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. | | --help | false | | help for jaeger-collector | help for jaeger-collector | | --log-level | info | | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap | Minimal allowed log Level. 
For more levels see https://github.com/uber-go/zap | | --metrics-backend | prometheus | | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) | | --metrics-http-route | /metrics | | Defines the route of HTTP endpoint for metrics backends that support scraping | Defines the route of HTTP endpoint for metrics backends that support scraping | | --multi-tenancy.enabled | false | | Enable tenancy header when receiving or querying | Enable tenancy header when receiving or querying | | --multi-tenancy.header | x-tenant | | HTTP header carrying tenant | HTTP header carrying tenant | | --multi-tenancy.tenants | nan | | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) | | --sampling.strategies-file | nan | | The path for the sampling strategies file in JSON format. See sampling documentation to see format of the file | The path for the sampling strategies file in JSON format. See sampling documentation to see format of the file | | --sampling.strategies-reload-interval | 0s | | Reload interval to check and reload sampling strategies file. Zero value means no reloading | Reload interval to check and reload sampling strategies file. Zero value means no reloading | | --sampling.strategies.bugfix-5270 | false | | Include default operation level strategies for Ratesampling type service level strategy. Cf. https://github.com/jaegertracing/jaeger/issues/5270 | Include default operation level strategies for Ratesampling type service level strategy. Cf. https://github.com/jaegertracing/jaeger/issues/5270 | | --span-storage.type | nan | | (deprecated) please use SPANSTORAGETYPE environment" }, { "data": "Run this binary with the 'env' command for help. | (deprecated) please use SPANSTORAGETYPE environment variable. Run this binary with the 'env' command for help. | | Flag | Default Value | |:-|:-| | --admin.http.host-port | :14269 | | The host:port (e.g. 127.0.0.1:14269 or :14269) for the admin server, including health check, /metrics, etc. | The host:port (e.g. 127.0.0.1:14269 or :14269) for the admin server, including health check, /metrics, etc. | | --admin.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --admin.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). 
| | --admin.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --admin.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --admin.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --admin.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --admin.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.enable-span-size-metrics | false | | Enables metrics based on processed span size, which are more expensive to calculate. | Enables metrics based on processed span size, which are more expensive to calculate. | | --collector.grpc-server.host-port | :14250 | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | | --collector.grpc-server.max-connection-age | 0s | | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-connection-age-grace | 0s | | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-message-size | 4194304 | | The maximum receivable message size for the collector's gRPC server | The maximum receivable message size for the collector's gRPC server | | --collector.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.grpc.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). 
| |" }, { "data": "| nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.grpc.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.grpc.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.grpc.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.http-server.host-port | :14268 | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | | --collector.http-server.idle-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http-server.read-header-timeout | 2s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http-server.read-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.num-workers | 50 | | The number of workers pulling items from the queue | The number of workers pulling items from the queue | | --collector.otlp.enabled | true | | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports | | --collector.otlp.grpc.host-port | nan | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | The host:port (e.g. 
| --collector.otlp.grpc.max-connection-age | 0s | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.otlp.grpc.max-connection-age-grace | 0s | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.otlp.grpc.max-message-size | 4194304 | The maximum receivable message size for the collector's gRPC server |
| --collector.otlp.grpc.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.otlp.grpc.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.otlp.grpc.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.otlp.grpc.tls.enabled | false | Enable TLS on the server |
| --collector.otlp.grpc.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.otlp.grpc.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.grpc.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.grpc.tls.reload-interval | 0s | The duration after which the certificate will be reloaded (0s means will not be reloaded) |
| --collector.otlp.http.cors.allowed-headers | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers |
| --collector.otlp.http.cors.allowed-origins | | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin |
| --collector.otlp.http.host-port | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server |
| --collector.otlp.http.idle-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-header-timeout | 2s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.otlp.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.otlp.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.otlp.http.tls.enabled | false | Enable TLS on the server |
| --collector.otlp.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.otlp.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.reload-interval | 0s | The duration after which the certificate will be reloaded (0s means will not be reloaded) |
| --collector.queue-size | 2000 | The queue size of the collector |
| --collector.queue-size-memory | 0 | (experimental) The max memory size in MiB to use for the dynamic queue. |
| --collector.tags | | One or more tags to be added to the Process tags of all spans passing through this collector. Ex: key1=value1,key2=${envVar:defaultValue} |
| --collector.zipkin.cors.allowed-headers | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers |
| --collector.zipkin.cors.allowed-origins | | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin |
| --collector.zipkin.host-port | | The host:port (e.g. 127.0.0.1:9411 or :9411) of the collector's Zipkin server (disabled by default) |
| --collector.zipkin.keep-alive | true | KeepAlive configures allow Keep-Alive for Zipkin HTTP server (enabled by default) |
| --collector.zipkin.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.zipkin.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.zipkin.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.zipkin.tls.enabled | false | Enable TLS on the server |
| --collector.zipkin.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.zipkin.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.zipkin.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --config-file | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. |
| --downsampling.hashsalt | | Salt used when hashing trace id for downsampling. |
| --downsampling.ratio | 1 | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. |
| --help | false | help for jaeger-collector |
| --kafka.producer.authentication | none | Authentication type used to authenticate with kafka cluster. e.g. none, kerberos, tls, plaintext |
| --kafka.producer.batch-linger | 0s | (experimental) Time interval to wait before sending records to Kafka. Higher value reduce request to Kafka but increase latency and the possibility of data loss in case of process restart. See https://kafka.apache.org/documentation/ |
| --kafka.producer.batch-max-messages | 0 | (experimental) Maximum number of message to batch before sending records to Kafka |
| --kafka.producer.batch-min-messages | 0 | (experimental) The best-effort minimum number of messages needed to send a batch of records to Kafka. Higher value reduce request to Kafka but increase latency and the possibility of data loss in case of process restart. See https://kafka.apache.org/documentation/ |
| --kafka.producer.batch-size | 0 | (experimental) Number of bytes to batch before sending records to Kafka. Higher value reduce request to Kafka but increase latency and the possibility of data loss in case of process restart. See https://kafka.apache.org/documentation/ |
| --kafka.producer.brokers | 127.0.0.1:9092 | The comma-separated list of kafka brokers. i.e. '127.0.0.1:9092,0.0.0:1234' |
| --kafka.producer.compression | none | (experimental) Type of compression (none, gzip, snappy, lz4, zstd) to use on messages |
| --kafka.producer.compression-level | 0 | (experimental) compression level to use on messages. gzip = 1-9 (default = 6), snappy = none, lz4 = 1-17 (default = 9), zstd = -131072 - 22 (default = 3) |
| --kafka.producer.encoding | protobuf | Encoding of spans ("json" or "protobuf") sent to kafka. |
| --kafka.producer.kerberos.config-file | /etc/krb5.conf | Path to Kerberos configuration. i.e /etc/krb5.conf |
| --kafka.producer.kerberos.disable-fast-negotiation | false | Disable FAST negotiation when not supported by KDC's like Active Directory. See https://github.com/jcmturner/gokrb5/blob/master/USAGE.md#active-directory-kdc-and-fast-negotiation. |
| --kafka.producer.kerberos.keytab-file | /etc/security/kafka.keytab | Path to keytab file. i.e /etc/security/kafka.keytab |
| --kafka.producer.kerberos.password | | The Kerberos password used to authenticate with KDC |
| --kafka.producer.kerberos.realm | | Kerberos realm |
| --kafka.producer.kerberos.service-name | kafka | Kerberos service name |
| --kafka.producer.kerberos.use-keytab | false | Use of keytab instead of password, if this is true, keytab file will be used instead of password |
| --kafka.producer.kerberos.username | | The Kerberos username used to authenticate with KDC |
| --kafka.producer.max-message-bytes | 1000000 | (experimental) The maximum permitted size of a message. Should be set equal to or smaller than the broker's `message.max.bytes`. |
| --kafka.producer.plaintext.mechanism | PLAIN | The plaintext Mechanism for SASL/PLAIN authentication, e.g. 'SCRAM-SHA-256' or 'SCRAM-SHA-512' or 'PLAIN' |
| --kafka.producer.plaintext.password | | The plaintext Password for SASL/PLAIN authentication |
| --kafka.producer.plaintext.username | | The plaintext Username for SASL/PLAIN authentication |
| --kafka.producer.protocol-version | | Kafka protocol version - must be supported by kafka server |
| --kafka.producer.required-acks | local | (experimental) Required kafka broker acknowledgement. i.e. noack, local, all |
| --kafka.producer.tls.ca | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --kafka.producer.tls.cert | | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --kafka.producer.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --kafka.producer.tls.key | | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --kafka.producer.tls.server-name | | Override the TLS server name we expect in the certificate of the remote server(s) |
| --kafka.producer.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --kafka.producer.topic | jaeger-spans | The name of the kafka topic |
| --log-level | info | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap |
| --metrics-backend | prometheus | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) |
| --metrics-http-route | /metrics | Defines the route of HTTP endpoint for metrics backends that support scraping |
| --multi-tenancy.enabled | false | Enable tenancy header when receiving or querying |
| --multi-tenancy.header | x-tenant | HTTP header carrying tenant |
| --multi-tenancy.tenants | | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) |
| --sampling.strategies-file | | The path for the sampling strategies file in JSON format. See sampling documentation to see format of the file |
| --sampling.strategies-reload-interval | 0s | Reload interval to check and reload sampling strategies file. Zero value means no reloading |
| --sampling.strategies.bugfix-5270 | false | Include default operation level strategies for Ratesampling type service level strategy. Cf. https://github.com/jaegertracing/jaeger/issues/5270 |
| --span-storage.type | | (deprecated) please use SPAN_STORAGE_TYPE environment variable. Run this binary with the 'env' command for help. |
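To show how the flag groups above combine in practice, the following is a minimal sketch of launching the collector with Kafka as the span storage backend. It assumes `SPAN_STORAGE_TYPE=kafka` selects this backend (the environment variable referenced by --span-storage.type); the broker addresses, CA path, and strategies file are illustrative placeholders, not recommended values.

```
# Sketch only: broker list, certificate path, and sampling file are placeholders.
SPAN_STORAGE_TYPE=kafka ./jaeger-collector \
  --kafka.producer.brokers=kafka-1:9092,kafka-2:9092 \
  --kafka.producer.topic=jaeger-spans \
  --kafka.producer.encoding=protobuf \
  --kafka.producer.tls.enabled=true \
  --kafka.producer.tls.ca=/etc/ssl/certs/kafka-ca.pem \
  --collector.otlp.enabled=true \
  --sampling.strategies-file=/etc/jaeger/sampling.json
```

The same settings can also be supplied through a --config-file in one of the formats listed in the table.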
| Flag | Default Value | Description |
|:-|:-|:-|
| --admin.http.host-port | :14269 | The host:port (e.g. 127.0.0.1:14269 or :14269) for the admin server, including health check, /metrics, etc. |
| --admin.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --admin.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --admin.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --admin.http.tls.enabled | false | Enable TLS on the server |
| --admin.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --admin.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --admin.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.enable-span-size-metrics | false | Enables metrics based on processed span size, which are more expensive to calculate. |
| --collector.grpc-server.host-port | :14250 | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server |
| --collector.grpc-server.max-connection-age | 0s | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.grpc-server.max-connection-age-grace | 0s | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.grpc-server.max-message-size | 4194304 | The maximum receivable message size for the collector's gRPC server |
| --collector.grpc.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.grpc.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.grpc.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.grpc.tls.enabled | false | Enable TLS on the server |
| --collector.grpc.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.grpc.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.grpc.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.http-server.host-port | :14268 | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server |
| --collector.http-server.idle-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.http-server.read-header-timeout | 2s | See https://pkg.go.dev/net/http#Server |
| --collector.http-server.read-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.http.tls.enabled | false | Enable TLS on the server |
| --collector.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.num-workers | 50 | The number of workers pulling items from the queue |
| --collector.otlp.enabled | true | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports |
| --collector.otlp.grpc.host-port | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server |
| --collector.otlp.grpc.max-connection-age | 0s | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.otlp.grpc.max-connection-age-grace | 0s | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters |
| --collector.otlp.grpc.max-message-size | 4194304 | The maximum receivable message size for the collector's gRPC server |
| --collector.otlp.grpc.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.otlp.grpc.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.otlp.grpc.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.otlp.grpc.tls.enabled | false | Enable TLS on the server |
| --collector.otlp.grpc.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.otlp.grpc.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.grpc.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.grpc.tls.reload-interval | 0s | The duration after which the certificate will be reloaded (0s means will not be reloaded) |
| --collector.otlp.http.cors.allowed-headers | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers |
| --collector.otlp.http.cors.allowed-origins | | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin |
| --collector.otlp.http.host-port | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server |
| --collector.otlp.http.idle-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-header-timeout | 2s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.read-timeout | 0s | See https://pkg.go.dev/net/http#Server |
| --collector.otlp.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.otlp.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.otlp.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.otlp.http.tls.enabled | false | Enable TLS on the server |
| --collector.otlp.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.otlp.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.otlp.http.tls.reload-interval | 0s | The duration after which the certificate will be reloaded (0s means will not be reloaded) |
| --collector.queue-size | 2000 | The queue size of the collector |
| --collector.queue-size-memory | 0 | (experimental) The max memory size in MiB to use for the dynamic queue. |
| --collector.tags | | One or more tags to be added to the Process tags of all spans passing through this collector. Ex: key1=value1,key2=${envVar:defaultValue} |
| --collector.zipkin.cors.allowed-headers | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers |
| --collector.zipkin.cors.allowed-origins | | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin |
| --collector.zipkin.host-port | | The host:port (e.g. 127.0.0.1:9411 or :9411) of the collector's Zipkin server (disabled by default) |
| --collector.zipkin.keep-alive | true | KeepAlive configures allow Keep-Alive for Zipkin HTTP server (enabled by default) |
| --collector.zipkin.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --collector.zipkin.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --collector.zipkin.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --collector.zipkin.tls.enabled | false | Enable TLS on the server |
| --collector.zipkin.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --collector.zipkin.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --collector.zipkin.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --config-file | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. |
| --downsampling.hashsalt | | Salt used when hashing trace id for downsampling. |
| --downsampling.ratio | 1 | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. |
| --grpc-storage-plugin.binary | | (deprecated, will be removed after 2024-03-01) The location of the plugin binary |
| --grpc-storage-plugin.configuration-file | | (deprecated, will be removed after 2024-03-01) A path pointing to the plugin's configuration file, made available to the plugin with the --config arg |
| --grpc-storage-plugin.log-level | warn | Set the log level of the plugin's logger |
| --grpc-storage.connection-timeout | 5s | The remote storage gRPC server connection timeout |
| --grpc-storage.server | | The remote storage gRPC server address as host:port |
| --grpc-storage.tls.ca | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --grpc-storage.tls.cert | | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --grpc-storage.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --grpc-storage.tls.key | | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --grpc-storage.tls.server-name | | Override the TLS server name we expect in the certificate of the remote server(s) |
| --grpc-storage.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --help | false | help for jaeger-collector |
| --log-level | info | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap |
| --metrics-backend | prometheus | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) |
| --metrics-http-route | /metrics | Defines the route of HTTP endpoint for metrics backends that support scraping |
| --multi-tenancy.enabled | false | Enable tenancy header when receiving or querying |
| --multi-tenancy.header | x-tenant | HTTP header carrying tenant |
| --multi-tenancy.tenants | | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) |
| --sampling.strategies-file | | The path for the sampling strategies file in JSON format. See sampling documentation to see format of the file |
| --sampling.strategies-reload-interval | 0s | Reload interval to check and reload sampling strategies file. Zero value means no reloading |
| --sampling.strategies.bugfix-5270 | false | Include default operation level strategies for Ratesampling type service level strategy. Cf. https://github.com/jaegertracing/jaeger/issues/5270 |
| --span-storage.type | | (deprecated) please use SPAN_STORAGE_TYPE environment variable. Run this binary with the 'env' command for help. |
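As a rough sketch of how the gRPC remote-storage flags above fit together, the invocation below assumes `SPAN_STORAGE_TYPE=grpc` selects this backend; the storage server address, port, and certificate path are placeholders.

```
# Sketch only: the remote storage endpoint and CA path are illustrative placeholders.
SPAN_STORAGE_TYPE=grpc ./jaeger-collector \
  --grpc-storage.server=remote-storage.example.internal:17271 \
  --grpc-storage.connection-timeout=5s \
  --grpc-storage.tls.enabled=true \
  --grpc-storage.tls.ca=/etc/ssl/certs/storage-ca.pem \
  --multi-tenancy.enabled=true \
  --multi-tenancy.header=x-tenant
```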
ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | | --cassandra-archive.disable-compression | false | | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | | --cassandra-archive.enabled | false | | Enable extra storage | Enable extra storage | | --cassandra-archive.keyspace | nan | | The Cassandra keyspace for Jaeger data | The Cassandra keyspace for Jaeger data | | --cassandra-archive.local-dc | nan | | The name of the Cassandra local data center for DC Aware host selection | The name of the Cassandra local data center for DC Aware host selection | | --cassandra-archive.max-retry-attempts | 0 | | The number of attempts when reading from Cassandra | The number of attempts when reading from Cassandra | | --cassandra-archive.password | nan | | Password for password authentication for Cassandra | Password for password authentication for Cassandra | | --cassandra-archive.port | 0 | | The port for cassandra | The port for cassandra | | --cassandra-archive.proto-version | 0 | | The Cassandra protocol version | The Cassandra protocol version | | --cassandra-archive.reconnect-interval | 0s | | Reconnect interval to retry connecting to downed hosts | Reconnect interval to retry connecting to downed hosts | | --cassandra-archive.servers | nan | | The comma-separated list of Cassandra servers | The comma-separated list of Cassandra servers | | --cassandra-archive.socket-keep-alive | 0s | | Cassandra's keepalive period to use, enabled if > 0 | Cassandra's keepalive period to use, enabled if > 0 | | --cassandra-archive.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. 
A Timeout of zero means no timeout | |" }, { "data": "| nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --cassandra-archive.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --cassandra-archive.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --cassandra-archive.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --cassandra-archive.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --cassandra-archive.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --cassandra-archive.username | nan | | Username for password authentication for Cassandra | Username for password authentication for Cassandra | | --cassandra.connect-timeout | 0s | | Timeout used for connections to Cassandra Servers | Timeout used for connections to Cassandra Servers | | --cassandra.connections-per-host | 2 | | The number of Cassandra connections from a single backend instance | The number of Cassandra connections from a single backend instance | | --cassandra.consistency | nan | | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | | --cassandra.disable-compression | false | | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | | --cassandra.index.logs | true | | Controls log field indexing. Set to false to disable. | Controls log field indexing. Set to false to disable. | | --cassandra.index.process-tags | true | | Controls process tag indexing. Set to false to disable. | Controls process tag indexing. Set to false to disable. | | --cassandra.index.tag-blacklist | nan | | The comma-separated list of span tags to blacklist from being indexed. All other tags will be indexed. Mutually exclusive with the whitelist option. | The comma-separated list of span tags to blacklist from being indexed. All other tags will be indexed. Mutually exclusive with the whitelist option. | | --cassandra.index.tag-whitelist | nan | | The comma-separated list of span tags to whitelist for being indexed. All other tags will not be indexed. Mutually exclusive with the blacklist option. | The comma-separated list of span tags to whitelist for being indexed. 
All other tags will not be indexed. Mutually exclusive with the blacklist option. | | --cassandra.index.tags | true | | Controls tag indexing. Set to false to disable. | Controls tag indexing. Set to false to disable. | |" }, { "data": "| jaegerv1test | | The Cassandra keyspace for Jaeger data | The Cassandra keyspace for Jaeger data | | --cassandra.local-dc | nan | | The name of the Cassandra local data center for DC Aware host selection | The name of the Cassandra local data center for DC Aware host selection | | --cassandra.max-retry-attempts | 3 | | The number of attempts when reading from Cassandra | The number of attempts when reading from Cassandra | | --cassandra.password | nan | | Password for password authentication for Cassandra | Password for password authentication for Cassandra | | --cassandra.port | 0 | | The port for cassandra | The port for cassandra | | --cassandra.proto-version | 4 | | The Cassandra protocol version | The Cassandra protocol version | | --cassandra.reconnect-interval | 1m0s | | Reconnect interval to retry connecting to downed hosts | Reconnect interval to retry connecting to downed hosts | | --cassandra.servers | 127.0.0.1 | | The comma-separated list of Cassandra servers | The comma-separated list of Cassandra servers | | --cassandra.socket-keep-alive | 0s | | Cassandra's keepalive period to use, enabled if > 0 | Cassandra's keepalive period to use, enabled if > 0 | | --cassandra.span-store-write-cache-ttl | 12h0m0s | | The duration to wait before rewriting an existing service or operation name | The duration to wait before rewriting an existing service or operation name | | --cassandra.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. A Timeout of zero means no timeout | | --cassandra.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --cassandra.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --cassandra.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --cassandra.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --cassandra.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --cassandra.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --cassandra.username | nan | | Username for password authentication for Cassandra | Username for password authentication for Cassandra | | --collector.enable-span-size-metrics | false | | Enables metrics based on processed span size, which are more expensive to calculate. | Enables metrics based on processed span size, which are more expensive to calculate. | | --collector.grpc-server.host-port | :14250 | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | The host:port (e.g. 
127.0.0.1:12345 or :12345) of the collector's gRPC server | | --collector.grpc-server.max-connection-age | 0s | | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently." }, { "data": "https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-connection-age-grace | 0s | | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-message-size | 4194304 | | The maximum receivable message size for the collector's gRPC server | The maximum receivable message size for the collector's gRPC server | | --collector.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.grpc.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.grpc.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.grpc.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.grpc.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.grpc.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.http-server.host-port | :14268 | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | The host:port (e.g. 
127.0.0.1:12345 or :12345) of the collector's HTTP server | | --collector.http-server.idle-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http-server.read-header-timeout | 2s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http-server.read-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | |" }, { "data": "| nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.num-workers | 50 | | The number of workers pulling items from the queue | The number of workers pulling items from the queue | | --collector.otlp.enabled | true | | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports | | --collector.otlp.grpc.host-port | nan | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | | --collector.otlp.grpc.max-connection-age | 0s | | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.otlp.grpc.max-connection-age-grace | 0s | | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The additive period after MaxConnectionAge after which the connection will be forcibly closed. 
See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.otlp.grpc.max-message-size | 4194304 | | The maximum receivable message size for the collector's gRPC server | The maximum receivable message size for the collector's gRPC server | | --collector.otlp.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.otlp.grpc.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.otlp.grpc.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.otlp.grpc.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.otlp.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.otlp.grpc.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.grpc.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.grpc.tls.reload-interval | 0s | | The duration after which the certificate will be reloaded (0s means will not be reloaded) | The duration after which the certificate will be reloaded (0s means will not be reloaded) | | --collector.otlp.http.cors.allowed-headers | nan | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | | --collector.otlp.http.cors.allowed-origins | nan | | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin | | --collector.otlp.http.host-port | nan | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | The host:port (e.g. 
127.0.0.1:12345 or :12345) of the collector's HTTP server | | --collector.otlp.http.idle-timeout | 0s | | See https://pkg.go.dev/net/http#Server |" }, { "data": "https://pkg.go.dev/net/http#Server | | --collector.otlp.http.read-header-timeout | 2s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.otlp.http.read-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.otlp.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.otlp.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.otlp.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.otlp.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.otlp.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.otlp.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.http.tls.reload-interval | 0s | | The duration after which the certificate will be reloaded (0s means will not be reloaded) | The duration after which the certificate will be reloaded (0s means will not be reloaded) | | --collector.queue-size | 2000 | | The queue size of the collector | The queue size of the collector | | --collector.queue-size-memory | 0 | | (experimental) The max memory size in MiB to use for the dynamic queue. | (experimental) The max memory size in MiB to use for the dynamic queue. | | --collector.tags | nan | | One or more tags to be added to the Process tags of all spans passing through this collector. Ex: key1=value1,key2=${envVar:defaultValue} | One or more tags to be added to the Process tags of all spans passing through this collector. Ex: key1=value1,key2=${envVar:defaultValue} | | --collector.zipkin.cors.allowed-headers | nan | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | | --collector.zipkin.cors.allowed-origins | nan | | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin | | --collector.zipkin.host-port | nan | | The host:port (e.g. 
127.0.0.1:9411 or :9411) of the collector's Zipkin server (disabled by default) | The host:port (e.g. 127.0.0.1:9411 or :9411) of the collector's Zipkin server (disabled by default) | | --collector.zipkin.keep-alive | true | | KeepAlive configures allow Keep-Alive for Zipkin HTTP server (enabled by default) | KeepAlive configures allow Keep-Alive for Zipkin HTTP server (enabled by default) | | --collector.zipkin.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.zipkin.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | |" }, { "data": "| nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.zipkin.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.zipkin.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.zipkin.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.zipkin.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --config-file | nan | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. | | --downsampling.hashsalt | nan | | Salt used when hashing trace id for downsampling. | Salt used when hashing trace id for downsampling. | | --downsampling.ratio | 1 | | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. | | --help | false | | help for jaeger-collector | help for jaeger-collector | | --log-level | info | | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap | Minimal allowed log Level. 
For more levels see https://github.com/uber-go/zap | | --metrics-backend | prometheus | | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) | | --metrics-http-route | /metrics | | Defines the route of HTTP endpoint for metrics backends that support scraping | Defines the route of HTTP endpoint for metrics backends that support scraping | | --multi-tenancy.enabled | false | | Enable tenancy header when receiving or querying | Enable tenancy header when receiving or querying | | --multi-tenancy.header | x-tenant | | HTTP header carrying tenant | HTTP header carrying tenant | | --multi-tenancy.tenants | nan | | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) | | --sampling.aggregation-buckets | 10 | | Amount of historical data to keep in memory. | Amount of historical data to keep in memory. | | --sampling.buckets-for-calculation | 1 | | This determines how much of the previous data is used in calculating the weighted QPS, ie. if BucketsForCalculation is 1, only the most recent data will be used in calculating the weighted QPS. | This determines how much of the previous data is used in calculating the weighted QPS, ie. if BucketsForCalculation is 1, only the most recent data will be used in calculating the weighted QPS. | | --sampling.calculation-interval | 1m0s | | How often new sampling probabilities are" }, { "data": "Recommended to be greater than the polling interval of your clients. | How often new sampling probabilities are calculated. Recommended to be greater than the polling interval of your clients. | | --sampling.delay | 2m0s | | Determines how far back the most recent state is. Use this if you want to add some buffer time for the aggregation to finish. | Determines how far back the most recent state is. Use this if you want to add some buffer time for the aggregation to finish. | | --sampling.delta-tolerance | 0.3 | | The acceptable amount of deviation between the observed samples-per-second and the desired (target) samples-per-second, expressed as a ratio. | The acceptable amount of deviation between the observed samples-per-second and the desired (target) samples-per-second, expressed as a ratio. | | --sampling.follower-lease-refresh-interval | 1m0s | | The duration to sleep if this processor is a follower. | The duration to sleep if this processor is a follower. | | --sampling.initial-sampling-probability | 0.001 | | The initial sampling probability for all new operations. | The initial sampling probability for all new operations. | | --sampling.leader-lease-refresh-interval | 5s | | The duration to sleep if this processor is elected leader before attempting to renew the lease on the leader lock. This should be less than follower-lease-refresh-interval to reduce lock thrashing. | The duration to sleep if this processor is elected leader before attempting to renew the lease on the leader lock. This should be less than follower-lease-refresh-interval to reduce lock thrashing. 
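To make the adaptive-sampling flags above easier to read in context, here is a minimal sketch of a collector invocation that tunes them. The storage selection via the SPAN_STORAGE_TYPE environment variable, the Cassandra endpoints, the keyspace name, and the chosen values are assumptions for illustration only; how adaptive sampling is enabled and which sampling store it uses depends on your deployment, so consult the sampling documentation for the authoritative setup.

```sh
# Illustrative sketch only: tune adaptive sampling on jaeger-collector using the
# flags listed above. SPAN_STORAGE_TYPE, the Cassandra servers and the keyspace
# are assumed values for this example, not defaults of these flags.
SPAN_STORAGE_TYPE=cassandra \
jaeger-collector \
  --cassandra.servers=cassandra-1,cassandra-2 \
  --cassandra.keyspace=jaeger_v1_dc1 \
  --sampling.initial-sampling-probability=0.001 \
  --sampling.target-samples-per-second=2 \
  --sampling.calculation-interval=1m \
  --sampling.delta-tolerance=0.3 \
  --log-level=debug
```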
| | --sampling.min-samples-per-second | 0.016666666666666666 | | The minimum number of traces that are sampled per second. | The minimum number of traces that are sampled per second. | | --sampling.min-sampling-probability | 1e-05 | | The minimum sampling probability for all operations. | The minimum sampling probability for all operations. | | --sampling.target-samples-per-second | 1 | | The global target rate of samples per operation. | The global target rate of samples per operation. | | --span-storage.type | nan | | (deprecated) please use SPANSTORAGETYPE environment variable. Run this binary with the 'env' command for help. | (deprecated) please use SPANSTORAGETYPE environment variable. Run this binary with the 'env' command for help. | | Flag | Default Value | |:-|:-| | --admin.http.host-port | :14269 | | The host:port (e.g. 127.0.0.1:14269 or :14269) for the admin server, including health check, /metrics, etc. | The host:port (e.g. 127.0.0.1:14269 or :14269) for the admin server, including health check, /metrics, etc. | | --admin.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --admin.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --admin.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --admin.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --admin.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --admin.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2," }, { "data": "| Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --admin.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --cassandra-archive.connect-timeout | 0s | | Timeout used for connections to Cassandra Servers | Timeout used for connections to Cassandra Servers | | --cassandra-archive.connections-per-host | 0 | | The number of Cassandra connections from a single backend instance | The number of Cassandra connections from a single backend instance | | --cassandra-archive.consistency | nan | | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | | --cassandra-archive.disable-compression | false | | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. 
This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | | --cassandra-archive.enabled | false | | Enable extra storage | Enable extra storage | | --cassandra-archive.keyspace | nan | | The Cassandra keyspace for Jaeger data | The Cassandra keyspace for Jaeger data | | --cassandra-archive.local-dc | nan | | The name of the Cassandra local data center for DC Aware host selection | The name of the Cassandra local data center for DC Aware host selection | | --cassandra-archive.max-retry-attempts | 0 | | The number of attempts when reading from Cassandra | The number of attempts when reading from Cassandra | | --cassandra-archive.password | nan | | Password for password authentication for Cassandra | Password for password authentication for Cassandra | | --cassandra-archive.port | 0 | | The port for cassandra | The port for cassandra | | --cassandra-archive.proto-version | 0 | | The Cassandra protocol version | The Cassandra protocol version | | --cassandra-archive.reconnect-interval | 0s | | Reconnect interval to retry connecting to downed hosts | Reconnect interval to retry connecting to downed hosts | | --cassandra-archive.servers | nan | | The comma-separated list of Cassandra servers | The comma-separated list of Cassandra servers | | --cassandra-archive.socket-keep-alive | 0s | | Cassandra's keepalive period to use, enabled if > 0 | Cassandra's keepalive period to use, enabled if > 0 | | --cassandra-archive.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. 
A Timeout of zero means no timeout | | --cassandra-archive.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --cassandra-archive.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --cassandra-archive.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | |" }, { "data": "| nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --cassandra-archive.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --cassandra-archive.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --cassandra-archive.username | nan | | Username for password authentication for Cassandra | Username for password authentication for Cassandra | | --cassandra.connect-timeout | 0s | | Timeout used for connections to Cassandra Servers | Timeout used for connections to Cassandra Servers | | --cassandra.connections-per-host | 2 | | The number of Cassandra connections from a single backend instance | The number of Cassandra connections from a single backend instance | | --cassandra.consistency | nan | | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | | --cassandra.disable-compression | false | | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | | --cassandra.index.logs | true | | Controls log field indexing. Set to false to disable. | Controls log field indexing. Set to false to disable. | | --cassandra.index.process-tags | true | | Controls process tag indexing. Set to false to disable. | Controls process tag indexing. Set to false to disable. | | --cassandra.index.tag-blacklist | nan | | The comma-separated list of span tags to blacklist from being indexed. All other tags will be indexed. Mutually exclusive with the whitelist option. | The comma-separated list of span tags to blacklist from being indexed. All other tags will be indexed. Mutually exclusive with the whitelist option. | | --cassandra.index.tag-whitelist | nan | | The comma-separated list of span tags to whitelist for being indexed. All other tags will not be indexed. Mutually exclusive with the blacklist option. | The comma-separated list of span tags to whitelist for being indexed. 
All other tags will not be indexed. Mutually exclusive with the blacklist option. | | --cassandra.index.tags | true | | Controls tag indexing. Set to false to disable. | Controls tag indexing. Set to false to disable. | | --cassandra.keyspace | jaegerv1test | | The Cassandra keyspace for Jaeger data | The Cassandra keyspace for Jaeger data | | --cassandra.local-dc | nan | | The name of the Cassandra local data center for DC Aware host selection | The name of the Cassandra local data center for DC Aware host selection | | --cassandra.max-retry-attempts | 3 | | The number of attempts when reading from Cassandra | The number of attempts when reading from Cassandra | | --cassandra.password | nan | | Password for password authentication for Cassandra | Password for password authentication for Cassandra | | --cassandra.port | 0 | | The port for cassandra | The port for cassandra | | --cassandra.proto-version | 4 | | The Cassandra protocol version | The Cassandra protocol version | |" }, { "data": "| 1m0s | | Reconnect interval to retry connecting to downed hosts | Reconnect interval to retry connecting to downed hosts | | --cassandra.servers | 127.0.0.1 | | The comma-separated list of Cassandra servers | The comma-separated list of Cassandra servers | | --cassandra.socket-keep-alive | 0s | | Cassandra's keepalive period to use, enabled if > 0 | Cassandra's keepalive period to use, enabled if > 0 | | --cassandra.span-store-write-cache-ttl | 12h0m0s | | The duration to wait before rewriting an existing service or operation name | The duration to wait before rewriting an existing service or operation name | | --cassandra.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. A Timeout of zero means no timeout | | --cassandra.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --cassandra.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --cassandra.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --cassandra.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --cassandra.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --cassandra.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --cassandra.username | nan | | Username for password authentication for Cassandra | Username for password authentication for Cassandra | | --collector.enable-span-size-metrics | false | | Enables metrics based on processed span size, which are more expensive to calculate. | Enables metrics based on processed span size, which are more expensive to calculate. | | --collector.grpc-server.host-port | :14250 | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | The host:port (e.g. 
127.0.0.1:12345 or :12345) of the collector's gRPC server | | --collector.grpc-server.max-connection-age | 0s | | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-connection-age-grace | 0s | | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.grpc-server.max-message-size | 4194304 | | The maximum receivable message size for the collector's gRPC server | The maximum receivable message size for the collector's gRPC server | | --collector.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | |" }, { "data": "| nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.grpc.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.grpc.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.grpc.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.grpc.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.http-server.host-port | :14268 | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | The host:port (e.g. 
127.0.0.1:12345 or :12345) of the collector's HTTP server | | --collector.http-server.idle-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http-server.read-header-timeout | 2s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http-server.read-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.num-workers | 50 | | The number of workers pulling items from the queue | The number of workers pulling items from the queue | | --collector.otlp.enabled | true | | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports | Enables OpenTelemetry OTLP receiver on dedicated HTTP and gRPC ports | | --collector.otlp.grpc.host-port | nan | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's gRPC server | | --collector.otlp.grpc.max-connection-age | 0s | | The maximum amount of time a connection may" }, { "data": "Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The maximum amount of time a connection may exist. Set this value to a few seconds or minutes on highly elastic environments, so that clients discover new collector nodes frequently. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.otlp.grpc.max-connection-age-grace | 0s | | The additive period after MaxConnectionAge after which the connection will be forcibly closed. See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | The additive period after MaxConnectionAge after which the connection will be forcibly closed. 
See https://pkg.go.dev/google.golang.org/grpc/keepalive#ServerParameters | | --collector.otlp.grpc.max-message-size | 4194304 | | The maximum receivable message size for the collector's gRPC server | The maximum receivable message size for the collector's gRPC server | | --collector.otlp.grpc.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.otlp.grpc.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.otlp.grpc.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.otlp.grpc.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.otlp.grpc.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.otlp.grpc.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.grpc.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.grpc.tls.reload-interval | 0s | | The duration after which the certificate will be reloaded (0s means will not be reloaded) | The duration after which the certificate will be reloaded (0s means will not be reloaded) | | --collector.otlp.http.cors.allowed-headers | nan | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | | --collector.otlp.http.cors.allowed-origins | nan | | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin | | --collector.otlp.http.host-port | nan | | The host:port (e.g. 127.0.0.1:12345 or :12345) of the collector's HTTP server | The host:port (e.g. 
127.0.0.1:12345 or :12345) of the collector's HTTP server | | --collector.otlp.http.idle-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.otlp.http.read-header-timeout | 2s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.otlp.http.read-timeout | 0s | | See https://pkg.go.dev/net/http#Server | See https://pkg.go.dev/net/http#Server | | --collector.otlp.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.otlp.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | |" }, { "data": "| nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.otlp.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.otlp.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.otlp.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.otlp.http.tls.reload-interval | 0s | | The duration after which the certificate will be reloaded (0s means will not be reloaded) | The duration after which the certificate will be reloaded (0s means will not be reloaded) | | --collector.queue-size | 2000 | | The queue size of the collector | The queue size of the collector | | --collector.queue-size-memory | 0 | | (experimental) The max memory size in MiB to use for the dynamic queue. | (experimental) The max memory size in MiB to use for the dynamic queue. | | --collector.tags | nan | | One or more tags to be added to the Process tags of all spans passing through this collector. Ex: key1=value1,key2=${envVar:defaultValue} | One or more tags to be added to the Process tags of all spans passing through this collector. Ex: key1=value1,key2=${envVar:defaultValue} | | --collector.zipkin.cors.allowed-headers | nan | | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | Comma-separated CORS allowed headers. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers | | --collector.zipkin.cors.allowed-origins | nan | | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin | Comma-separated CORS allowed origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin | | --collector.zipkin.host-port | nan | | The host:port (e.g. 
127.0.0.1:9411 or :9411) of the collector's Zipkin server (disabled by default) | The host:port (e.g. 127.0.0.1:9411 or :9411) of the collector's Zipkin server (disabled by default) | | --collector.zipkin.keep-alive | true | | KeepAlive configures allow Keep-Alive for Zipkin HTTP server (enabled by default) | KeepAlive configures allow Keep-Alive for Zipkin HTTP server (enabled by default) | | --collector.zipkin.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --collector.zipkin.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --collector.zipkin.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --collector.zipkin.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --collector.zipkin.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --collector.zipkin.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --collector.zipkin.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2," }, { "data": "| | --config-file | nan | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. | | --downsampling.hashsalt | nan | | Salt used when hashing trace id for downsampling. | Salt used when hashing trace id for downsampling. | | --downsampling.ratio | 1 | | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. | | --help | false | | help for jaeger-collector | help for jaeger-collector | | --log-level | info | | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap | Minimal allowed log Level. 
For more levels see https://github.com/uber-go/zap | | --metrics-backend | prometheus | | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) | | --metrics-http-route | /metrics | | Defines the route of HTTP endpoint for metrics backends that support scraping | Defines the route of HTTP endpoint for metrics backends that support scraping | | --multi-tenancy.enabled | false | | Enable tenancy header when receiving or querying | Enable tenancy header when receiving or querying | | --multi-tenancy.header | x-tenant | | HTTP header carrying tenant | HTTP header carrying tenant | | --multi-tenancy.tenants | nan | | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) | | --sampling.strategies-file | nan | | The path for the sampling strategies file in JSON format. See sampling documentation to see format of the file | The path for the sampling strategies file in JSON format. See sampling documentation to see format of the file | | --sampling.strategies-reload-interval | 0s | | Reload interval to check and reload sampling strategies file. Zero value means no reloading | Reload interval to check and reload sampling strategies file. Zero value means no reloading | | --sampling.strategies.bugfix-5270 | false | | Include default operation level strategies for Ratesampling type service level strategy. Cf. https://github.com/jaegertracing/jaeger/issues/5270 | Include default operation level strategies for Ratesampling type service level strategy. Cf. https://github.com/jaegertracing/jaeger/issues/5270 | | --span-storage.type | nan | | (deprecated) please use SPANSTORAGETYPE environment variable. Run this binary with the 'env' command for help. | (deprecated) please use SPANSTORAGETYPE environment variable. Run this binary with the 'env' command for help. | Jaeger ingester consumes spans from a particular Kafka topic and writes them to a configured storage. jaeger-ingester can be used with these storage backends: | Flag | Default Value | |:|:| | --admin.http.host-port | :14270 | | The host:port (e.g. 127.0.0.1:14270 or :14270) for the admin server, including health check, /metrics, etc. | The host:port (e.g. 127.0.0.1:14270 or :14270) for the admin server, including health check, /metrics, etc. | |" }, { "data": "| nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --admin.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). 
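Before the ingester flags continue below, a minimal sketch of the collector's file-based sampling flags listed above (--sampling.strategies-file and --sampling.strategies-reload-interval). The JSON shape follows the sampling documentation referenced in the flag description; the service name, probabilities, and file path here are made-up examples.

```sh
# Illustrative sketch: file-based sampling strategies for jaeger-collector.
# The strategies file is JSON, as the flag description above states; the
# contents below are example values only.
cat > /tmp/strategies.json <<'EOF'
{
  "service_strategies": [
    { "service": "frontend", "type": "probabilistic", "param": 0.5 }
  ],
  "default_strategy": { "type": "probabilistic", "param": 0.01 }
}
EOF

jaeger-collector \
  --sampling.strategies-file=/tmp/strategies.json \
  --sampling.strategies-reload-interval=30s
```

Setting a non-zero reload interval lets the collector pick up edits to the strategies file without a restart, per the flag description above.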
| | --admin.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --admin.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --admin.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --admin.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --admin.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --cassandra-archive.connect-timeout | 0s | | Timeout used for connections to Cassandra Servers | Timeout used for connections to Cassandra Servers | | --cassandra-archive.connections-per-host | 0 | | The number of Cassandra connections from a single backend instance | The number of Cassandra connections from a single backend instance | | --cassandra-archive.consistency | nan | | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | | --cassandra-archive.disable-compression | false | | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. 
This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | | --cassandra-archive.enabled | false | | Enable extra storage | Enable extra storage | | --cassandra-archive.keyspace | nan | | The Cassandra keyspace for Jaeger data | The Cassandra keyspace for Jaeger data | | --cassandra-archive.local-dc | nan | | The name of the Cassandra local data center for DC Aware host selection | The name of the Cassandra local data center for DC Aware host selection | | --cassandra-archive.max-retry-attempts | 0 | | The number of attempts when reading from Cassandra | The number of attempts when reading from Cassandra | | --cassandra-archive.password | nan | | Password for password authentication for Cassandra | Password for password authentication for Cassandra | | --cassandra-archive.port | 0 | | The port for cassandra | The port for cassandra | | --cassandra-archive.proto-version | 0 | | The Cassandra protocol version | The Cassandra protocol version | | --cassandra-archive.reconnect-interval | 0s | | Reconnect interval to retry connecting to downed hosts | Reconnect interval to retry connecting to downed hosts | | --cassandra-archive.servers | nan | | The comma-separated list of Cassandra servers | The comma-separated list of Cassandra servers | | --cassandra-archive.socket-keep-alive | 0s | | Cassandra's keepalive period to use, enabled if > 0 | Cassandra's keepalive period to use, enabled if > 0 | | --cassandra-archive.timeout | 0s | | Timeout used for" }, { "data": "A Timeout of zero means no timeout | Timeout used for queries. A Timeout of zero means no timeout | | --cassandra-archive.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --cassandra-archive.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --cassandra-archive.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --cassandra-archive.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --cassandra-archive.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --cassandra-archive.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --cassandra-archive.username | nan | | Username for password authentication for Cassandra | Username for password authentication for Cassandra | | --cassandra.connect-timeout | 0s | | Timeout used for connections to Cassandra Servers | Timeout used for connections to Cassandra Servers | | --cassandra.connections-per-host | 2 | | The number of Cassandra connections from a single backend instance | The number of Cassandra connections from a single backend instance | | --cassandra.consistency | nan | | The Cassandra consistency level, e.g. 
ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | | --cassandra.disable-compression | false | | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | | --cassandra.index.logs | true | | Controls log field indexing. Set to false to disable. | Controls log field indexing. Set to false to disable. | | --cassandra.index.process-tags | true | | Controls process tag indexing. Set to false to disable. | Controls process tag indexing. Set to false to disable. | | --cassandra.index.tag-blacklist | nan | | The comma-separated list of span tags to blacklist from being indexed. All other tags will be indexed. Mutually exclusive with the whitelist option. | The comma-separated list of span tags to blacklist from being indexed. All other tags will be indexed. Mutually exclusive with the whitelist option. | | --cassandra.index.tag-whitelist | nan | | The comma-separated list of span tags to whitelist for being indexed. All other tags will not be indexed. Mutually exclusive with the blacklist option. | The comma-separated list of span tags to whitelist for being indexed. All other tags will not be indexed. Mutually exclusive with the blacklist option. | | --cassandra.index.tags | true | | Controls tag" }, { "data": "Set to false to disable. | Controls tag indexing. Set to false to disable. | | --cassandra.keyspace | jaegerv1test | | The Cassandra keyspace for Jaeger data | The Cassandra keyspace for Jaeger data | | --cassandra.local-dc | nan | | The name of the Cassandra local data center for DC Aware host selection | The name of the Cassandra local data center for DC Aware host selection | | --cassandra.max-retry-attempts | 3 | | The number of attempts when reading from Cassandra | The number of attempts when reading from Cassandra | | --cassandra.password | nan | | Password for password authentication for Cassandra | Password for password authentication for Cassandra | | --cassandra.port | 0 | | The port for cassandra | The port for cassandra | | --cassandra.proto-version | 4 | | The Cassandra protocol version | The Cassandra protocol version | | --cassandra.reconnect-interval | 1m0s | | Reconnect interval to retry connecting to downed hosts | Reconnect interval to retry connecting to downed hosts | | --cassandra.servers | 127.0.0.1 | | The comma-separated list of Cassandra servers | The comma-separated list of Cassandra servers | | --cassandra.socket-keep-alive | 0s | | Cassandra's keepalive period to use, enabled if > 0 | Cassandra's keepalive period to use, enabled if > 0 | | --cassandra.span-store-write-cache-ttl | 12h0m0s | | The duration to wait before rewriting an existing service or operation name | The duration to wait before rewriting an existing service or operation name | | --cassandra.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. 
A Timeout of zero means no timeout | | --cassandra.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --cassandra.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --cassandra.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --cassandra.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --cassandra.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --cassandra.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --cassandra.username | nan | | Username for password authentication for Cassandra | Username for password authentication for Cassandra | | --config-file | nan | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. | | --downsampling.hashsalt | nan | | Salt used when hashing trace id for downsampling. | Salt used when hashing trace id for downsampling. | | --downsampling.ratio | 1 | | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio =" }, { "data": "means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. | | --help | false | | help for jaeger-ingester | help for jaeger-ingester | | --ingester.deadlockInterval | 0s | | Interval to check for deadlocks. If no messages gets processed in given time, ingester app will exit. Value of 0 disables deadlock check. | Interval to check for deadlocks. If no messages gets processed in given time, ingester app will exit. Value of 0 disables deadlock check. | | --ingester.parallelism | 1000 | | The number of messages to process in parallel | The number of messages to process in parallel | | --kafka.consumer.authentication | none | | Authentication type used to authenticate with kafka cluster. e.g. none, kerberos, tls, plaintext | Authentication type used to authenticate with kafka cluster. e.g. none, kerberos, tls, plaintext | | --kafka.consumer.brokers | 127.0.0.1:9092 | | The comma-separated list of kafka brokers. i.e. '127.0.0.1:9092,0.0.0:1234' | The comma-separated list of kafka brokers. i.e. 
'127.0.0.1:9092,0.0.0:1234' | | --kafka.consumer.client-id | jaeger-ingester | | The Consumer Client ID that ingester will use | The Consumer Client ID that ingester will use | | --kafka.consumer.encoding | protobuf | | The encoding of spans (\"json\", \"protobuf\", \"zipkin-thrift\") consumed from kafka | The encoding of spans (\"json\", \"protobuf\", \"zipkin-thrift\") consumed from kafka | | --kafka.consumer.fetch-max-message-bytes | 1048576 | | The maximum number of message bytes to fetch from the broker in a single request. So you must be sure this is at least as large as your largest message. | The maximum number of message bytes to fetch from the broker in a single request. So you must be sure this is at least as large as your largest message. | | --kafka.consumer.group-id | jaeger-ingester | | The Consumer Group that ingester will be consuming on behalf of | The Consumer Group that ingester will be consuming on behalf of | | --kafka.consumer.kerberos.config-file | /etc/krb5.conf | | Path to Kerberos configuration. i.e /etc/krb5.conf | Path to Kerberos configuration. i.e /etc/krb5.conf | | --kafka.consumer.kerberos.disable-fast-negotiation | false | | Disable FAST negotiation when not supported by KDC's like Active Directory. See https://github.com/jcmturner/gokrb5/blob/master/USAGE.md#active-directory-kdc-and-fast-negotiation. | Disable FAST negotiation when not supported by KDC's like Active Directory. See https://github.com/jcmturner/gokrb5/blob/master/USAGE.md#active-directory-kdc-and-fast-negotiation. | | --kafka.consumer.kerberos.keytab-file | /etc/security/kafka.keytab | | Path to keytab file. i.e /etc/security/kafka.keytab | Path to keytab file. i.e /etc/security/kafka.keytab | | --kafka.consumer.kerberos.password | nan | | The Kerberos password used for authenticate with KDC | The Kerberos password used for authenticate with KDC | | --kafka.consumer.kerberos.realm | nan | | Kerberos realm | Kerberos realm | | --kafka.consumer.kerberos.service-name | kafka | | Kerberos service name | Kerberos service name | | --kafka.consumer.kerberos.use-keytab | false | | Use of keytab instead of password, if this is true, keytab file will be used instead of password | Use of keytab instead of password, if this is true, keytab file will be used instead of password | | --kafka.consumer.kerberos.username | nan | | The Kerberos username used for authenticate with KDC | The Kerberos username used for authenticate with KDC | | --kafka.consumer.plaintext.mechanism | PLAIN | | The plaintext Mechanism for SASL/PLAIN authentication, e.g. 'SCRAM-SHA-256' or 'SCRAM-SHA-512' or 'PLAIN' | The plaintext Mechanism for SASL/PLAIN authentication, e.g. 'SCRAM-SHA-256' or 'SCRAM-SHA-512' or 'PLAIN' | | --kafka.consumer.plaintext.password | nan | | The plaintext Password for SASL/PLAIN authentication | The plaintext Password for SASL/PLAIN authentication | |" }, { "data": "| nan | | The plaintext Username for SASL/PLAIN authentication | The plaintext Username for SASL/PLAIN authentication | | --kafka.consumer.protocol-version | nan | | Kafka protocol version - must be supported by kafka server | Kafka protocol version - must be supported by kafka server | | --kafka.consumer.rack-id | nan | | Rack identifier for this client. This can be any string value which indicates where this client is located. It corresponds with the broker config `broker.rack` | Rack identifier for this client. This can be any string value which indicates where this client is located. 
It corresponds with the broker config `broker.rack` | | --kafka.consumer.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --kafka.consumer.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --kafka.consumer.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --kafka.consumer.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --kafka.consumer.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --kafka.consumer.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --kafka.consumer.topic | jaeger-spans | | The name of the kafka topic to consume from | The name of the kafka topic to consume from | | --log-level | info | | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap | | --metrics-backend | prometheus | | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) | | --metrics-http-route | /metrics | | Defines the route of HTTP endpoint for metrics backends that support scraping | Defines the route of HTTP endpoint for metrics backends that support scraping | | --span-storage.type | nan | | (deprecated) please use SPANSTORAGETYPE environment variable. Run this binary with the 'env' command for help. | (deprecated) please use SPANSTORAGETYPE environment variable. Run this binary with the 'env' command for help. | | Flag | Default Value | |:-|:-| | --admin.http.host-port | :14270 | | The host:port (e.g. 127.0.0.1:14270 or :14270) for the admin server, including health check, /metrics, etc. | The host:port (e.g. 127.0.0.1:14270 or :14270) for the admin server, including health check, /metrics, etc. | | --admin.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --admin.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package" }, { "data": "(https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). 
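A minimal sketch tying together the Kafka consumer flags listed above with a storage backend for jaeger-ingester. The broker addresses, keyspace, and the use of the SPAN_STORAGE_TYPE environment variable are assumptions for the example; the topic, group ID, and encoding shown match the defaults in the table above.

```sh
# Illustrative sketch: jaeger-ingester consuming protobuf-encoded spans from
# Kafka and writing them to Cassandra. Broker list, Cassandra servers and
# keyspace are assumed values for this example.
SPAN_STORAGE_TYPE=cassandra \
jaeger-ingester \
  --kafka.consumer.brokers=kafka-1:9092,kafka-2:9092 \
  --kafka.consumer.topic=jaeger-spans \
  --kafka.consumer.group-id=jaeger-ingester \
  --kafka.consumer.encoding=protobuf \
  --cassandra.servers=cassandra-1,cassandra-2 \
  --cassandra.keyspace=jaeger_v1_dc1
```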
| | --admin.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --admin.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --admin.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --admin.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --admin.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --config-file | nan | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. | | --downsampling.hashsalt | nan | | Salt used when hashing trace id for downsampling. | Salt used when hashing trace id for downsampling. | | --downsampling.ratio | 1 | | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. | | --es-archive.adaptive-sampling.lookback | 72h0m0s | | How far back to look for the latest adaptive sampling probabilities | How far back to look for the latest adaptive sampling probabilities | | --es-archive.bulk.actions | 1000 | | The number of requests that can be enqueued before the bulk processor decides to commit | The number of requests that can be enqueued before the bulk processor decides to commit | | --es-archive.bulk.flush-interval | 200ms | | A time.Duration after which bulk requests are committed, regardless of other thresholds. Set to zero to disable. By default, this is disabled. | A time.Duration after which bulk requests are committed, regardless of other thresholds. Set to zero to disable. By default, this is disabled. | | --es-archive.bulk.size | 5000000 | | The number of bytes that the bulk requests can take up before the bulk processor decides to commit | The number of bytes that the bulk requests can take up before the bulk processor decides to commit | | --es-archive.bulk.workers | 1 | | The number of workers that are able to receive bulk requests and eventually commit them to Elasticsearch | The number of workers that are able to receive bulk requests and eventually commit them to Elasticsearch | | --es-archive.create-index-templates | true | | Create index templates at application startup. Set to false when templates are installed manually. | Create index templates at application startup. Set to false when templates are installed manually. | | --es-archive.enabled | false | | Enable extra storage | Enable extra storage | | --es-archive.index-date-separator | - | | Optional date separator of Jaeger indices. For example \".\" creates \"jaeger-span-2020.11.20\". 
| Optional date separator of Jaeger" }, { "data": "For example \".\" creates \"jaeger-span-2020.11.20\". | | --es-archive.index-prefix | nan | | Optional prefix of Jaeger indices. For example \"production\" creates \"production-jaeger-\". | Optional prefix of Jaeger indices. For example \"production\" creates \"production-jaeger-\". | | --es-archive.index-rollover-frequency-adaptive-sampling | day | | Rotates jaeger-sampling indices over the given period. For example \"day\" creates \"jaeger-sampling-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | Rotates jaeger-sampling indices over the given period. For example \"day\" creates \"jaeger-sampling-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | | --es-archive.index-rollover-frequency-services | day | | Rotates jaeger-service indices over the given period. For example \"day\" creates \"jaeger-service-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | Rotates jaeger-service indices over the given period. For example \"day\" creates \"jaeger-service-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | | --es-archive.index-rollover-frequency-spans | day | | Rotates jaeger-span indices over the given period. For example \"day\" creates \"jaeger-span-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | Rotates jaeger-span indices over the given period. For example \"day\" creates \"jaeger-span-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | | --es-archive.log-level | error | | The Elasticsearch client log-level. Valid levels: [debug, info, error] | The Elasticsearch client log-level. Valid levels: [debug, info, error] | | --es-archive.max-doc-count | 10000 | | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. | | --es-archive.num-replicas | 1 | | The number of replicas per index in Elasticsearch | The number of replicas per index in Elasticsearch | | --es-archive.num-shards | 5 | | The number of shards per index in Elasticsearch | The number of shards per index in Elasticsearch | | --es-archive.password | nan | | The password required by Elasticsearch | The password required by Elasticsearch | | --es-archive.password-file | nan | | Path to a file containing password. 
This file is watched for changes. | Path to a file containing password. This file is watched for changes. | | --es-archive.prioirity-dependencies-template | 0 | | Priority of jaeger-dependecies index template (ESv8 only) | Priority of jaeger-dependecies index template (ESv8 only) | | --es-archive.prioirity-service-template | 0 | | Priority of jaeger-service index template (ESv8 only) | Priority of jaeger-service index template (ESv8 only) | | --es-archive.prioirity-span-template | 0 | | Priority of jaeger-span index template (ESv8 only) | Priority of jaeger-span index template (ESv8 only) | | --es-archive.remote-read-clusters | nan | | Comma-separated list of Elasticsearch remote cluster names for cross-cluster querying.See Elasticsearch remote clusters and cross-cluster query api. | Comma-separated list of Elasticsearch remote cluster names for cross-cluster querying.See Elasticsearch remote clusters and cross-cluster query api. | | --es-archive.send-get-body-as | nan | | HTTP verb for requests that contain a body [GET, POST]. | HTTP verb for requests that contain a body [GET," }, { "data": "| | --es-archive.server-urls | http://127.0.0.1:9200 | | The comma-separated list of Elasticsearch servers, must be full url i.e. http://localhost:9200 | The comma-separated list of Elasticsearch servers, must be full url i.e. http://localhost:9200 | | --es-archive.sniffer | false | | The sniffer config for Elasticsearch; client uses sniffing process to find all nodes automatically, disable if not required | The sniffer config for Elasticsearch; client uses sniffing process to find all nodes automatically, disable if not required | | --es-archive.sniffer-tls-enabled | false | | Option to enable TLS when sniffing an Elasticsearch Cluster ; client uses sniffing process to find all nodes automatically, disabled by default | Option to enable TLS when sniffing an Elasticsearch Cluster ; client uses sniffing process to find all nodes automatically, disabled by default | | --es-archive.tags-as-fields.all | false | | (experimental) Store all span and process tags as object fields. If true .tags-as-fields.config-file and .tags-as-fields.include is ignored. Binary tags are always stored as nested objects. | (experimental) Store all span and process tags as object fields. If true .tags-as-fields.config-file and .tags-as-fields.include is ignored. Binary tags are always stored as nested objects. | | --es-archive.tags-as-fields.config-file | nan | | (experimental) Optional path to a file containing tag keys which will be stored as object fields. Each key should be on a separate line. Merged with .tags-as-fields.include | (experimental) Optional path to a file containing tag keys which will be stored as object fields. Each key should be on a separate line. Merged with .tags-as-fields.include | | --es-archive.tags-as-fields.dot-replacement | @ | | (experimental) The character used to replace dots (\".\") in tag keys stored as object fields. | (experimental) The character used to replace dots (\".\") in tag keys stored as object fields. | | --es-archive.tags-as-fields.include | nan | | (experimental) Comma delimited list of tag keys which will be stored as object fields. Merged with the contents of .tags-as-fields.config-file | (experimental) Comma delimited list of tag keys which will be stored as object fields. Merged with the contents of .tags-as-fields.config-file | | --es-archive.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. 
A Timeout of zero means no timeout | | --es-archive.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --es-archive.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --es-archive.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --es-archive.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --es-archive.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --es-archive.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --es-archive.token-file | nan | | Path to a file containing bearer" }, { "data": "This flag also loads CA if it is specified. | Path to a file containing bearer token. This flag also loads CA if it is specified. | | --es-archive.use-aliases | false | | Use read and write aliases for indices. Use this option with Elasticsearch rollover API. It requires an external component to create aliases before startup and then performing its management. Note that es.max-span-age will influence trace search window start times. | Use read and write aliases for indices. Use this option with Elasticsearch rollover API. It requires an external component to create aliases before startup and then performing its management. Note that es.max-span-age will influence trace search window start times. | | --es-archive.use-ilm | false | | (experimental) Option to enable ILM for jaeger span & service indices. Use this option with es-archive.use-aliases. It requires an external component to create aliases before startup and then performing its management. ILM policy must be manually created in ES before startup. Supported only for elasticsearch version 7+. | (experimental) Option to enable ILM for jaeger span & service indices. Use this option with es-archive.use-aliases. It requires an external component to create aliases before startup and then performing its management. ILM policy must be manually created in ES before startup. Supported only for elasticsearch version 7+. | | --es-archive.username | nan | | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. | | --es-archive.version | 0 | | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. 
| | --es.adaptive-sampling.lookback | 72h0m0s | | How far back to look for the latest adaptive sampling probabilities | How far back to look for the latest adaptive sampling probabilities | | --es.bulk.actions | 1000 | | The number of requests that can be enqueued before the bulk processor decides to commit | The number of requests that can be enqueued before the bulk processor decides to commit | | --es.bulk.flush-interval | 200ms | | A time.Duration after which bulk requests are committed, regardless of other thresholds. Set to zero to disable. By default, this is disabled. | A time.Duration after which bulk requests are committed, regardless of other thresholds. Set to zero to disable. By default, this is disabled. | | --es.bulk.size | 5000000 | | The number of bytes that the bulk requests can take up before the bulk processor decides to commit | The number of bytes that the bulk requests can take up before the bulk processor decides to commit | | --es.bulk.workers | 1 | | The number of workers that are able to receive bulk requests and eventually commit them to Elasticsearch | The number of workers that are able to receive bulk requests and eventually commit them to Elasticsearch | | --es.create-index-templates | true | | Create index templates at application startup. Set to false when templates are installed manually. | Create index templates at application startup. Set to false when templates are installed manually. | | --es.index-date-separator | - | | Optional date separator of Jaeger indices. For example \".\" creates \"jaeger-span-2020.11.20\". | Optional date separator of Jaeger indices. For example \".\" creates \"jaeger-span-2020.11.20\". | | --es.index-prefix | nan | | Optional prefix of Jaeger indices. For example \"production\" creates \"production-jaeger-\". | Optional prefix of Jaeger indices. For example \"production\" creates \"production-jaeger-\". | | --es.index-rollover-frequency-adaptive-sampling | day | | Rotates jaeger-sampling indices over the given" }, { "data": "For example \"day\" creates \"jaeger-sampling-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | Rotates jaeger-sampling indices over the given period. For example \"day\" creates \"jaeger-sampling-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | | --es.index-rollover-frequency-services | day | | Rotates jaeger-service indices over the given period. For example \"day\" creates \"jaeger-service-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | Rotates jaeger-service indices over the given period. For example \"day\" creates \"jaeger-service-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | | --es.index-rollover-frequency-spans | day | | Rotates jaeger-span indices over the given period. 
For example \"day\" creates \"jaeger-span-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | Rotates jaeger-span indices over the given period. For example \"day\" creates \"jaeger-span-yyyy-MM-dd\" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover | | --es.log-level | error | | The Elasticsearch client log-level. Valid levels: [debug, info, error] | The Elasticsearch client log-level. Valid levels: [debug, info, error] | | --es.max-doc-count | 10000 | | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. | | --es.max-span-age | 72h0m0s | | The maximum lookback for spans in Elasticsearch | The maximum lookback for spans in Elasticsearch | | --es.num-replicas | 1 | | The number of replicas per index in Elasticsearch | The number of replicas per index in Elasticsearch | | --es.num-shards | 5 | | The number of shards per index in Elasticsearch | The number of shards per index in Elasticsearch | | --es.password | nan | | The password required by Elasticsearch | The password required by Elasticsearch | | --es.password-file | nan | | Path to a file containing password. This file is watched for changes. | Path to a file containing password. This file is watched for changes. | | --es.prioirity-dependencies-template | 0 | | Priority of jaeger-dependecies index template (ESv8 only) | Priority of jaeger-dependecies index template (ESv8 only) | | --es.prioirity-service-template | 0 | | Priority of jaeger-service index template (ESv8 only) | Priority of jaeger-service index template (ESv8 only) | | --es.prioirity-span-template | 0 | | Priority of jaeger-span index template (ESv8 only) | Priority of jaeger-span index template (ESv8 only) | | --es.remote-read-clusters | nan | | Comma-separated list of Elasticsearch remote cluster names for cross-cluster querying.See Elasticsearch remote clusters and cross-cluster query api. | Comma-separated list of Elasticsearch remote cluster names for cross-cluster querying.See Elasticsearch remote clusters and cross-cluster query api. | | --es.send-get-body-as | nan | | HTTP verb for requests that contain a body [GET, POST]. | HTTP verb for requests that contain a body [GET, POST]. | | --es.server-urls | http://127.0.0.1:9200 | | The comma-separated list of Elasticsearch servers, must be full url" }, { "data": "http://localhost:9200 | The comma-separated list of Elasticsearch servers, must be full url i.e. 
http://localhost:9200 | | --es.sniffer | false | | The sniffer config for Elasticsearch; client uses sniffing process to find all nodes automatically, disable if not required | The sniffer config for Elasticsearch; client uses sniffing process to find all nodes automatically, disable if not required | | --es.sniffer-tls-enabled | false | | Option to enable TLS when sniffing an Elasticsearch Cluster ; client uses sniffing process to find all nodes automatically, disabled by default | Option to enable TLS when sniffing an Elasticsearch Cluster ; client uses sniffing process to find all nodes automatically, disabled by default | | --es.tags-as-fields.all | false | | (experimental) Store all span and process tags as object fields. If true .tags-as-fields.config-file and .tags-as-fields.include is ignored. Binary tags are always stored as nested objects. | (experimental) Store all span and process tags as object fields. If true .tags-as-fields.config-file and .tags-as-fields.include is ignored. Binary tags are always stored as nested objects. | | --es.tags-as-fields.config-file | nan | | (experimental) Optional path to a file containing tag keys which will be stored as object fields. Each key should be on a separate line. Merged with .tags-as-fields.include | (experimental) Optional path to a file containing tag keys which will be stored as object fields. Each key should be on a separate line. Merged with .tags-as-fields.include | | --es.tags-as-fields.dot-replacement | @ | | (experimental) The character used to replace dots (\".\") in tag keys stored as object fields. | (experimental) The character used to replace dots (\".\") in tag keys stored as object fields. | | --es.tags-as-fields.include | nan | | (experimental) Comma delimited list of tag keys which will be stored as object fields. Merged with the contents of .tags-as-fields.config-file | (experimental) Comma delimited list of tag keys which will be stored as object fields. Merged with the contents of .tags-as-fields.config-file | | --es.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. A Timeout of zero means no timeout | | --es.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --es.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --es.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --es.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --es.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --es.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --es.token-file | nan | | Path to a file containing bearer token. This flag also loads CA if it is specified. | Path to a file containing bearer token. 
This flag also loads CA if it is specified. | |" }, { "data": "| false | | Use read and write aliases for indices. Use this option with Elasticsearch rollover API. It requires an external component to create aliases before startup and then performing its management. Note that es.max-span-age will influence trace search window start times. | Use read and write aliases for indices. Use this option with Elasticsearch rollover API. It requires an external component to create aliases before startup and then performing its management. Note that es.max-span-age will influence trace search window start times. | | --es.use-ilm | false | | (experimental) Option to enable ILM for jaeger span & service indices. Use this option with es.use-aliases. It requires an external component to create aliases before startup and then performing its management. ILM policy must be manually created in ES before startup. Supported only for elasticsearch version 7+. | (experimental) Option to enable ILM for jaeger span & service indices. Use this option with es.use-aliases. It requires an external component to create aliases before startup and then performing its management. ILM policy must be manually created in ES before startup. Supported only for elasticsearch version 7+. | | --es.username | nan | | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. | | --es.version | 0 | | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. | | --help | false | | help for jaeger-ingester | help for jaeger-ingester | | --ingester.deadlockInterval | 0s | | Interval to check for deadlocks. If no messages gets processed in given time, ingester app will exit. Value of 0 disables deadlock check. | Interval to check for deadlocks. If no messages gets processed in given time, ingester app will exit. Value of 0 disables deadlock check. | | --ingester.parallelism | 1000 | | The number of messages to process in parallel | The number of messages to process in parallel | | --kafka.consumer.authentication | none | | Authentication type used to authenticate with kafka cluster. e.g. none, kerberos, tls, plaintext | Authentication type used to authenticate with kafka cluster. e.g. none, kerberos, tls, plaintext | | --kafka.consumer.brokers | 127.0.0.1:9092 | | The comma-separated list of kafka brokers. i.e. '127.0.0.1:9092,0.0.0:1234' | The comma-separated list of kafka brokers. i.e. '127.0.0.1:9092,0.0.0:1234' | | --kafka.consumer.client-id | jaeger-ingester | | The Consumer Client ID that ingester will use | The Consumer Client ID that ingester will use | | --kafka.consumer.encoding | protobuf | | The encoding of spans (\"json\", \"protobuf\", \"zipkin-thrift\") consumed from kafka | The encoding of spans (\"json\", \"protobuf\", \"zipkin-thrift\") consumed from kafka | | --kafka.consumer.fetch-max-message-bytes | 1048576 | | The maximum number of message bytes to fetch from the broker in a single request. So you must be sure this is at least as large as your largest message. | The maximum number of message bytes to fetch from the broker in a single request. So you must be sure this is at least as large as your largest message. 
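To make the Elasticsearch and Kafka consumer flags in this table easier to picture in combination, here is a minimal, hypothetical jaeger-ingester invocation that consumes protobuf spans from Kafka over TLS and writes them to Elasticsearch. The broker addresses, certificate path, and Elasticsearch URL are illustrative placeholders, not values taken from this reference, and only a small subset of the available flags is shown.

```
SPAN_STORAGE_TYPE=elasticsearch ./jaeger-ingester \
  --kafka.consumer.brokers=kafka-1.example.com:9093,kafka-2.example.com:9093 \
  --kafka.consumer.topic=jaeger-spans \
  --kafka.consumer.group-id=jaeger-ingester \
  --kafka.consumer.encoding=protobuf \
  --kafka.consumer.authentication=tls \
  --kafka.consumer.tls.enabled=true \
  --kafka.consumer.tls.ca=/etc/jaeger/certs/kafka-ca.pem \
  --es.server-urls=http://elasticsearch.example.com:9200
```

Any other flag in this table can be appended in the same `--name=value` form, or supplied through `--config-file` in one of the supported formats.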
| | --kafka.consumer.group-id | jaeger-ingester | | The Consumer Group that ingester will be consuming on behalf of | The Consumer Group that ingester will be consuming on behalf of | | --kafka.consumer.kerberos.config-file | /etc/krb5.conf | | Path to Kerberos configuration. i.e /etc/krb5.conf | Path to Kerberos configuration. i.e /etc/krb5.conf | | --kafka.consumer.kerberos.disable-fast-negotiation | false | | Disable FAST negotiation when not supported by KDC's like Active Directory." }, { "data": "https://github.com/jcmturner/gokrb5/blob/master/USAGE.md#active-directory-kdc-and-fast-negotiation. | Disable FAST negotiation when not supported by KDC's like Active Directory. See https://github.com/jcmturner/gokrb5/blob/master/USAGE.md#active-directory-kdc-and-fast-negotiation. | | --kafka.consumer.kerberos.keytab-file | /etc/security/kafka.keytab | | Path to keytab file. i.e /etc/security/kafka.keytab | Path to keytab file. i.e /etc/security/kafka.keytab | | --kafka.consumer.kerberos.password | nan | | The Kerberos password used for authenticate with KDC | The Kerberos password used for authenticate with KDC | | --kafka.consumer.kerberos.realm | nan | | Kerberos realm | Kerberos realm | | --kafka.consumer.kerberos.service-name | kafka | | Kerberos service name | Kerberos service name | | --kafka.consumer.kerberos.use-keytab | false | | Use of keytab instead of password, if this is true, keytab file will be used instead of password | Use of keytab instead of password, if this is true, keytab file will be used instead of password | | --kafka.consumer.kerberos.username | nan | | The Kerberos username used for authenticate with KDC | The Kerberos username used for authenticate with KDC | | --kafka.consumer.plaintext.mechanism | PLAIN | | The plaintext Mechanism for SASL/PLAIN authentication, e.g. 'SCRAM-SHA-256' or 'SCRAM-SHA-512' or 'PLAIN' | The plaintext Mechanism for SASL/PLAIN authentication, e.g. 'SCRAM-SHA-256' or 'SCRAM-SHA-512' or 'PLAIN' | | --kafka.consumer.plaintext.password | nan | | The plaintext Password for SASL/PLAIN authentication | The plaintext Password for SASL/PLAIN authentication | | --kafka.consumer.plaintext.username | nan | | The plaintext Username for SASL/PLAIN authentication | The plaintext Username for SASL/PLAIN authentication | | --kafka.consumer.protocol-version | nan | | Kafka protocol version - must be supported by kafka server | Kafka protocol version - must be supported by kafka server | | --kafka.consumer.rack-id | nan | | Rack identifier for this client. This can be any string value which indicates where this client is located. It corresponds with the broker config `broker.rack` | Rack identifier for this client. This can be any string value which indicates where this client is located. 
It corresponds with the broker config `broker.rack` | | --kafka.consumer.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --kafka.consumer.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --kafka.consumer.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --kafka.consumer.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --kafka.consumer.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --kafka.consumer.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --kafka.consumer.topic | jaeger-spans | | The name of the kafka topic to consume from | The name of the kafka topic to consume from | | --log-level | info | | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap | Minimal allowed log Level. For more levels see" }, { "data": "| | --metrics-backend | prometheus | | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) | | --metrics-http-route | /metrics | | Defines the route of HTTP endpoint for metrics backends that support scraping | Defines the route of HTTP endpoint for metrics backends that support scraping | | --span-storage.type | nan | | (deprecated) please use SPANSTORAGETYPE environment variable. Run this binary with the 'env' command for help. | (deprecated) please use SPANSTORAGETYPE environment variable. Run this binary with the 'env' command for help. | | Flag | Default Value | |:|:| | --admin.http.host-port | :14270 | | The host:port (e.g. 127.0.0.1:14270 or :14270) for the admin server, including health check, /metrics, etc. | The host:port (e.g. 127.0.0.1:14270 or :14270) for the admin server, including health check, /metrics, etc. | | --admin.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --admin.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). 
| | --admin.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --admin.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --admin.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --admin.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --admin.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --config-file | nan | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. | | --downsampling.hashsalt | nan | | Salt used when hashing trace id for downsampling. | Salt used when hashing trace id for downsampling. | | --downsampling.ratio | 1 | | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. | Ratio of spans passed to storage after downsampling (between 0 and 1), e.g ratio = 0.3 means we are keeping 30% of spans and dropping 70% of spans; ratio = 1.0 disables downsampling. | | --grpc-storage-plugin.binary | nan | | (deprecated, will be removed after 2024-03-01) The location of the plugin binary | (deprecated, will be removed after 2024-03-01) The location of the plugin binary | |" }, { "data": "| nan | | (deprecated, will be removed after 2024-03-01) A path pointing to the plugin's configuration file, made available to the plugin with the --config arg | (deprecated, will be removed after 2024-03-01) A path pointing to the plugin's configuration file, made available to the plugin with the --config arg | | --grpc-storage-plugin.log-level | warn | | Set the log level of the plugin's logger | Set the log level of the plugin's logger | | --grpc-storage.connection-timeout | 5s | | The remote storage gRPC server connection timeout | The remote storage gRPC server connection timeout | | --grpc-storage.server | nan | | The remote storage gRPC server address as host:port | The remote storage gRPC server address as host:port | | --grpc-storage.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --grpc-storage.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --grpc-storage.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --grpc-storage.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process 
to the remote server(s) | | --grpc-storage.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --grpc-storage.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --help | false | | help for jaeger-ingester | help for jaeger-ingester | | --ingester.deadlockInterval | 0s | | Interval to check for deadlocks. If no messages gets processed in given time, ingester app will exit. Value of 0 disables deadlock check. | Interval to check for deadlocks. If no messages gets processed in given time, ingester app will exit. Value of 0 disables deadlock check. | | --ingester.parallelism | 1000 | | The number of messages to process in parallel | The number of messages to process in parallel | | --kafka.consumer.authentication | none | | Authentication type used to authenticate with kafka cluster. e.g. none, kerberos, tls, plaintext | Authentication type used to authenticate with kafka cluster. e.g. none, kerberos, tls, plaintext | | --kafka.consumer.brokers | 127.0.0.1:9092 | | The comma-separated list of kafka brokers. i.e. '127.0.0.1:9092,0.0.0:1234' | The comma-separated list of kafka brokers. i.e. '127.0.0.1:9092,0.0.0:1234' | | --kafka.consumer.client-id | jaeger-ingester | | The Consumer Client ID that ingester will use | The Consumer Client ID that ingester will use | | --kafka.consumer.encoding | protobuf | | The encoding of spans (\"json\", \"protobuf\", \"zipkin-thrift\") consumed from kafka | The encoding of spans (\"json\", \"protobuf\", \"zipkin-thrift\") consumed from kafka | | --kafka.consumer.fetch-max-message-bytes | 1048576 | | The maximum number of message bytes to fetch from the broker in a single request. So you must be sure this is at least as large as your largest message. | The maximum number of message bytes to fetch from the broker in a single request. So you must be sure this is at least as large as your largest message. | | --kafka.consumer.group-id | jaeger-ingester | | The Consumer Group that ingester will be consuming on behalf of | The Consumer Group that ingester will be consuming on behalf of | | --kafka.consumer.kerberos.config-file |" }, { "data": "| | Path to Kerberos configuration. i.e /etc/krb5.conf | Path to Kerberos configuration. i.e /etc/krb5.conf | | --kafka.consumer.kerberos.disable-fast-negotiation | false | | Disable FAST negotiation when not supported by KDC's like Active Directory. See https://github.com/jcmturner/gokrb5/blob/master/USAGE.md#active-directory-kdc-and-fast-negotiation. | Disable FAST negotiation when not supported by KDC's like Active Directory. See https://github.com/jcmturner/gokrb5/blob/master/USAGE.md#active-directory-kdc-and-fast-negotiation. | | --kafka.consumer.kerberos.keytab-file | /etc/security/kafka.keytab | | Path to keytab file. i.e /etc/security/kafka.keytab | Path to keytab file. 
i.e /etc/security/kafka.keytab | | --kafka.consumer.kerberos.password | nan | | The Kerberos password used for authenticate with KDC | The Kerberos password used for authenticate with KDC | | --kafka.consumer.kerberos.realm | nan | | Kerberos realm | Kerberos realm | | --kafka.consumer.kerberos.service-name | kafka | | Kerberos service name | Kerberos service name | | --kafka.consumer.kerberos.use-keytab | false | | Use of keytab instead of password, if this is true, keytab file will be used instead of password | Use of keytab instead of password, if this is true, keytab file will be used instead of password | | --kafka.consumer.kerberos.username | nan | | The Kerberos username used for authenticate with KDC | The Kerberos username used for authenticate with KDC | | --kafka.consumer.plaintext.mechanism | PLAIN | | The plaintext Mechanism for SASL/PLAIN authentication, e.g. 'SCRAM-SHA-256' or 'SCRAM-SHA-512' or 'PLAIN' | The plaintext Mechanism for SASL/PLAIN authentication, e.g. 'SCRAM-SHA-256' or 'SCRAM-SHA-512' or 'PLAIN' | | --kafka.consumer.plaintext.password | nan | | The plaintext Password for SASL/PLAIN authentication | The plaintext Password for SASL/PLAIN authentication | | --kafka.consumer.plaintext.username | nan | | The plaintext Username for SASL/PLAIN authentication | The plaintext Username for SASL/PLAIN authentication | | --kafka.consumer.protocol-version | nan | | Kafka protocol version - must be supported by kafka server | Kafka protocol version - must be supported by kafka server | | --kafka.consumer.rack-id | nan | | Rack identifier for this client. This can be any string value which indicates where this client is located. It corresponds with the broker config `broker.rack` | Rack identifier for this client. This can be any string value which indicates where this client is located. It corresponds with the broker config `broker.rack` | | --kafka.consumer.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --kafka.consumer.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --kafka.consumer.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --kafka.consumer.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --kafka.consumer.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --kafka.consumer.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --kafka.consumer.topic | jaeger-spans | | The name of the kafka topic to consume from | The name of the kafka topic to consume from | | --log-level | info | | Minimal allowed log" }, { "data": "For more levels see https://github.com/uber-go/zap | Minimal allowed log Level. 
For more levels see https://github.com/uber-go/zap | | --metrics-backend | prometheus | | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) | | --metrics-http-route | /metrics | | Defines the route of HTTP endpoint for metrics backends that support scraping | Defines the route of HTTP endpoint for metrics backends that support scraping | | --span-storage.type | nan | | (deprecated) please use SPAN_STORAGE_TYPE environment variable. Run this binary with the 'env' command for help. | (deprecated) please use SPAN_STORAGE_TYPE environment variable. Run this binary with the 'env' command for help. |

Jaeger query service provides a Web UI and an API for accessing trace data. jaeger-query can be used with the storage backends whose flags are documented below, and (experimentally) with the supported metrics storage types.

| Flag | Default Value | |:-|:-| | --admin.http.host-port | :16687 | | The host:port (e.g. 127.0.0.1:16687 or :16687) for the admin server, including health check, /metrics, etc. | The host:port (e.g. 127.0.0.1:16687 or :16687) for the admin server, including health check, /metrics, etc. | | --admin.http.tls.cert | nan | | Path to a TLS Certificate file, used to identify this server to clients | Path to a TLS Certificate file, used to identify this server to clients | | --admin.http.tls.cipher-suites | nan | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). | | --admin.http.tls.client-ca | nan | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) | | --admin.http.tls.enabled | false | | Enable TLS on the server | Enable TLS on the server | | --admin.http.tls.key | nan | | Path to a TLS Private Key file, used to identify this server to clients | Path to a TLS Private Key file, used to identify this server to clients | | --admin.http.tls.max-version | nan | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --admin.http.tls.min-version | nan | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) | | --cassandra-archive.connect-timeout | 0s | | Timeout used for connections to Cassandra Servers | Timeout used for connections to Cassandra Servers | | --cassandra-archive.connections-per-host | 0 | | The number of Cassandra connections from a single backend instance | The number of Cassandra connections from a single backend instance | | --cassandra-archive.consistency | nan | | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | The Cassandra consistency level, e.g.
ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | | --cassandra-archive.disable-compression | false | | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to" }, { "data": "This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | | --cassandra-archive.enabled | false | | Enable extra storage | Enable extra storage | | --cassandra-archive.keyspace | nan | | The Cassandra keyspace for Jaeger data | The Cassandra keyspace for Jaeger data | | --cassandra-archive.local-dc | nan | | The name of the Cassandra local data center for DC Aware host selection | The name of the Cassandra local data center for DC Aware host selection | | --cassandra-archive.max-retry-attempts | 0 | | The number of attempts when reading from Cassandra | The number of attempts when reading from Cassandra | | --cassandra-archive.password | nan | | Password for password authentication for Cassandra | Password for password authentication for Cassandra | | --cassandra-archive.port | 0 | | The port for cassandra | The port for cassandra | | --cassandra-archive.proto-version | 0 | | The Cassandra protocol version | The Cassandra protocol version | | --cassandra-archive.reconnect-interval | 0s | | Reconnect interval to retry connecting to downed hosts | Reconnect interval to retry connecting to downed hosts | | --cassandra-archive.servers | nan | | The comma-separated list of Cassandra servers | The comma-separated list of Cassandra servers | | --cassandra-archive.socket-keep-alive | 0s | | Cassandra's keepalive period to use, enabled if > 0 | Cassandra's keepalive period to use, enabled if > 0 | | --cassandra-archive.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. 
A Timeout of zero means no timeout | | --cassandra-archive.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | | --cassandra-archive.tls.cert | nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --cassandra-archive.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --cassandra-archive.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --cassandra-archive.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --cassandra-archive.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --cassandra-archive.username | nan | | Username for password authentication for Cassandra | Username for password authentication for Cassandra | | --cassandra.connect-timeout | 0s | | Timeout used for connections to Cassandra Servers | Timeout used for connections to Cassandra Servers | | --cassandra.connections-per-host | 2 | | The number of Cassandra connections from a single backend instance | The number of Cassandra connections from a single backend instance | | --cassandra.consistency | nan | | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCALQUORUM, EACHQUORUM, LOCALONE (default LOCALONE) | |" }, { "data": "| false | | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters(like Azure Cosmos Db with Cassandra API) that do not support SnappyCompression | | --cassandra.index.logs | true | | Controls log field indexing. Set to false to disable. | Controls log field indexing. Set to false to disable. | | --cassandra.index.process-tags | true | | Controls process tag indexing. Set to false to disable. | Controls process tag indexing. Set to false to disable. | | --cassandra.index.tag-blacklist | nan | | The comma-separated list of span tags to blacklist from being indexed. All other tags will be indexed. Mutually exclusive with the whitelist option. | The comma-separated list of span tags to blacklist from being indexed. All other tags will be indexed. Mutually exclusive with the whitelist option. | | --cassandra.index.tag-whitelist | nan | | The comma-separated list of span tags to whitelist for being indexed. All other tags will not be indexed. Mutually exclusive with the blacklist option. | The comma-separated list of span tags to whitelist for being indexed. 
All other tags will not be indexed. Mutually exclusive with the blacklist option. | | --cassandra.index.tags | true | | Controls tag indexing. Set to false to disable. | Controls tag indexing. Set to false to disable. | | --cassandra.keyspace | jaegerv1test | | The Cassandra keyspace for Jaeger data | The Cassandra keyspace for Jaeger data | | --cassandra.local-dc | nan | | The name of the Cassandra local data center for DC Aware host selection | The name of the Cassandra local data center for DC Aware host selection | | --cassandra.max-retry-attempts | 3 | | The number of attempts when reading from Cassandra | The number of attempts when reading from Cassandra | | --cassandra.password | nan | | Password for password authentication for Cassandra | Password for password authentication for Cassandra | | --cassandra.port | 0 | | The port for cassandra | The port for cassandra | | --cassandra.proto-version | 4 | | The Cassandra protocol version | The Cassandra protocol version | | --cassandra.reconnect-interval | 1m0s | | Reconnect interval to retry connecting to downed hosts | Reconnect interval to retry connecting to downed hosts | | --cassandra.servers | 127.0.0.1 | | The comma-separated list of Cassandra servers | The comma-separated list of Cassandra servers | | --cassandra.socket-keep-alive | 0s | | Cassandra's keepalive period to use, enabled if > 0 | Cassandra's keepalive period to use, enabled if > 0 | | --cassandra.span-store-write-cache-ttl | 12h0m0s | | The duration to wait before rewriting an existing service or operation name | The duration to wait before rewriting an existing service or operation name | | --cassandra.timeout | 0s | | Timeout used for queries. A Timeout of zero means no timeout | Timeout used for queries. A Timeout of zero means no timeout | | --cassandra.tls.ca | nan | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) | |" }, { "data": "| nan | | Path to a TLS Certificate file, used to identify this process to the remote server(s) | Path to a TLS Certificate file, used to identify this process to the remote server(s) | | --cassandra.tls.enabled | false | | Enable TLS when talking to the remote server(s) | Enable TLS when talking to the remote server(s) | | --cassandra.tls.key | nan | | Path to a TLS Private Key file, used to identify this process to the remote server(s) | Path to a TLS Private Key file, used to identify this process to the remote server(s) | | --cassandra.tls.server-name | nan | | Override the TLS server name we expect in the certificate of the remote server(s) | Override the TLS server name we expect in the certificate of the remote server(s) | | --cassandra.tls.skip-host-verify | false | | (insecure) Skip server's certificate chain and host name verification | (insecure) Skip server's certificate chain and host name verification | | --cassandra.username | nan | | Username for password authentication for Cassandra | Username for password authentication for Cassandra | | --config-file | nan | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. 
| Flag | Default Value | Description |
|:--|:--|:--|
| --help | false | help for jaeger-query |
| --log-level | info | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap |
| --metrics-backend | prometheus | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) |
| --metrics-http-route | /metrics | Defines the route of HTTP endpoint for metrics backends that support scraping |
| --multi-tenancy.enabled | false | Enable tenancy header when receiving or querying |
| --multi-tenancy.header | x-tenant | HTTP header carrying tenant |
| --multi-tenancy.tenants | | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) |
| --query.additional-headers | [] | Additional HTTP response headers. Can be specified multiple times. Format: "Key: Value" |
| --query.base-path | / | The base path for all HTTP routes, e.g. /jaeger; useful when running behind a reverse proxy. See https://github.com/jaegertracing/jaeger/blob/main/examples/reverse-proxy/README.md |
| --query.bearer-token-propagation | false | Allow propagation of bearer token to be used by storage plugins |
| --query.enable-tracing | false | Enables emitting jaeger-query traces |
| --query.grpc-server.host-port | :16685 | The host:port (e.g. 127.0.0.1:14250 or :14250) of the query's gRPC server |
| --query.grpc.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --query.grpc.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --query.grpc.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --query.grpc.tls.enabled | false | Enable TLS on the server |
| --query.grpc.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --query.grpc.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.grpc.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.http-server.host-port | :16686 | The host:port (e.g. 127.0.0.1:14268 or :14268) of the query's HTTP server |
| --query.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --query.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --query.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --query.http.tls.enabled | false | Enable TLS on the server |
| --query.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --query.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.log-static-assets-access | false | Log when static assets are accessed (for debugging) |
| --query.max-clock-skew-adjustment | 0s | The maximum delta by which span timestamps may be adjusted in the UI due to clock skew; set to 0s to disable clock skew adjustments |
| --query.static-files | | The directory path override for the static assets for the UI |
| --query.ui-config | | The path to the UI configuration file in JSON format |
| --span-storage.type | | (deprecated) please use SPAN_STORAGE_TYPE environment variable. Run this binary with the 'env' command for help. |
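As a hedged illustration (not taken from the reference itself), the sketch below combines a few of the server flags listed above to run jaeger-query behind a reverse proxy with TLS enabled on both the HTTP and gRPC endpoints. The certificate and key paths are hypothetical placeholders, not defaults.

```
# Sketch only: serve the query API/UI under /jaeger and terminate TLS on both servers.
# /etc/jaeger/tls/* paths are placeholders chosen for this example.
jaeger-query \
  --query.base-path=/jaeger \
  --query.http-server.host-port=:16686 \
  --query.http.tls.enabled=true \
  --query.http.tls.cert=/etc/jaeger/tls/query.crt \
  --query.http.tls.key=/etc/jaeger/tls/query.key \
  --query.grpc-server.host-port=:16685 \
  --query.grpc.tls.enabled=true \
  --query.grpc.tls.cert=/etc/jaeger/tls/query.crt \
  --query.grpc.tls.key=/etc/jaeger/tls/query.key
```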
| Flag | Default Value | Description |
|:--|:--|:--|
| --admin.http.host-port | :16687 | The host:port (e.g. 127.0.0.1:16687 or :16687) for the admin server, including health check, /metrics, etc. |
| --admin.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --admin.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --admin.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --admin.http.tls.enabled | false | Enable TLS on the server |
| --admin.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --admin.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --admin.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --config-file | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. |
| --es-archive.adaptive-sampling.lookback | 72h0m0s | How far back to look for the latest adaptive sampling probabilities |
| --es-archive.bulk.actions | 1000 | The number of requests that can be enqueued before the bulk processor decides to commit |
| --es-archive.bulk.flush-interval | 200ms | A time.Duration after which bulk requests are committed, regardless of other thresholds. Set to zero to disable. By default, this is disabled. |
| --es-archive.bulk.size | 5000000 | The number of bytes that the bulk requests can take up before the bulk processor decides to commit |
| --es-archive.bulk.workers | 1 | The number of workers that are able to receive bulk requests and eventually commit them to Elasticsearch |
| --es-archive.create-index-templates | true | Create index templates at application startup. Set to false when templates are installed manually. |
| --es-archive.enabled | false | Enable extra storage |
| --es-archive.index-date-separator | - | Optional date separator of Jaeger indices. For example "." creates "jaeger-span-2020.11.20". |
| --es-archive.index-prefix | | Optional prefix of Jaeger indices. For example "production" creates "production-jaeger-". |
| --es-archive.index-rollover-frequency-adaptive-sampling | day | Rotates jaeger-sampling indices over the given period. For example "day" creates "jaeger-sampling-yyyy-MM-dd" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover |
| --es-archive.index-rollover-frequency-services | day | Rotates jaeger-service indices over the given period. For example "day" creates "jaeger-service-yyyy-MM-dd" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover |
| --es-archive.index-rollover-frequency-spans | day | Rotates jaeger-span indices over the given period. For example "day" creates "jaeger-span-yyyy-MM-dd" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover |
| --es-archive.log-level | error | The Elasticsearch client log-level. Valid levels: [debug, info, error] |
| --es-archive.max-doc-count | 10000 | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. |
| --es-archive.num-replicas | 1 | The number of replicas per index in Elasticsearch |
| --es-archive.num-shards | 5 | The number of shards per index in Elasticsearch |
| --es-archive.password | | The password required by Elasticsearch |
| --es-archive.password-file | | Path to a file containing password. This file is watched for changes. |
| --es-archive.prioirity-dependencies-template | 0 | Priority of jaeger-dependencies index template (ESv8 only) |
| --es-archive.prioirity-service-template | 0 | Priority of jaeger-service index template (ESv8 only) |
| --es-archive.prioirity-span-template | 0 | Priority of jaeger-span index template (ESv8 only) |
| --es-archive.remote-read-clusters | | Comma-separated list of Elasticsearch remote cluster names for cross-cluster querying. See Elasticsearch remote clusters and cross-cluster query api. |
| --es-archive.send-get-body-as | | HTTP verb for requests that contain a body [GET, POST]. |
| --es-archive.server-urls | http://127.0.0.1:9200 | The comma-separated list of Elasticsearch servers, must be full url i.e. http://localhost:9200 |
| --es-archive.sniffer | false | The sniffer config for Elasticsearch; client uses sniffing process to find all nodes automatically, disable if not required |
| --es-archive.sniffer-tls-enabled | false | Option to enable TLS when sniffing an Elasticsearch Cluster; client uses sniffing process to find all nodes automatically, disabled by default |
| --es-archive.tags-as-fields.all | false | (experimental) Store all span and process tags as object fields. If true .tags-as-fields.config-file and .tags-as-fields.include is ignored. Binary tags are always stored as nested objects. |
| --es-archive.tags-as-fields.config-file | | (experimental) Optional path to a file containing tag keys which will be stored as object fields. Each key should be on a separate line. Merged with .tags-as-fields.include |
| --es-archive.tags-as-fields.dot-replacement | @ | (experimental) The character used to replace dots (".") in tag keys stored as object fields. |
| --es-archive.tags-as-fields.include | | (experimental) Comma delimited list of tag keys which will be stored as object fields. Merged with the contents of .tags-as-fields.config-file |
| --es-archive.timeout | 0s | Timeout used for queries. A Timeout of zero means no timeout |
| --es-archive.tls.ca | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --es-archive.tls.cert | | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --es-archive.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --es-archive.tls.key | | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --es-archive.tls.server-name | | Override the TLS server name we expect in the certificate of the remote server(s) |
| --es-archive.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --es-archive.token-file | | Path to a file containing bearer token. This flag also loads CA if it is specified. |
| --es-archive.use-aliases | false | Use read and write aliases for indices. Use this option with Elasticsearch rollover API. It requires an external component to create aliases before startup and then performing its management. Note that es.max-span-age will influence trace search window start times. |
| --es-archive.use-ilm | false | (experimental) Option to enable ILM for jaeger span & service indices. Use this option with es-archive.use-aliases. It requires an external component to create aliases before startup and then performing its management. ILM policy must be manually created in ES before startup. Supported only for elasticsearch version 7+. |
| --es-archive.username | | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. |
| --es-archive.version | 0 | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. |
| --es.adaptive-sampling.lookback | 72h0m0s | How far back to look for the latest adaptive sampling probabilities |
| --es.bulk.actions | 1000 | The number of requests that can be enqueued before the bulk processor decides to commit |
| --es.bulk.flush-interval | 200ms | A time.Duration after which bulk requests are committed, regardless of other thresholds. Set to zero to disable. By default, this is disabled. |
| --es.bulk.size | 5000000 | The number of bytes that the bulk requests can take up before the bulk processor decides to commit |
| --es.bulk.workers | 1 | The number of workers that are able to receive bulk requests and eventually commit them to Elasticsearch |
| --es.create-index-templates | true | Create index templates at application startup. Set to false when templates are installed manually. |
| --es.index-date-separator | - | Optional date separator of Jaeger indices. For example "." creates "jaeger-span-2020.11.20". |
| --es.index-prefix | | Optional prefix of Jaeger indices. For example "production" creates "production-jaeger-". |
| --es.index-rollover-frequency-adaptive-sampling | day | Rotates jaeger-sampling indices over the given period. For example "day" creates "jaeger-sampling-yyyy-MM-dd" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover |
| --es.index-rollover-frequency-services | day | Rotates jaeger-service indices over the given period. For example "day" creates "jaeger-service-yyyy-MM-dd" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover |
| --es.index-rollover-frequency-spans | day | Rotates jaeger-span indices over the given period. For example "day" creates "jaeger-span-yyyy-MM-dd" every day after UTC 12AM. Valid options: [hour, day]. This does not delete old indices. For details on complete index management solutions supported by Jaeger, refer to: https://www.jaegertracing.io/docs/deployment/#elasticsearch-rollover |
| --es.log-level | error | The Elasticsearch client log-level. Valid levels: [debug, info, error] |
| --es.max-doc-count | 10000 | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. |
| --es.max-span-age | 72h0m0s | The maximum lookback for spans in Elasticsearch |
| --es.num-replicas | 1 | The number of replicas per index in Elasticsearch |
| --es.num-shards | 5 | The number of shards per index in Elasticsearch |
| --es.password | | The password required by Elasticsearch |
| --es.password-file | | Path to a file containing password. This file is watched for changes. |
| --es.prioirity-dependencies-template | 0 | Priority of jaeger-dependencies index template (ESv8 only) |
| --es.prioirity-service-template | 0 | Priority of jaeger-service index template (ESv8 only) |
| --es.prioirity-span-template | 0 | Priority of jaeger-span index template (ESv8 only) |
| --es.remote-read-clusters | | Comma-separated list of Elasticsearch remote cluster names for cross-cluster querying. See Elasticsearch remote clusters and cross-cluster query api. |
| --es.send-get-body-as | | HTTP verb for requests that contain a body [GET, POST]. |
| --es.server-urls | http://127.0.0.1:9200 | The comma-separated list of Elasticsearch servers, must be full url i.e. http://localhost:9200 |
| --es.sniffer | false | The sniffer config for Elasticsearch; client uses sniffing process to find all nodes automatically, disable if not required |
| --es.sniffer-tls-enabled | false | Option to enable TLS when sniffing an Elasticsearch Cluster; client uses sniffing process to find all nodes automatically, disabled by default |
| --es.tags-as-fields.all | false | (experimental) Store all span and process tags as object fields. If true .tags-as-fields.config-file and .tags-as-fields.include is ignored. Binary tags are always stored as nested objects. |
| --es.tags-as-fields.config-file | | (experimental) Optional path to a file containing tag keys which will be stored as object fields. Each key should be on a separate line. Merged with .tags-as-fields.include |
| --es.tags-as-fields.dot-replacement | @ | (experimental) The character used to replace dots (".") in tag keys stored as object fields. |
| --es.tags-as-fields.include | | (experimental) Comma delimited list of tag keys which will be stored as object fields. Merged with the contents of .tags-as-fields.config-file |
| --es.timeout | 0s | Timeout used for queries. A Timeout of zero means no timeout |
| --es.tls.ca | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --es.tls.cert | | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --es.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --es.tls.key | | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --es.tls.server-name | | Override the TLS server name we expect in the certificate of the remote server(s) |
| --es.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --es.token-file | | Path to a file containing bearer token. This flag also loads CA if it is specified. |
| --es.use-aliases | false | Use read and write aliases for indices. Use this option with Elasticsearch rollover API. It requires an external component to create aliases before startup and then performing its management. Note that es.max-span-age will influence trace search window start times. |
| --es.use-ilm | false | (experimental) Option to enable ILM for jaeger span & service indices. Use this option with es.use-aliases. It requires an external component to create aliases before startup and then performing its management. ILM policy must be manually created in ES before startup. Supported only for elasticsearch version 7+. |
| --es.username | | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. |
| --es.version | 0 | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. |
| --help | false | help for jaeger-query |
| --log-level | info | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap |
| --metrics-backend | prometheus | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) |
| --metrics-http-route | /metrics | Defines the route of HTTP endpoint for metrics backends that support scraping |
| --multi-tenancy.enabled | false | Enable tenancy header when receiving or querying |
| --multi-tenancy.header | x-tenant | HTTP header carrying tenant |
| --multi-tenancy.tenants | | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) |
| --query.additional-headers | [] | Additional HTTP response headers. Can be specified multiple times. Format: "Key: Value" |
| --query.base-path | / | The base path for all HTTP routes, e.g. /jaeger; useful when running behind a reverse proxy. See https://github.com/jaegertracing/jaeger/blob/main/examples/reverse-proxy/README.md |
| --query.bearer-token-propagation | false | Allow propagation of bearer token to be used by storage plugins |
| --query.enable-tracing | false | Enables emitting jaeger-query traces |
| --query.grpc-server.host-port | :16685 | The host:port (e.g. 127.0.0.1:14250 or :14250) of the query's gRPC server |
| --query.grpc.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --query.grpc.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --query.grpc.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --query.grpc.tls.enabled | false | Enable TLS on the server |
| --query.grpc.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --query.grpc.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.grpc.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.http-server.host-port | :16686 | The host:port (e.g. 127.0.0.1:14268 or :14268) of the query's HTTP server |
| --query.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --query.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --query.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --query.http.tls.enabled | false | Enable TLS on the server |
| --query.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --query.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.log-static-assets-access | false | Log when static assets are accessed (for debugging) |
| --query.max-clock-skew-adjustment | 0s | The maximum delta by which span timestamps may be adjusted in the UI due to clock skew; set to 0s to disable clock skew adjustments |
| --query.static-files | | The directory path override for the static assets for the UI |
| --query.ui-config | | The path to the UI configuration file in JSON format |
| --span-storage.type | | (deprecated) please use SPAN_STORAGE_TYPE environment variable. Run this binary with the 'env' command for help. |
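As a hedged sketch of how the Elasticsearch flags above might be combined (not an official example), the following runs jaeger-query against a TLS-protected Elasticsearch cluster with the archive storage enabled. The host names, CA path, credentials file, and index prefix are hypothetical placeholders.

```
# Sketch only: Elasticsearch as the span storage, with a separate archive cluster.
# example.internal hosts and /etc/jaeger/* paths are placeholders.
export SPAN_STORAGE_TYPE=elasticsearch
jaeger-query \
  --es.server-urls=https://es.example.internal:9200 \
  --es.tls.enabled=true \
  --es.tls.ca=/etc/jaeger/es-ca.pem \
  --es.username=jaeger \
  --es.password-file=/etc/jaeger/es-password \
  --es.index-prefix=production \
  --es.max-span-age=72h0m0s \
  --es-archive.enabled=true \
  --es-archive.server-urls=https://es-archive.example.internal:9200
```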
| Flag | Default Value | Description |
|:--|:--|:--|
| --admin.http.host-port | :16687 | The host:port (e.g. 127.0.0.1:16687 or :16687) for the admin server, including health check, /metrics, etc. |
| --admin.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --admin.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --admin.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --admin.http.tls.enabled | false | Enable TLS on the server |
| --admin.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --admin.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --admin.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --config-file | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. |
| --grpc-storage-plugin.binary | | (deprecated, will be removed after 2024-03-01) The location of the plugin binary |
| --grpc-storage-plugin.configuration-file | | (deprecated, will be removed after 2024-03-01) A path pointing to the plugin's configuration file, made available to the plugin with the --config arg |
| --grpc-storage-plugin.log-level | warn | Set the log level of the plugin's logger |
| --grpc-storage.connection-timeout | 5s | The remote storage gRPC server connection timeout |
| --grpc-storage.server | | The remote storage gRPC server address as host:port |
| --grpc-storage.tls.ca | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --grpc-storage.tls.cert | | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --grpc-storage.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --grpc-storage.tls.key | | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --grpc-storage.tls.server-name | | Override the TLS server name we expect in the certificate of the remote server(s) |
| --grpc-storage.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --help | false | help for jaeger-query |
| --log-level | info | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap |
| --metrics-backend | prometheus | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) |
| --metrics-http-route | /metrics | Defines the route of HTTP endpoint for metrics backends that support scraping |
| --multi-tenancy.enabled | false | Enable tenancy header when receiving or querying |
| --multi-tenancy.header | x-tenant | HTTP header carrying tenant |
| --multi-tenancy.tenants | | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) |
| --query.additional-headers | [] | Additional HTTP response headers. Can be specified multiple times. Format: "Key: Value" |
| --query.base-path | / | The base path for all HTTP routes, e.g. /jaeger; useful when running behind a reverse proxy. See https://github.com/jaegertracing/jaeger/blob/main/examples/reverse-proxy/README.md |
| --query.bearer-token-propagation | false | Allow propagation of bearer token to be used by storage plugins |
| --query.enable-tracing | false | Enables emitting jaeger-query traces |
| --query.grpc-server.host-port | :16685 | The host:port (e.g. 127.0.0.1:14250 or :14250) of the query's gRPC server |
| --query.grpc.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --query.grpc.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --query.grpc.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --query.grpc.tls.enabled | false | Enable TLS on the server |
| --query.grpc.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --query.grpc.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.grpc.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.http-server.host-port | :16686 | The host:port (e.g. 127.0.0.1:14268 or :14268) of the query's HTTP server |
| --query.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --query.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --query.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --query.http.tls.enabled | false | Enable TLS on the server |
| --query.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --query.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.log-static-assets-access | false | Log when static assets are accessed (for debugging) |
| --query.max-clock-skew-adjustment | 0s | The maximum delta by which span timestamps may be adjusted in the UI due to clock skew; set to 0s to disable clock skew adjustments |
| --query.static-files | | The directory path override for the static assets for the UI |
| --query.ui-config | | The path to the UI configuration file in JSON format |
| --span-storage.type | | (deprecated) please use SPAN_STORAGE_TYPE environment variable. Run this binary with the 'env' command for help. |
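The table above includes the --grpc-storage.* flags for querying a remote storage server over gRPC. The following is a hedged sketch only, assuming the storage type value grpc; the server address, port, and CA path are hypothetical placeholders.

```
# Sketch only: jaeger-query reading from a remote gRPC storage backend over TLS.
# storage.example.internal:17271 and the CA path are placeholders for this example.
export SPAN_STORAGE_TYPE=grpc
jaeger-query \
  --grpc-storage.server=storage.example.internal:17271 \
  --grpc-storage.connection-timeout=5s \
  --grpc-storage.tls.enabled=true \
  --grpc-storage.tls.ca=/etc/jaeger/storage-ca.pem
```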
| Flag | Default Value | Description |
|:--|:--|:--|
| --admin.http.host-port | :16687 | The host:port (e.g. 127.0.0.1:16687 or :16687) for the admin server, including health check, /metrics, etc. |
| --admin.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --admin.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --admin.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --admin.http.tls.enabled | false | Enable TLS on the server |
| --admin.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --admin.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --admin.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --cassandra-archive.connect-timeout | 0s | Timeout used for connections to Cassandra Servers |
| --cassandra-archive.connections-per-host | 0 | The number of Cassandra connections from a single backend instance |
| --cassandra-archive.consistency | | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, LOCAL_ONE (default LOCAL_ONE) |
| --cassandra-archive.disable-compression | false | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters (like Azure Cosmos DB with Cassandra API) that do not support Snappy Compression |
| --cassandra-archive.enabled | false | Enable extra storage |
| --cassandra-archive.keyspace | | The Cassandra keyspace for Jaeger data |
| --cassandra-archive.local-dc | | The name of the Cassandra local data center for DC Aware host selection |
| --cassandra-archive.max-retry-attempts | 0 | The number of attempts when reading from Cassandra |
| --cassandra-archive.password | | Password for password authentication for Cassandra |
| --cassandra-archive.port | 0 | The port for cassandra |
| --cassandra-archive.proto-version | 0 | The Cassandra protocol version |
| --cassandra-archive.reconnect-interval | 0s | Reconnect interval to retry connecting to downed hosts |
| --cassandra-archive.servers | | The comma-separated list of Cassandra servers |
| --cassandra-archive.socket-keep-alive | 0s | Cassandra's keepalive period to use, enabled if > 0 |
| --cassandra-archive.timeout | 0s | Timeout used for queries. A Timeout of zero means no timeout |
| --cassandra-archive.tls.ca | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --cassandra-archive.tls.cert | | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --cassandra-archive.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --cassandra-archive.tls.key | | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --cassandra-archive.tls.server-name | | Override the TLS server name we expect in the certificate of the remote server(s) |
| --cassandra-archive.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --cassandra-archive.username | | Username for password authentication for Cassandra |
| --cassandra.connect-timeout | 0s | Timeout used for connections to Cassandra Servers |
| --cassandra.connections-per-host | 2 | The number of Cassandra connections from a single backend instance |
| --cassandra.consistency | | The Cassandra consistency level, e.g. ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, LOCAL_ONE (default LOCAL_ONE) |
| --cassandra.disable-compression | false | Disables the use of the default Snappy Compression while connecting to the Cassandra Cluster if set to true. This is useful for connecting to Cassandra Clusters (like Azure Cosmos DB with Cassandra API) that do not support Snappy Compression |
| --cassandra.index.logs | true | Controls log field indexing. Set to false to disable. |
| --cassandra.index.process-tags | true | Controls process tag indexing. Set to false to disable. |
| --cassandra.index.tag-blacklist | | The comma-separated list of span tags to blacklist from being indexed. All other tags will be indexed. Mutually exclusive with the whitelist option. |
| --cassandra.index.tag-whitelist | | The comma-separated list of span tags to whitelist for being indexed. All other tags will not be indexed. Mutually exclusive with the blacklist option. |
| --cassandra.index.tags | true | Controls tag indexing. Set to false to disable. |
| --cassandra.keyspace | jaeger_v1_test | The Cassandra keyspace for Jaeger data |
| --cassandra.local-dc | | The name of the Cassandra local data center for DC Aware host selection |
| --cassandra.max-retry-attempts | 3 | The number of attempts when reading from Cassandra |
| --cassandra.password | | Password for password authentication for Cassandra |
| --cassandra.port | 0 | The port for cassandra |
| --cassandra.proto-version | 4 | The Cassandra protocol version |
| --cassandra.reconnect-interval | 1m0s | Reconnect interval to retry connecting to downed hosts |
| --cassandra.servers | 127.0.0.1 | The comma-separated list of Cassandra servers |
| --cassandra.socket-keep-alive | 0s | Cassandra's keepalive period to use, enabled if > 0 |
| --cassandra.span-store-write-cache-ttl | 12h0m0s | The duration to wait before rewriting an existing service or operation name |
| --cassandra.timeout | 0s | Timeout used for queries. A Timeout of zero means no timeout |
| --cassandra.tls.ca | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --cassandra.tls.cert | | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --cassandra.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --cassandra.tls.key | | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --cassandra.tls.server-name | | Override the TLS server name we expect in the certificate of the remote server(s) |
| --cassandra.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --cassandra.username | | Username for password authentication for Cassandra |
| --config-file | | Configuration file in JSON, TOML, YAML, HCL, or Java properties formats (default none). See spf13/viper for precedence. |
| --help | false | help for jaeger-query |
| --log-level | info | Minimal allowed log Level. For more levels see https://github.com/uber-go/zap |
| --metrics-backend | prometheus | Defines which metrics backend to use for metrics reporting: prometheus, none, or expvar (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) |
| --metrics-http-route | /metrics | Defines the route of HTTP endpoint for metrics backends that support scraping |
| --multi-tenancy.enabled | false | Enable tenancy header when receiving or querying |
| --multi-tenancy.header | x-tenant | HTTP header carrying tenant |
| --multi-tenancy.tenants | | comma-separated list of allowed values for --multi-tenancy.header header. (If not supplied, tenants are not restricted) |
| --prometheus.connect-timeout | 30s | The period to wait for a connection to Prometheus when executing queries. |
| --prometheus.query.duration-unit | ms | The units used for the "latency" histogram. It can be either "ms" or "s" and should be consistent with the histogram unit value set in the spanmetrics connector (see: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/connector/spanmetricsconnector#configurations). This also helps jaeger-query determine the metric name when querying for "latency" metrics. |
| --prometheus.query.namespace | | The metric namespace that is prefixed to the metric name. A '.' separator will be added between the namespace and the metric name. |
| --prometheus.query.normalize-calls | false | Whether to normalize the "calls" metric name according to https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/translator/prometheus/README.md. For example: "calls" (not normalized) -> "calls_total" (normalized) |
| --prometheus.query.normalize-duration | false | Whether to normalize the "duration" metric name according to https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/translator/prometheus/README.md. For example: "duration_bucket" (not normalized) -> "duration_milliseconds_bucket" (normalized) |
| --prometheus.query.support-spanmetrics-connector | true | (deprecated, will be removed after 2024-01-01 or in release v1.53.0, whichever is later) Controls whether the metrics queries should match the OpenTelemetry Collector's spanmetrics connector naming (when true) or spanmetrics processor naming (when false). |
| --prometheus.server-url | http://localhost:9090 | The Prometheus server's URL, must include the protocol scheme e.g. http://localhost:9090 |
| --prometheus.tls.ca | | Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore) |
| --prometheus.tls.cert | | Path to a TLS Certificate file, used to identify this process to the remote server(s) |
| --prometheus.tls.enabled | false | Enable TLS when talking to the remote server(s) |
| --prometheus.tls.key | | Path to a TLS Private Key file, used to identify this process to the remote server(s) |
| --prometheus.tls.server-name | | Override the TLS server name we expect in the certificate of the remote server(s) |
| --prometheus.tls.skip-host-verify | false | (insecure) Skip server's certificate chain and host name verification |
| --prometheus.token-file | | The path to a file containing the bearer token which will be included when executing queries against the Prometheus API. |
| --prometheus.token-override-from-context | true | Whether the bearer token should be overridden from context (incoming request) |
| --query.additional-headers | [] | Additional HTTP response headers. Can be specified multiple times. Format: "Key: Value" |
| --query.base-path | / | The base path for all HTTP routes, e.g. /jaeger; useful when running behind a reverse proxy. See https://github.com/jaegertracing/jaeger/blob/main/examples/reverse-proxy/README.md |
| --query.bearer-token-propagation | false | Allow propagation of bearer token to be used by storage plugins |
| --query.enable-tracing | false | Enables emitting jaeger-query traces |
| --query.grpc-server.host-port | :16685 | The host:port (e.g. 127.0.0.1:14250 or :14250) of the query's gRPC server |
| --query.grpc.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --query.grpc.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --query.grpc.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --query.grpc.tls.enabled | false | Enable TLS on the server |
| --query.grpc.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --query.grpc.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.grpc.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.http-server.host-port | :16686 | The host:port (e.g. 127.0.0.1:14268 or :14268) of the query's HTTP server |
| --query.http.tls.cert | | Path to a TLS Certificate file, used to identify this server to clients |
| --query.http.tls.cipher-suites | | Comma-separated list of cipher suites for the server, values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). |
| --query.http.tls.client-ca | | Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted) |
| --query.http.tls.enabled | false | Enable TLS on the server |
| --query.http.tls.key | | Path to a TLS Private Key file, used to identify this server to clients |
| --query.http.tls.max-version | | Maximum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.http.tls.min-version | | Minimum TLS version supported (Possible values: 1.0, 1.1, 1.2, 1.3) |
| --query.log-static-assets-access | false | Log when static assets are accessed (for debugging) |
| --query.max-clock-skew-adjustment | 0s | The maximum delta by which span timestamps may be adjusted in the UI due to clock skew; set to 0s to disable clock skew adjustments |
| --query.static-files | | The directory path override for the static assets for the UI |
| --query.ui-config | | The path to the UI configuration file in JSON format |
| --span-storage.type | | (deprecated) please use SPAN_STORAGE_TYPE environment variable. Run this binary with the 'env' command for help. |
| (deprecated) please use SPANSTORAGETYPE environment variable. Run this binary with the 'env' command for help. | 2024 The" } ]
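To make the flag reference above concrete, here is a hedged sketch of starting jaeger-query against a Cassandra span store with Prometheus-backed metrics. Only the flag and variable names come from the reference; the endpoints, units, and storage choice are illustrative placeholders, not values taken from this page.
```
# Sketch only: adjust endpoints and storage backend to your deployment.
SPAN_STORAGE_TYPE=cassandra jaeger-query \
  --query.http-server.host-port=:16686 \
  --query.grpc-server.host-port=:16685 \
  --query.base-path=/jaeger \
  --prometheus.server-url=http://localhost:9090 \
  --prometheus.query.duration-unit=ms \
  --log-level=info
```
Any of the TLS flags in the table can be appended in the same way once certificate and key files are available.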
{ "category": "Observability and Analysis", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Loggie", "subcategory": "Observability" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for ) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
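Putting several of the qualifiers above together, a single combined query (reusing the repository from the earlier repo: example) might look like this; it restricts by repository and language, excludes test paths, and searches for an exact string:
```
repo:github-linguist/linguist (language:ruby OR language:python) AND NOT path:"/tests/" "fatal error"
```
Because adjacent terms separated by whitespace are implicitly ANDed, the explicit AND keyword here is optional.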
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Loggie", "subcategory": "Observability" }
[ { "data": "Used to limit the size of a single original log line, to prevent very large single-line log entries from affecting Loggie's memory usage and stability. This is a built-in source interceptor and is loaded by default. Example: ``` interceptors:
  - type: maxbytes
    maxBytes: 102400
```
| field | type | required | default | description |
|:------|:-----|:---------|:--------|:------------|
| maxBytes | int | False |  | The maximum number of bytes in a single line. The excess part will be discarded. |" } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Logz.io", "subcategory": "Observability" }
[ { "data": "Learn how to make the most out of the Logz.io platform. Engage in a dynamic conversation with your data. A unified dashboard to monitor and quickly troubleshoot your data. Monitor and troubleshoot applications deployed in Kubernetes environments. All the different ways to send your data to Logz.io. Grow your own integration. Send logs, metrics, and traces data quickly and easily. Troubleshoot common log related issues. Manage your accounts & optimize costs." } ]
{ "category": "Observability and Analysis", "file_name": "privacypolicy.md", "project_name": "Mackerel", "subcategory": "Observability" }
[ { "data": "- List Services: GET /api/v0/services
- Register Services: POST /api/v0/services
- Delete Services: DELETE /api/v0/services/<serviceName>
- List Roles: GET /api/v0/services/<serviceName>/roles
- Register Roles: POST /api/v0/services/<serviceName>/roles
- Delete Roles: DELETE /api/v0/services/<serviceName>/roles/<roleName>
- List Metric Names: GET /api/v0/services/<serviceName>/metric-names
- Register Host Information: POST /api/v0/hosts
- Get Host Information: GET /api/v0/hosts/<hostId>
- Get Host Information By Custom Identifier: GET /api/v0/hosts-by-custom-identifier/<customIdentifier>
- Update Host Information: PUT /api/v0/hosts/<hostId>
- Update Host Status: POST /api/v0/hosts/<hostId>/status
- Bulk Update Host Statuses: POST /api/v0/hosts/bulk-update-statuses
- Update Host Roles: PUT /api/v0/hosts/<hostId>/role-fullnames
- Retire Hosts: POST /api/v0/hosts/<hostId>/retire
- Bulk Retire Hosts: POST /api/v0/hosts/bulk-retire
- List Hosts: GET /api/v0/hosts
- List Metric Names: GET /api/v0/hosts/<hostId>/metric-names
- List Monitoring Statuses: GET /api/v0/hosts/<hostId>/monitored-statuses
- Post Metrics: POST /api/v0/tsdb
- Get Host Metrics: GET /api/v0/hosts/<hostId>/metrics
- Get Latest Metrics: GET /api/v0/tsdb/latest
- Post Graph Definitions: POST /api/v0/graph-defs/create
- Delete Graph Definitions: DELETE /api/v0/graph-defs
- Post Service Metrics: POST /api/v0/services/<serviceName>/tsdb
- Get Service Metrics: GET /api/v0/services/<serviceName>/metrics
- Post Monitoring Check Reports: POST /api/v0/monitoring/checks/report
- Get Host Metadata: GET /api/v0/hosts/<hostId>/metadata/<namespace>
- Register/Update Host Metadata: PUT /api/v0/hosts/<hostId>/metadata/<namespace>
- Delete Host Metadata: DELETE /api/v0/hosts/<hostId>/metadata/<namespace>
- List Host Metadata: GET /api/v0/hosts/<hostId>/metadata
- Get Service Metadata: GET /api/v0/services/<serviceName>/metadata/<namespace>
- Register/Update Service Metadata: PUT /api/v0/services/<serviceName>/metadata/<namespace>
- Delete Service Metadata: DELETE /api/v0/services/<serviceName>/metadata/<namespace>
- List Service Metadata: GET /api/v0/services/<serviceName>/metadata
- Get Role Metadata: GET /api/v0/services/<serviceName>/roles/<roleName>/metadata/<namespace>
- Register/Update Role Metadata: PUT /api/v0/services/<serviceName>/roles/<roleName>/metadata/<namespace>
- Delete Role Metadata: DELETE /api/v0/services/<serviceName>/roles/<roleName>/metadata/<namespace>
- List Role Metadata: GET /api/v0/services/<serviceName>/roles/<roleName>/metadata
- Register Monitor Configurations: POST /api/v0/monitors
- List Monitor Configurations: GET /api/v0/monitors
- Get Monitor Configurations: GET /api/v0/monitors/<monitorId>
- Update Monitor Configurations: PUT /api/v0/monitors/<monitorId>
- Delete Monitor Configurations: DELETE /api/v0/monitors/<monitorId>
- Register Downtime: POST /api/v0/downtimes
- List Downtime: GET /api/v0/downtimes
- Update Downtime: PUT /api/v0/downtimes/<downtimeId>
- Delete Downtime: DELETE /api/v0/downtimes/<downtimeId>
- Get Notification Channels: GET /api/v0/channels
- Register Notification Channels: POST /api/v0/channels
- Delete Notification Channels: DELETE /api/v0/channels/<channelId>
- Register Notification Groups: POST /api/v0/notification-groups
- Get Notification Groups: GET /api/v0/notification-groups
- Update Notification Groups: PUT /api/v0/notification-groups/<notificationGroupId>
- Delete Notification Groups: DELETE /api/v0/notification-groups/<notificationGroupId>
- List Alerts: GET /api/v0/alerts
- Get Alert: GET /api/v0/alerts/<alertId>
- Update Alert: PUT /api/v0/alerts/<alertId>
- Close Alerts: POST /api/v0/alerts/<alertId>/close
- List Alert Group Settings: GET /api/v0/alert-group-settings
- Register Alert Group Settings: POST /api/v0/alert-group-settings
- Get Alert Group Settings: GET /api/v0/alert-group-settings/<alertGroupSettingId>
- Update Alert Group Settings: PUT /api/v0/alert-group-settings/<alertGroupSettingId>
- Delete Alert Group Settings: DELETE /api/v0/alert-group-settings/<alertGroupSettingId>
- Create Dashboards: POST /api/v0/dashboards
- Get Dashboards: GET /api/v0/dashboards/<dashboardId>
- Update Dashboards: PUT /api/v0/dashboards/<dashboardId>
- Delete Dashboards: DELETE /api/v0/dashboards/<dashboardId>
- List Dashboards: GET /api/v0/dashboards
- Create Graph Annotations: POST /api/v0/graph-annotations
- Get Graph Annotations: GET /api/v0/graph-annotations
- Update Graph Annotations: PUT /api/v0/graph-annotations/<annotationId>
- Delete Graph Annotations: DELETE /api/v0/graph-annotations/<annotationId>
- List Users that are Organization Members: GET /api/v0/users
- Delete Users that are Organization Members: DELETE /api/v0/users/<userId>
- List Invitations: GET /api/v0/invitations
- Create Invitations: POST /api/v0/invitations
- Cancel Invitations: POST /api/v0/invitations/revoke
- Get Organization Information: GET /api/v0/org
- List AWS Integration Settings: GET /api/v0/aws-integrations
- Get AWS Integration Settings: GET /api/v0/aws-integrations/<awsIntegrationId>
- Register AWS Integration Settings: POST /api/v0/aws-integrations
- Update AWS Integration Settings: PUT /api/v0/aws-integrations/<awsIntegrationId>
- Delete AWS Integration Settings: DELETE /api/v0/aws-integrations/<awsIntegrationId>
- Generate AWS Integration External ID: POST /api/v0/aws-integrations-external-id
- List Excludable Metrics for AWS Integration: GET /api/v0/aws-integrations-excludable-metrics" } ]
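As a hedged illustration of calling one of the endpoints listed above, the sketch below requests the service list. The host name api.mackerelio.com and the X-Api-Key authentication header are assumptions based on Mackerel's API conventions; they are not shown in the endpoint list itself.
```
# Sketch only: replace <YOUR_API_KEY> with a real key; host and header are assumed, not documented above.
curl -s -H "X-Api-Key: <YOUR_API_KEY>" https://api.mackerelio.com/api/v0/services
```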
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Logging Operator (Kube Logging)", "subcategory": "Observability" }
[ { "data": "Caution: The master branch is under heavy development. Use releases instead of the master branch to get stable software. With the 4.3.0 release, the chart is now distributed through an OCI registry. For instructions on how to interact with OCI registries, please take a look at Use OCI-based registries. For instructions on installing the previous 4.2.3 version, see Installation for 4.2. To install the Logging operator using Helm, complete the following steps. Note: You need Helm v3.8 or later to be able to install the chart from an OCI registry. Install the Logging operator into the logging namespace: ``` helm upgrade --install --wait --create-namespace --namespace logging logging-operator oci://ghcr.io/kube-logging/helm-charts/logging-operator ``` Expected output: ``` Release \"logging-operator\" does not exist. Installing it now. Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0 Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840 NAME: logging-operator LAST DEPLOYED: Wed Aug 9 11:02:12 2023 NAMESPACE: logging STATUS: deployed REVISION: 1 TEST SUITE: None ``` Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public. Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522 Note: By default, the Logging operator Helm chart doesnt install the logging resource. If you want to install it with Helm, set the logging.enabled value to true. For details on customizing the installation, see the Helm chart values. To verify that the installation was successful, complete the following steps. Check the status of the pods. You should see a new logging-operator pod. ``` kubectl -n logging get pods ``` Expected output: ``` NAME READY STATUS RESTARTS AGE logging-operator-5df66b87c9-wgsdf 1/1 Running 0 21s ``` Check the CRDs. You should see the following five new CRDs. ``` kubectl get crd ``` Expected output: ``` NAME CREATED AT clusterflows.logging.banzaicloud.io 2023-08-10T12:05:04Z clusteroutputs.logging.banzaicloud.io 2023-08-10T12:05:04Z eventtailers.logging-extensions.banzaicloud.io 2023-08-10T12:05:04Z flows.logging.banzaicloud.io 2023-08-10T12:05:04Z fluentbitagents.logging.banzaicloud.io 2023-08-10T12:05:04Z hosttailers.logging-extensions.banzaicloud.io 2023-08-10T12:05:04Z loggings.logging.banzaicloud.io 2023-08-10T12:05:05Z nodeagents.logging.banzaicloud.io 2023-08-10T12:05:05Z outputs.logging.banzaicloud.io 2023-08-10T12:05:05Z syslogngclusterflows.logging.banzaicloud.io 2023-08-10T12:05:05Z syslogngclusteroutputs.logging.banzaicloud.io 2023-08-10T12:05:05Z syslogngflows.logging.banzaicloud.io 2023-08-10T12:05:05Z syslogngoutputs.logging.banzaicloud.io 2023-08-10T12:05:06Z ```" } ]
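Since the chart does not install the logging resource by default, a hedged variant of the same install command enables it by setting the logging.enabled value mentioned above to true:
```
helm upgrade --install --wait --create-namespace --namespace logging \
  logging-operator oci://ghcr.io/kube-logging/helm-charts/logging-operator \
  --set logging.enabled=true
```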
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Micrometer", "subcategory": "Observability" }
[ { "data": "By using the Micrometer Docs Generator project and by implementing the ObservationDocumentation, SpanDocumentation or MeterDocumentation interfaces as an enum, you can scan your sources and generate Asciidoctor documentation. This lets you maintain the documentation for your observability instrumentation in code, and as long as you use the enum implementation in your instrumentation, it ensures that your documentation stays in-sync with the instrumentation. The following example shows a Maven pom.xml with the Micrometer Docs Generator project: ``` <?xml version=\"1.0\" encoding=\"UTF-8\"?> <!-- Copyright 2023 VMware, Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <project xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://maven.apache.org/POM/4.0.0\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>micrometer-docs-generator-example</artifactId> <packaging>jar</packaging> <name>micrometer-docs-generator-example</name> <description>micrometer-docs-generator-example</description> <properties> <micrometer-docs-generator.version>1.0.0</micrometer-docs-generator.version> <micrometer-docs-generator.inputPath>${maven.multiModuleProjectDirectory}/folder-with-sources-to-scan/</micrometer-docs-generator.inputPath> <micrometer-docs-generator.inclusionPattern>.*</micrometer-docs-generator.inclusionPattern> <micrometer-docs-generator.outputPath>${maven.multiModuleProjectDirectory}/target/output-folder-with-adocs/'</micrometer-docs-generator.outputPath> </properties> <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <executions> <execution> <id>generate-docs</id> <phase>prepare-package</phase> <goals> <goal>java</goal> </goals> <configuration> <mainClass>io.micrometer.docs.DocsGeneratorCommand</mainClass> <includePluginDependencies>true</includePluginDependencies> <arguments> <argument>${micrometer-docs-generator.inputPath}</argument> <argument>${micrometer-docs-generator.inclusionPattern}</argument> <argument>${micrometer-docs-generator.outputPath}</argument> </arguments> </configuration> </execution> </executions> <dependencies> <dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-docs-generator</artifactId> <version>${micrometer-docs-generator.version}</version> <type>jar</type> </dependency> </dependencies> </plugin> </plugins> </build> <repositories> <repository> <id>spring-snapshots</id> <name>Spring Snapshots</name> <url>https://repo.spring.io/snapshot</url> <!-- For Snapshots --> <snapshots> <enabled>true</enabled> </snapshots> <releases> <enabled>false</enabled> </releases> </repository> <repository> <id>spring-milestones</id> <name>Spring Milestones</name> <url>https://repo.spring.io/milestone</url> <!-- For Milestones --> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> </project>``` The following example shows a Gradle build.gradle with the Micrometer 
Docs Generator project: ``` repositories { maven { url 'https://repo.spring.io/snapshot' } // for snapshots maven { url 'https://repo.spring.io/milestone' } // for milestones mavenCentral() // for GA } ext { micrometerDocsVersion=\"1.0.2\" } configurations { adoc } dependencies { adoc \"io.micrometer:micrometer-docs-generator:$micrometerDocsVersion\" } task generateObservabilityDocs(type: JavaExec) { mainClass = \"io.micrometer.docs.DocsGeneratorCommand\" classpath configurations.adoc // input folder, inclusion pattern, output folder args project.rootDir.getAbsolutePath(), \".*\", project.rootProject.buildDir.getAbsolutePath() }``` Running these tasks would lead to generation of adoc files similar to these: ``` [[observability-metrics]] === Observability - Metrics Below you can find a list of all samples declared by this project. [[observability-metrics-task-runner-observation]] ==== Task Runner Observation Observation created when a task runner is executed. Metric name `spring.cloud.task.runner` (defined by convention class `org.springframework.cloud.task.configuration.observation.DefaultTaskObservationConvention`). Type `timer` and base unit `seconds`. Fully qualified name of the enclosing class `org.springframework.cloud.task.configuration.observation.TaskDocumentedObservation`. IMPORTANT: All tags must be prefixed with `spring.cloud.task` prefix! .Low cardinality Keys |=== |Name | Description |`spring.cloud.task.runner.bean-name`|Name of the bean that was executed by Spring Cloud" }, { "data": "|===``` ``` [[observability-spans]] === Observability - Spans Below you can find a list of all spans declared by this project. [[observability-spans-task-runner-observation]] ==== Task Runner Observation Span Observation created when a task runner is executed. Span name `spring.cloud.task.runner` (defined by convention class `org.springframework.cloud.task.configuration.observation.DefaultTaskObservationConvention`). Fully qualified name of the enclosing class `org.springframework.cloud.task.configuration.observation.TaskDocumentedObservation`. IMPORTANT: All tags and event names must be prefixed with `spring.cloud.task` prefix! .Tag Keys |=== |Name | Description |`spring.cloud.task.runner.bean-name`|Name of the bean that was executed by Spring Cloud Task. |===``` The main entry class for the docs generation is the DocsGeneratorCommand class. This class takes the following options: | 0 | 1 | |:-|:--| | --metrics | Generate metrics documentation. | | --spans | Generate spans documentation. | | --conventions | Generate observation conventions documentation. | | --metrics-template=<location> | Handlebars template file location. This can be a path in the classpath or file system, such as templates/metrics.adoc.hbs or /home/example/bar.hbs | | --spans-template=<location> | Handlebars template file location. This can be a path in the classpath or file system, such as templates/spans.adoc.hbs or /home/example/bar.hbs | | --conventions-template=<location> | Handlebars template file location. This can be a path in the classpath or file system, such as templates/conventions.adoc.hbs or /home/example/bar.hbs | | --metrics-output=<location> | Generated metrics doc file location. This can be an absolute path or a path relative to the output directory. Default: _metrics.adoc | | --spans-output=<location> | Generated spans doc file location. This can be an absolute path or a path relative to the output directory. 
Default: _spans.adoc | | --conventions-output=<location> | Generated observation convention doc file location. This can be an absolute path or a path relative to the output directory. Default: _conventions.adoc | --metrics Generate metrics documentation. --spans Generate spans documentation. --conventions Generate observation conventions documentation. --metrics-template=<location> Handlebars template file location. This can be a path in the classpath or file system, such as templates/metrics.adoc.hbs or /home/example/bar.hbs --spans-template=<location> Handlebars template file location. This can be a path in the classpath or file system, such as templates/spans.adoc.hbs or /home/example/bar.hbs --conventions-template=<location> Handlebars template file location. This can be a path in the classpath or file system, such as templates/conventions.adoc.hbs or /home/example/bar.hbs --metrics-output=<location> Generated metrics doc file location. This can be an absolute path or a path relative to the output directory. Default: _metrics.adoc --spans-output=<location> Generated spans doc file location. This can be an absolute path or a path relative to the output directory. Default: _spans.adoc --conventions-output=<location> Generated observation convention doc file location. This can be an absolute path or a path relative to the output directory. Default: _conventions.adoc VMware, Inc. or its affiliates. Terms of Use Privacy" } ]
{ "category": "Observability and Analysis", "file_name": "observation.html.md", "project_name": "Micrometer", "subcategory": "Observability" }
[ { "data": "Micrometer Tracing supports the following tracers: OpenZipkin Brave and OpenTelemetry. The following example shows the required dependency in Gradle (assuming that the Micrometer Tracing BOM has been added): ``` implementation 'io.micrometer:micrometer-tracing-bridge-brave'``` ``` implementation 'io.micrometer:micrometer-tracing-bridge-otel'``` The following example shows the required dependency in Maven (assuming that the Micrometer Tracing BOM has been added): ```
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>
``` ```
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-tracing-bridge-otel</artifactId>
</dependency>
``` Note: remember to pick only one bridge. You should not have two bridges on the classpath." } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Netdata", "subcategory": "Observability" }
[ { "data": "Netdata is very flexible and can be used to monitor all kinds of infrastructure. Read more about possible Deployment guides to understand what better suites your needs. The easiest way to install Netdata on your system is via Netdata Cloud, to do so: Once Netdata is installed, you can see the node live in your Netdata Space and charts in the Metrics tab. Take a look at our Dashboards and Charts section to read more about Netdata's features. If you are looking to configure your Netdata Agent installation, refer to the respective section in our Documentation. If Netdata didn't autodetect all the hardware, containers, services, or applications running on your node, you should learn more about how data collectors work. If there's a supported integration for metrics you need, refer to its respective page and read about its requirements to configure your endpoint to publish metrics in the correct format and endpoint. Netdata comes with hundreds of pre-configured alerts, designed by our monitoring gurus in parallel with our open-source community, but you may want to edit alerts or enable notifications to customize your Netdata experience. Go through our deployment guides, for suggested configuration changes for production deployments. By default, Netdata's installation scripts enable automatic updates for both nightly and stable release channels. If you preferred to update your Netdata Agent manually, you can disable automatic updates by using the --no-updates option when you install or update Netdata using the automatic one-line installation script. ``` wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh && sh /tmp/netdata-kickstart.sh --no-updates``` With automatic updates disabled, you can choose exactly when and how you update Netdata. Nightly: We create nightly builds every 24 hours. They contain fully-tested code that fixes bugs or security flaws, or introduces new features to Netdata. Every nightly release is a candidate for then becoming a stable releasewhen we're ready, we simply change the release tags on GitHub. That means nightly releases are stable and proven to function correctly in the vast majority of Netdata use cases. That's why nightly is the best choice for most Netdata users. Stable: We create stable releases whenever we believe the code has reached a major milestone. Most often, stable releases correlate with the introduction of new, significant features. Stable releases might be a better choice for those who run Netdata in mission-critical production systems, as updates will come more infrequently, and only after the community helps fix any bugs that might have been introduced in previous releases. Pros of using nightly releases: Pros of using stable releases: Starting with v1.30, Netdata collects anonymous usage information by default and sends it to a self-hosted PostHog instance within the Netdata" }, { "data": "Read about the information collected, and learn how to-opt, on our anonymous statistics page. The usage statistics are vital for us, as we use them to discover bugs and prioritize new features. We thank you for actively contributing to Netdata's future. We are tracking a few issues related to installation and packaging. Our regular installation process requires access to a number of GitHub services that do not have IPv6 connectivity. As such, using the kickstart install script on such hosts generally does not work, and will typically fail with an error from cURL or wget about connection timeouts. 
You can check if your system is affected by this by attempting to connect to (or ping) https://api.github.com/. Failing to connect indicates that you are affected by this issue. There are three potential workarounds for this: If you're running an older Linux distribution or one that has reached EOL, such as Ubuntu 14.04 LTS, Debian 8, or CentOS 6, your Agent may not be able to securely connect to Netdata Cloud due to an outdated version of OpenSSL. These old versions of OpenSSL cannot perform hostname validation, which helps securely encrypt SSL connections. If you choose to continue using the outdated version of OpenSSL, your node will still connect to Netdata Cloud, albeit with hostname verification disabled. Without verification, your Netdata Cloud connection could be vulnerable to man-in-the-middle attacks. To install the Agent on certain CentOS and RHEL systems, you must enable non-default repositories, such as EPEL or PowerTools, to gather hard dependencies. See the CentOS 6 and CentOS 8 sections for more information. If you see an error similar to Access to file is not permitted: /usr/share/netdata/web/index.html when you try to visit the Agent dashboard at http://NODE:19999, you need to update Netdata's permissions to match those of your system. Run ls -la /usr/share/netdata/web/index.html to find the file's permissions. You may need to change this path based on the error you're seeing in your browser. In the below example, the file is owned by the user root and the group root. ```
ls -la /usr/share/netdata/web/index.html
-rw-r--r--. 1 root root 89377 May 5 06:30 /usr/share/netdata/web/index.html
``` These files need to have the same user and group used to install your Netdata. Suppose you installed Netdata with user netdata and group netdata; in this scenario you will need to change the ownership of the web files accordingly, for example (adjust the path and user/group to match your install): ```
chown -R netdata:netdata /usr/share/netdata/web
``` We've received reports from the community about issues with running the kickstart.sh script on systems that have both a distribution-installed version of OpenSSL and a manually-installed local version. The Agent's installer cannot handle both. Our current build process has some issues when using certain configurations of the clang C compiler on Linux. See the section on 'nonrepresentable section on output' errors for a workaround." } ]
{ "category": "Observability and Analysis", "file_name": "getting-started.md", "project_name": "Netdata", "subcategory": "Observability" }
[ { "data": "Netdata can be used to monitor all kinds of infrastructure, from tiny stand-alone IoT devices to complex hybrid setups combining on-premise and cloud infrastructure, mixing bare-metal servers, virtual machines and containers. There are three components that structure your Netdata ecosystem:
- Netdata Agents: monitor the physical or virtual nodes of your infrastructure, including all applications and containers running on them. Netdata Agents are open source, licensed under GPL v3+.
- Netdata Parents: create observability centralization points within your infrastructure, offload Netdata Agent functions from your production systems, and provide high availability of your data, increased data retention, and isolation of your nodes. Netdata Parents are implemented using the Netdata Agent software; any Netdata Agent can be an Agent for a node and a Parent for other Agents at the same time. It is recommended to set up multiple Netdata Parents; they will all be seamlessly integrated by Netdata Cloud into one monitoring solution (a minimal streaming sketch follows this overview).
- Netdata Cloud: our SaaS, combining all your infrastructure, all your Netdata Agents and Parents, into one uniform, distributed, scalable monitoring database, offering advanced data slicing and dicing capabilities, custom dashboards, advanced troubleshooting tools, user management, centralized management of alerts, and more.
The Netdata Agent is a highly modular piece of software, providing data collection via numerous plugins, an in-house crafted time-series database, a query engine, health monitoring and alerts, machine learning and anomaly detection, and metrics exporting to third-party systems." } ]
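A minimal sketch of the child-to-Parent streaming mentioned above, assuming Netdata's stream.conf layout. The section and key names are taken from the Agent's streaming configuration rather than from this page, and the destination and API key are placeholders:
```
# /etc/netdata/stream.conf on the child Agent -- sketch only, adjust to your setup
[stream]
    enabled = yes
    destination = parent.example.com:19999
    api key = 11111111-2222-3333-4444-555555555555
```
On the Parent, the same API key would need to be enabled in its own stream.conf for the stream to be accepted.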
{ "category": "Observability and Analysis", "file_name": "edit_usp=sharing.md", "project_name": "New Relic", "subcategory": "Observability" }
[ { "data": "| Unnamed: 0 | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | AA | AB | AC | AD | AE | AF | AG | AH | AI | AJ | AK | AL | AM | AN | AO | AP | AQ | AR | |-:|-:|:|:-|-:|:-|-:|:--|:|:-|:-|:-|:-|-:|:|:|:|-:|:--|:--|:--|-:|-:|:--|:-|:|:|:-|:|:--|--:|--:|:-|:-|:-|:--|--:|--:|--:|--:|--:|--:|--:|--:|--:| | 1 | nan | All dollar amounts are in USD. | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 2 | nan | The costs for Dynatrace were updated on January 7, 2022, based on pricing clarifications received since initial publication. | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 3 | nan | Data ingest cost (standard retention) was updated on May 8, 2022, based on ingest pricing updates.% of basic users increased to 30% based on updated" }, { "data": "| nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 4 | nan | Summary of monthly full-stack observability pricing | nan | nan | nan | nan | nan | nan | Monthly full-stack observability pricing by category | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 5 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 6 | nan | nan | New Relic | nan | Dynatrace | nan | Datadog | nan | nan | New Relic | New Relic | New Relic | nan | Dynatrace | Dynatrace | Dynatrace | nan | Datadog | Datadog | Datadog | nan | nan | nan | Assumptions | nan | nan | nan | nan | nan | nan | nan | Charts | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 7 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 8 | nan | Small engineering team | $2,275 | nan | $2,834 | nan | $10,866 | nan | nan | Small | Midsize | Large | nan | Small | Midsize | Large | nan | Small | Midsize | Large | nan | nan | nan | nan | Small | Midsize | Large | nan | Unit (per month) | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 9 | nan | Midsize engineering team | $11,837 | nan | $12,768 | nan | $32,022 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 10 | nan | Large engineering team | $25,007 | nan | $26,221 | nan | $72,139 | nan | Subscription-based pricing | nan | nan | nan | nan | nan | nan | nan | nan | nan | 
nan | nan | nan | nan | nan | Usage estimates | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 11 | nan | nan | nan | nan | nan | nan | nan | nan | Application performance monitoring (APM): APM, distributed tracing, and profiler | -- | -- | -- | nan | $2,656 | $12,089 | $24,857 | nan | $853 | $5,065 | $10,870 | nan | nan | APM | APM hosts | 20 | 125 | 225 | nan" }, { "data": "# APM hosts | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 12 | nan | nan | nan | nan | nan | nan | nan | nan | Infrastructure monitoring (infra): infra, containers, and custom metrics | -- | -- | -- | nan | included in APM | included in APM | included in APM | nan | $5,450 | $16,900 | $45,000 | nan | nan | APM | Profiled hosts | 10 | 40 | 65 | nan | # profiled hosts | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 13 | nan | nan | nan | nan | nan | nan | nan | nan | Log management (logs): ingested logs and indexed log events with 30-day retention in GB | -- | -- | -- | nan | $141 | $563 | $1,125 | nan | $4,150 | $8,500 | $13,250 | nan | nan | APM | Profiled container hosts | 50 | 100 | 150 | nan | # profiled container hours | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 14 | nan | nan | nan | nan | nan | nan | nan | nan | Digital experience monitoring (DEM): synthetics monitoring and real user monitoring (RUM) including browser and mobile monitoring | -- | -- | -- | nan | $12 | $41 | $102 | nan | $38 | $123 | $329 | nan | nan | APM | Indexed spans | 50000000 | 500000000 | 2000000000 | nan | # indexed spans (OpenTelemetry) | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 15 | nan | nan | nan | nan | nan | nan | nan | nan | Network performance monitoring (NPM): network hosts and flows | -- | -- | -- | nan | included in APM | included in APM | included in APM | nan | $125 | $685 | $1,316 | nan | nan | Infra | Infra hosts | 50 | 200 | 350 | nan | # hosts | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 16 | nan | nan | nan | nan | nan | nan | nan | nan | Serverless monitoring: serverless functions | -- | -- | -- | nan | $25 | $75 | $138 | nan | $250 | $750 | $1,375 | nan | nan | Infra | Container hours | 750000 | 1500000 | 2500000 | nan" }, { "data": "# container hours | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 17 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | Infra | Custom metrics | 75000 | 250000 | 750000 | nan | # metrics | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 18 | nan | nan | nan | nan | nan | nan | nan | nan | Usage-based pricing | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | Logs | Ingested logs (live and rehydrated in GB) | 2500 | 10000 | 20000 | nan | GB | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 19 | nan | nan | nan | nan | nan | nan | nan | nan | Data ingest | $1,291 | $5,732 | $10,862 | nan | -- | -- | -- | nan | -- | -- | -- | nan | nan | Logs | Indexed logs (30-day retention in GB) | 1560 | 3000 | 4500 | nan | Million log events | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 20 | nan | nan | 
Summary of estimated monthly full-stack observability costs (Small / Midsize / Large engineering teams):

| | New Relic | Dynatrace | Datadog |
|:--|:--|:--|:--|
| Small engineering team | $2,275 | $2,834 | $10,866 |
| Midsize engineering team | $11,837 | $12,768 | $32,022 |
| Large engineering team | $25,007 | $26,221 | $72,139 |

| | Small | Midsize | Large |
|:--|:--|:--|:--|
| Estimated amount saved per month with New Relic vs. Dynatrace | $559 | $931 | $1,214 |
| Estimated amount saved per month with New Relic vs. Datadog | $8,591 | $20,185 | $47,132 |
| Estimated percent savings per month with New Relic vs. Dynatrace | 19.72% | 7.29% | 4.63% |
| Estimated percent savings per month with New Relic vs. Datadog | 79.06% | 63.04% | 65.33% |
| Times more value for money with New Relic (X) vs. Dynatrace | 1.25 | 1.08 | 1.05 |
| Times more value for money with New Relic (X) vs. Datadog | 4.78 | 2.71 | 2.88 |

Workload and organization assumptions (Small / Midsize / Large):

| Category | Assumption | Small | Midsize | Large | Unit |
|:--|:--|:--|:--|:--|:--|
| DEM | Synthetics API test runs | 7,500 | 12,500 | 27,500 | # synthetic API test runs |
| DEM | Synthetics browser test runs | 1,500 | 5,000 | 15,000 | # synthetic browser test runs |
| DEM | RUM sessions | 35,000 | 125,000 | 300,000 | # RUM sessions |
| NPM | Network hosts | 25 | 75 | 125 | # hosts |
| NPM | Network flows | 125,000,000 | 250,000,000 | 550,000,000 | # flows |
| Serverless | Serverless functions | 50 | 150 | 275 | million invocations |
| Organization | Total engineers | 25 | 65 | 150 | # engineers |
| Users | User type: Basic (free) | 8 | 20 | 45 | 30% of all engineers |
| Users | User type: Core | 12 | 32 | 75 | 50% of all engineers |
| Users | User type: Full platform | 5 | 13 | 30 | 20% of all engineers |

Detailed breakdown of monthly full-stack observability pricing (values shown as Small / Midsize / Large).

Subscription-based pricing:

| Category | Line item | New Relic | Dynatrace | Datadog |
|:--|:--|:--|:--|:--|
| APM | APM hosts | -- | $1,380 / $8,625 / $15,525 | $310 / $2,635 / $4,960 |
| APM | APM + profiled hosts | -- | included in APM | $400 / $1,600 / $2,600 |
| APM | Profiled container hosts | -- | included in APM | $92 / $192 / $292 |
| APM | Indexed spans | -- | included in APM | $51 / $638 / $3,018 |
| Infra | Infra hosts | -- | $630 / $1,575 / $2,625 | $750 / $3,000 / $5,250 |
| Infra | Container hours | -- | included in infra | $1,200 / $2,400 / $4,000 |
| Infra | Custom metrics | -- | $646 / $1,889 / $6,707 | $3,500 / $11,500 / $35,750 |
| Logs | Ingested logs (live and rehydrated, GB) | -- | $141 / $563 / $1,125 | $250 / $1,000 / $2,000 |
| Logs | Indexed logs (30-day retention, GB) | -- | included in logs | $3,900 / $7,500 / $11,250 |
| DEM | Synthetics API test runs | -- | $1 / $1 / $3 | $4 / $6 / $14 |
| DEM | Synthetics browser test runs | -- | $2 / $6 / $17 | $18 / $60 / $180 |
| DEM | RUM sessions | -- | $10 / $34 / $83 | $16 / $56 / $135 |
| NPM | Network hosts | -- | included in APM | $125 / $375 / $625 |
| NPM | Network flows | -- | included in APM | $0 / $310 / $691 |
| Serverless | Serverless functions | -- | $25 / $75 / $138 | $250 / $750 / $1,375 |

Usage-based pricing:

| Category | Line item | New Relic | Dynatrace | Datadog |
|:--|:--|:--|:--|:--|
| Data ingest | Data ingest | $1,291 / $5,732 / $10,862 | -- | -- |
| Users | Basic users | $0 / $0 / $0 | -- | -- |
| Users | Core users | $588 / $1,568 / $3,675 | -- | -- |
| Users | Full platform users | $396 / $4,537 / $10,470 | -- | -- |
| Users | Users subtotal | $984 / $6,105 / $14,145 | -- | -- |
| | Total (all line items) | $2,275 / $11,837 / $25,007 | $2,834 / $12,768 / $26,221 | $10,866 / $32,022 / $72,139 |

New Relic pricing:

| Category | Pricing unit measure | Rate | Unit (per month) | Source |
|:--|:--|:--|:--|:--|
| Data ingest | Data ingest cost (standard retention) | $0.30 | per GB | New Relic pricing |
| Data ingest | GB/infra host/month | 13 | 90th percentile GB/infra host | -- |
| Data ingest | GB/APM host/month | 43.8 | 90th percentile GB/APM host | -- |
| Data ingest | GB/serverless function/month | | 90th percentile GB/APM host | -- |
| Data ingest | GB/network/month | 14 | 90th percentile GB/APM host | -- |
| Data ingest | GB/10K RUM session/month | 0.5 | 90th percentile GB/APM host | -- |
| Data ingest | Free data credits | 100 | GB | New Relic pricing |
| Users | Standard edition core user price | $49.00 | per user | New Relic pricing |
| Users | Pro edition core user promotional price | $49.00 | per user | New Relic pricing |
| Users | Standard edition full platform user price | | per user exceeding 1 | New Relic pricing |
| Users | Pro edition full platform user price (annual pool of funds option) | | per user | Average estimated price for configurations of these sizes |

Dynatrace pricing:

| Category | Pricing unit measure | Rate | Unit | Source | Example |
|:--|:--|:--|:--|:--|:--|
| APM/Infra/NPM | Full-stack monitoring | $69.00 | for 8 GB per host per month | https://www.dynatrace.com/pricing/#full-stack-monitoring | -- |
| APM/Infra/NPM | Infra monitoring | $21.00 | for 8 GB per host per month | https://www.dynatrace.com/pricing/#infrastructure-monitoring | -- |
| APM/Infra/NPM | Traces (OpenTelemetry) | 7,000 | Davis Data Units (DDUs) per million spans per month | https://www.dynatrace.com/support/help/monitoring-consumption/davis-data-units/custom-traces | If the average number of API calls per month is 1 million, the monthly DDU consumption is 7,000 DDUs. |
| APM/Infra/NPM | Custom metrics | 43.8 | DDUs per custom metric per month | https://www.dynatrace.com/support/help/monitoring-consumption/davis-data-units/metric-cost-calculation/ | 1 metric data point x 60 min x 24 h x 365 days x 0.001 metric weight = 525.6 DDUs per metric/year = 43.8 DDUs per metric/month. |
| APM/Infra/NPM | Custom metrics included per APM host | 500 | for 8 GB RAM per APM host | | -- |
| APM/Infra/NPM | Custom metrics included per infra host | 200 | for 8 GB RAM per infra host | https://www.dynatrace.com/support/help/monitoring-consumption/davis-data-units/metric-cost-calculation | -- |
| Logs | Free-tier DDUs | 200,000 | free DDUs per year | https://www.dynatrace.com/support/help/monitoring-consumption/davis-data-units/#davis-data-units-free-tier | -- |
| Logs | Logs | 0.0005 | DDUs per log event per month | https://www.dynatrace.com/support/help/monitoring-consumption/davis-data-units/log-monitoring-consumption | 1 million log records multiplied by a DDU weight of 0.0005 consumes a total of 500 DDUs. |
| Logs | Open ingestion | $25.00 | for 100K annual DDUs per month | https://www.dynatrace.com/pricing/#open-ingestion | -- |
| DEM | Digital experience monitoring | $11.00 | for 10K annual DEM units per month | https://www.dynatrace.com/pricing/#digital-experience-monitoring | -- |
| DEM | Synthetics API test runs | 0.1 | consumption per unit of measure | | -- |
| DEM | RUM sessions | 0.25 | consumption per unit of measure | https://www.dynatrace.com/support/help/monitoring-consumption/digital-experience-monitoring-units | -- |
| Serverless | Serverless monitoring | 2,000 | DDUs per function per month | https://www.dynatrace.com/support/help/monitoring-consumption/davis-data-units/serverless-monitoring | 1 AWS Lambda function x 1 million invocations x 0.002 DDU weight = 2,000 DDUs per month per function. |

Dynatrace assumptions:

| Assumption | Value | Unit |
|:--|:--|:--|
| Log events per GB of logs | 450,000 | log events per GB of logs |
| DDU price | | USD / DDU |

Dynatrace calculations (Small / Midsize / Large):

| Calculation | Small | Midsize | Large | Unit |
|:--|:--|:--|:--|:--|
| Log events | 1,125,000,000 | 4,500,000,000 | 9,000,000,000 | log events |
| Log DDUs | 562,500 | 2,250,000 | 4,500,000 | DDUs |
| Custom metrics | 3,235,000 | 10,750,000 | 32,500,000 | DDUs |
| Serverless functions | 100,000 | 300,000 | 550,000 | DDUs |
| Traces | 350,000 | 3,500,000 | 14,000,000 | DDUs |

Datadog pricing:

| Category | Pricing unit measure | Rate | Unit (per month) | Source |
|:--|:--|:--|:--|:--|
| APM | APM hosts | $31.00 | per host | https://www.datadoghq.com/pricing/?product=apm--continuous-profiler |
| APM | APM + continuous profiler hosts | $40.00 | per host | https://docs.datadoghq.com/accountmanagement/billing/apmtracing_profiler |
| APM | Profiled container hosts | $2.00 | per additional container per host exceeding 4 | https://docs.datadoghq.com/accountmanagement/billing/apmtracing_profiler (additional containers are billed at $0.002 per container per hour) |
| APM | Indexed spans | $1.70 | per million indexed spans | https://docs.datadoghq.com/accountmanagement/billing/apmtracing_profiler |
| Infra | Infra hosts (Pro edition) | $15.00 | per host | |
| Infra | Container hours (Pro edition) | $0.0020 | per container per hour for containers exceeding 10 per host | https://www.datadoghq.com/pricing/?product=infrastructure#infrastructure-common-questions |
| Infra | Custom metrics (Pro edition) | $0.05 | per metric exceeding 100/host | https://docs.datadoghq.com/accountmanagement/billing/custommetrics/?tab=countrate (as of October 2021) |
| Logs | Ingested logs (live and rehydrated, GB) | $0.10 | per GB | https://www.datadoghq.com/pricing/?product=log-management |
| Logs | Indexed logs (30-day retention, GB) | $2.50 | per million log events | https://www.datadoghq.com/pricing/?product=log-management |
| DEM | Synthetics API test runs | $5.00 | per 10K | https://www.datadoghq.com/pricing/?product=synthetic-monitoring |
| DEM | Synthetics browser test runs | $12.00 | per thousand | |
| DEM | RUM sessions | $0.45 | per 1K sessions | https://www.datadoghq.com/pricing/?product=real-user-monitoring |
| NPM | Network hosts (NPM) | $5.00 | per host | https://www.datadoghq.com/pricing/?product=network-monitoring |
| NPM | Network flows | $1.27 | per million flows exceeding 6 million per host | https://www.datadoghq.com/pricing/?product=network-monitoring |
| Serverless | Serverless functions | $5.00 | per million invocations | https://www.datadoghq.com/pricing/?product=serverless |
" } ]
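To make the Dynatrace DDU arithmetic above easier to check, here is a small, hedged Python sketch that recomputes the log and serverless rows of the Calculations table from the stated weights (0.0005 DDUs per log event, and 2,000 DDUs per million serverless invocations per the 0.002 DDU weight). The scenario figures come from the tables above; the variable names and structure are illustrative, not part of any vendor tooling.

```python
# Illustrative sketch only (not an official calculator): it reproduces two rows
# of the Dynatrace "Calculations" table above from the stated DDU weights.
DDU_PER_LOG_EVENT = 0.0005            # DDUs per log event per month (stated above)
DDU_PER_MILLION_INVOCATIONS = 2000    # 1M invocations x 0.002 DDU weight (stated above)

scenarios = {
    "Small":   {"log_events": 1_125_000_000, "million_invocations": 50},
    "Midsize": {"log_events": 4_500_000_000, "million_invocations": 150},
    "Large":   {"log_events": 9_000_000_000, "million_invocations": 275},
}

for name, s in scenarios.items():
    log_ddus = s["log_events"] * DDU_PER_LOG_EVENT
    serverless_ddus = s["million_invocations"] * DDU_PER_MILLION_INVOCATIONS
    print(f"{name}: log DDUs = {log_ddus:,.0f}, serverless DDUs = {serverless_ddus:,.0f}")

# Matches the Calculations table:
#   Small:   log DDUs = 562,500     serverless DDUs = 100,000
#   Midsize: log DDUs = 2,250,000   serverless DDUs = 300,000
#   Large:   log DDUs = 4,500,000   serverless DDUs = 550,000
```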
{ "category": "Observability and Analysis", "file_name": "reporters.html.md", "project_name": "Micrometer", "subcategory": "Observability" }
[ { "data": "getting-help documentation getting-started upgrading using features web data io messaging container-images actuator deployment native-image cli build-tool-plugins howto application-properties configuration-metadata auto-configuration-classes test-auto-configuration executable-jar dependency-versions actuator-api#audit-events actuator-api#audit-events.retrieving actuator-api#audit-events.retrieving.query-parameters actuator-api#audit-events.retrieving.response-structure actuator-api#beans actuator-api#beans.retrieving actuator-api#beans.retrieving.response-structure actuator-api#caches actuator-api#caches.all actuator-api#caches.all.response-structure actuator-api#caches.evict-all actuator-api#caches.evict-named actuator-api#caches.evict-named.request-structure actuator-api#caches.named actuator-api#caches.named.query-parameters actuator-api#caches.named.response-structure actuator-api#conditions actuator-api#conditions.retrieving actuator-api#conditions.retrieving.response-structure actuator-api#configprops actuator-api#configprops.retrieving actuator-api#configprops.retrieving-by-prefix actuator-api#configprops.retrieving-by-prefix.response-structure actuator-api#configprops.retrieving.response-structure actuator-api#env actuator-api#env.entire actuator-api#env.entire.response-structure actuator-api#env.single-property actuator-api#env.single-property.response-structure actuator-api#flyway actuator-api#flyway.retrieving actuator-api#flyway.retrieving.response-structure actuator-api#health actuator-api#health.retrieving actuator-api#health.retrieving-component actuator-api#health.retrieving-component-nested actuator-api#health.retrieving-component-nested.response-structure actuator-api#health.retrieving-component.response-structure actuator-api#health.retrieving.response-structure actuator-api#heapdump actuator-api#heapdump.retrieving actuator-api#httpexchanges actuator-api#httpexchanges.retrieving actuator-api#http-trace-retrieving actuator-api#httpexchanges.retrieving.response-structure actuator-api#http-trace-retrieving-response-structure actuator-api#overview actuator-api#overview.endpoint-urls actuator-api#overview.timestamps actuator-api#info actuator-api#info.retrieving actuator-api#info.retrieving.response-structure actuator-api#info.retrieving.response-structure.build actuator-api#info.retrieving.response-structure.git actuator-api#integrationgraph actuator-api#integrationgraph.rebuilding actuator-api#integrationgraph.retrieving actuator-api#integrationgraph.retrieving.response-structure actuator-api#liquibase actuator-api#liquibase.retrieving actuator-api#liquibase.retrieving.response-structure actuator-api#logfile actuator-api#logfile.retrieving actuator-api#logfile.retrieving-part actuator-api#loggers actuator-api#loggers.all actuator-api#loggers.all.response-structure actuator-api#loggers.clearing-level actuator-api#loggers.group actuator-api#loggers.group-setting-level actuator-api#loggers.group-setting-level.request-structure actuator-api#loggers.group.response-structure actuator-api#loggers.setting-level actuator-api#loggers.setting-level.request-structure actuator-api#loggers.single actuator-api#loggers.single.response-structure actuator-api#mappings actuator-api#mappings.retrieving actuator-api#mappings.retrieving.response-structure actuator-api#mappings.retrieving.response-structure-dispatcher-handlers actuator-api#mappings.retrieving.response-structure-dispatcher-servlets actuator-api#mappings.retrieving.response-structure-servlet-filters 
actuator-api#mappings.retrieving.response-structure-servlets actuator-api#metrics actuator-api#metrics.drilling-down actuator-api#metrics.retrieving-metric actuator-api#metrics.retrieving-metric.query-parameters actuator-api#metrics.retrieving-metric.response-structure actuator-api#metrics.retrieving-names actuator-api#metrics.retrieving-names.response-structure actuator-api#prometheus actuator-api#prometheus.retrieving actuator-api#prometheus.retrieving-names actuator-api#prometheus.retrieving.query-parameters actuator-api#quartz actuator-api#quartz.job actuator-api#quartz.job-group actuator-api#quartz.job-group.response-structure actuator-api#quartz.job-groups actuator-api#quartz.job-groups.response-structure actuator-api#quartz.job.response-structure actuator-api#quartz.report actuator-api#quartz.report.response-structure actuator-api#quartz.trigger actuator-api#quartz.trigger-group actuator-api#quartz.trigger-group.response-structure actuator-api#quartz.trigger-groups actuator-api#quartz.trigger-groups.response-structure actuator-api#quartz.trigger.calendar-interval-response-structure actuator-api#quartz.trigger.common-response-structure actuator-api#quartz.trigger.cron-response-structure actuator-api#quartz.trigger.custom-response-structure actuator-api#quartz.trigger.daily-time-interval-response-structure actuator-api#quartz.trigger.simple-response-structure actuator-api#sbom actuator-api#sbom.retrieving-available-sboms actuator-api#sbom.retrieving-available-sboms.response-structure actuator-api#sbom.retrieving-single-sbom actuator-api#sbom.retrieving-single-sbom.response-structure actuator-api#scheduled-tasks actuator-api#scheduled-tasks.retrieving actuator-api#scheduled-tasks.retrieving.response-structure actuator-api#sessions actuator-api#sessions.deleting actuator-api#sessions.retrieving actuator-api#sessions.retrieving-id actuator-api#sessions.retrieving-id.response-structure actuator-api#sessions.retrieving.query-parameters actuator-api#sessions.retrieving.response-structure actuator-api#shutdown actuator-api#shutdown.shutting-down actuator-api#shutdown.shutting-down.response-structure actuator-api#startup actuator-api#startup.retrieving actuator-api#startup.retrieving.drain actuator-api#startup.retrieving.response-structure actuator-api#startup.retrieving.snapshot actuator-api#threaddump actuator-api#threaddump.retrieving-json actuator-api#threaddump.retrieving-json.response-structure actuator-api#threaddump.retrieving-text gradle-plugin#aot gradle-plugin#aot.processing-applications gradle-plugin#aot.processing-tests gradle-plugin#getting-started gradle-plugin#gradle-plugin gradle-plugin#integrating-with-actuator gradle-plugin#integrating-with-actuator.build-info gradle-plugin#introduction gradle-plugin#managing-dependencies gradle-plugin#managing-dependencies.dependency-management-plugin gradle-plugin#managing-dependencies.dependency-management-plugin.customizing gradle-plugin#managing-dependencies.dependency-management-plugin.learning-more gradle-plugin#managing-dependencies.dependency-management-plugin.using-in-isolation gradle-plugin#managing-dependencies.gradle-bom-support gradle-plugin#managing-dependencies.gradle-bom-support.customizing gradle-plugin#build-image gradle-plugin#build-image.customization gradle-plugin#build-image.customization.tags gradle-plugin#build-image.docker-daemon gradle-plugin#build-image.docker-registry gradle-plugin#build-image.examples gradle-plugin#build-image.examples.builder-configuration gradle-plugin#build-image.examples.buildpacks 
gradle-plugin#build-image.examples.caches gradle-plugin#build-image.examples.custom-image-builder gradle-plugin#build-image.examples.custom-image-name gradle-plugin#build-image.examples.docker gradle-plugin#build-image.examples.docker.auth gradle-plugin#build-image.examples.docker.colima gradle-plugin#build-image.examples.docker.minikube gradle-plugin#build-image.examples.docker.podman gradle-plugin#build-image.examples.publish gradle-plugin#build-image.examples.runtime-jvm-configuration gradle-plugin#packaging-executable gradle-plugin#packaging-executable.and-plain-archives gradle-plugin#packaging-executable.configuring gradle-plugin#packaging-executable.configuring.including-development-only-dependencies gradle-plugin#packaging-executable.configuring.launch-script gradle-plugin#packaging-executable.configuring.layered-archives gradle-plugin#packaging-executable.configuring.layered-archives.configuration gradle-plugin#packaging-executable.configuring.main-class gradle-plugin#packaging-executable.configuring.properties-launcher gradle-plugin#packaging-executable.configuring.unpacking gradle-plugin#packaging-executable.jars gradle-plugin#packaging-executable.wars gradle-plugin#packaging-executable.wars.deployable gradle-plugin#publishing-your-application gradle-plugin#publishing-your-application.distribution gradle-plugin#publishing-your-application-maven gradle-plugin#publishing-your-application.maven-publish gradle-plugin#reacting-to-other-plugins gradle-plugin#reacting-to-other-plugins.application gradle-plugin#reacting-to-other-plugins.dependency-management gradle-plugin#reacting-to-other-plugins.java gradle-plugin#reacting-to-other-plugins.kotlin gradle-plugin#reacting-to-other-plugins.nbt gradle-plugin#reacting-to-other-plugins.war gradle-plugin#running-your-application gradle-plugin#running-your-application.passing-arguments gradle-plugin#running-your-application.passing-system-properties gradle-plugin#running-your-application.reloading-resources gradle-plugin#running-your-application.using-a-test-main-class maven-plugin#aot maven-plugin#aot.process-aot-goal maven-plugin#aot.process-aot-goal.optional-parameters maven-plugin#aot.process-aot-goal.parameter-details maven-plugin#aot.process-aot-goal.parameter-details.arguments maven-plugin#aot.process-aot-goal.parameter-details.classes-directory maven-plugin#aot.process-aot-goal.parameter-details.compiler-arguments maven-plugin#aot.process-aot-goal.parameter-details.exclude-group-ids maven-plugin#aot.process-aot-goal.parameter-details.excludes maven-plugin#aot.process-aot-goal.parameter-details.generated-classes maven-plugin#aot.process-aot-goal.parameter-details.generated-resources maven-plugin#aot.process-aot-goal.parameter-details.generated-sources maven-plugin#aot.process-aot-goal.parameter-details.includes maven-plugin#aot.process-aot-goal.parameter-details.jvm-arguments maven-plugin#aot.process-aot-goal.parameter-details.main-class maven-plugin#aot.process-aot-goal.parameter-details.profiles maven-plugin#aot.process-aot-goal.parameter-details.skip maven-plugin#aot.process-aot-goal.parameter-details.system-property-variables maven-plugin#aot.process-aot-goal.required-parameters maven-plugin#aot.process-test-aot-goal maven-plugin#aot.process-test-aot-goal.optional-parameters maven-plugin#aot.process-test-aot-goal.parameter-details maven-plugin#aot.process-test-aot-goal.parameter-details.classes-directory maven-plugin#aot.process-test-aot-goal.parameter-details.compiler-arguments 
maven-plugin#aot.process-test-aot-goal.parameter-details.exclude-group-ids maven-plugin#aot.process-test-aot-goal.parameter-details.excludes maven-plugin#aot.process-test-aot-goal.parameter-details.generated-classes maven-plugin#aot.process-test-aot-goal.parameter-details.generated-resources maven-plugin#aot.process-test-aot-goal.parameter-details.generated-sources maven-plugin#aot.process-test-aot-goal.parameter-details.generated-test-classes maven-plugin#aot.process-test-aot-goal.parameter-details.includes maven-plugin#aot.process-test-aot-goal.parameter-details.jvm-arguments maven-plugin#aot.process-test-aot-goal.parameter-details.skip maven-plugin#aot.process-test-aot-goal.parameter-details.system-property-variables maven-plugin#aot.process-test-aot-goal.parameter-details.test-classes-directory maven-plugin#aot.process-test-aot-goal.required-parameters maven-plugin#aot.processing-applications maven-plugin#aot.processing-applications.using-the-native-profile maven-plugin#aot.processing-tests maven-plugin#build-image maven-plugin#build-image.build-image-goal maven-plugin#build-image.build-image-goal.optional-parameters maven-plugin#build-image.build-image-goal.parameter-details maven-plugin#build-image.build-image-goal.parameter-details.classifier maven-plugin#build-image.build-image-goal.parameter-details.docker maven-plugin#build-image.build-image-goal.parameter-details.exclude-devtools maven-plugin#build-image.build-image-goal.parameter-details.exclude-docker-compose maven-plugin#build-image.build-image-goal.parameter-details.exclude-group-ids maven-plugin#build-image.build-image-goal.parameter-details.excludes maven-plugin#build-image.build-image-goal.parameter-details.image maven-plugin#build-image.build-image-goal.parameter-details.include-system-scope maven-plugin#build-image.build-image-goal.parameter-details.include-tools maven-plugin#build-image.build-image-goal.parameter-details.includes maven-plugin#build-image.build-image-goal.parameter-details.layers maven-plugin#build-image.build-image-goal.parameter-details.layout maven-plugin#build-image.build-image-goal.parameter-details.layout-factory maven-plugin#build-image.build-image-goal.parameter-details.loader-implementation maven-plugin#build-image.build-image-goal.parameter-details.main-class maven-plugin#build-image.build-image-goal.parameter-details.skip maven-plugin#build-image.build-image-goal.parameter-details.source-directory maven-plugin#build-image.build-image-goal.required-parameters maven-plugin#build-image.build-image-no-fork-goal maven-plugin#build-image.build-image-no-fork-goal.optional-parameters maven-plugin#build-image.build-image-no-fork-goal.parameter-details maven-plugin#build-image.build-image-no-fork-goal.parameter-details.classifier maven-plugin#build-image.build-image-no-fork-goal.parameter-details.docker maven-plugin#build-image.build-image-no-fork-goal.parameter-details.exclude-devtools maven-plugin#build-image.build-image-no-fork-goal.parameter-details.exclude-docker-compose maven-plugin#build-image.build-image-no-fork-goal.parameter-details.exclude-group-ids maven-plugin#build-image.build-image-no-fork-goal.parameter-details.excludes maven-plugin#build-image.build-image-no-fork-goal.parameter-details.image maven-plugin#build-image.build-image-no-fork-goal.parameter-details.include-system-scope maven-plugin#build-image.build-image-no-fork-goal.parameter-details.include-tools maven-plugin#build-image.build-image-no-fork-goal.parameter-details.includes 
maven-plugin#build-image.build-image-no-fork-goal.parameter-details.layers maven-plugin#build-image.build-image-no-fork-goal.parameter-details.layout maven-plugin#build-image.build-image-no-fork-goal.parameter-details.layout-factory maven-plugin#build-image.build-image-no-fork-goal.parameter-details.loader-implementation maven-plugin#build-image.build-image-no-fork-goal.parameter-details.main-class maven-plugin#build-image.build-image-no-fork-goal.parameter-details.skip maven-plugin#build-image.build-image-no-fork-goal.parameter-details.source-directory maven-plugin#build-image.build-image-no-fork-goal.required-parameters maven-plugin#build-image.customization maven-plugin#build-image.customization.tags maven-plugin#build-image.docker-daemon maven-plugin#build-image.docker-registry maven-plugin#build-image.examples maven-plugin#build-image.examples.builder-configuration maven-plugin#build-image.examples.buildpacks maven-plugin#build-image.examples.caches maven-plugin#build-image.examples.custom-image-builder maven-plugin#build-image.examples.custom-image-name maven-plugin#build-image.examples.docker maven-plugin#build-image.examples.docker.auth maven-plugin#build-image.examples.docker.colima maven-plugin#build-image.examples.docker.minikube maven-plugin#build-image.examples.docker.podman maven-plugin#build-image.examples.publish maven-plugin#build-image.examples.runtime-jvm-configuration maven-plugin#build-info maven-plugin#build-info.build-info-goal maven-plugin#build-info.build-info-goal.optional-parameters maven-plugin#build-info.build-info-goal.parameter-details maven-plugin#build-info.build-info-goal.parameter-details.additional-properties maven-plugin#build-info.build-info-goal.parameter-details.exclude-info-properties maven-plugin#build-info.build-info-goal.parameter-details.output-file maven-plugin#build-info.build-info-goal.parameter-details.skip maven-plugin#build-info.build-info-goal.parameter-details.time maven-plugin#getting-started maven-plugin#goals maven-plugin#help maven-plugin#help.help-goal maven-plugin#help.help-goal.optional-parameters maven-plugin#help.help-goal.parameter-details maven-plugin#help.help-goal.parameter-details.detail maven-plugin#help.help-goal.parameter-details.goal maven-plugin#help.help-goal.parameter-details.indent-size maven-plugin#help.help-goal.parameter-details.line-length maven-plugin#maven-plugin maven-plugin#integration-tests maven-plugin#integration-tests.examples maven-plugin#integration-tests.examples.jmx-port maven-plugin#integration-tests.examples.random-port maven-plugin#integration-tests.examples.skip maven-plugin#integration-tests.no-starter-parent maven-plugin#integration-tests.start-goal maven-plugin#integration-tests.start-goal.optional-parameters maven-plugin#integration-tests.start-goal.parameter-details maven-plugin#integration-tests.start-goal.parameter-details.add-resources maven-plugin#integration-tests.start-goal.parameter-details.additional-classpath-elements maven-plugin#integration-tests.start-goal.parameter-details.agents maven-plugin#integration-tests.start-goal.parameter-details.arguments maven-plugin#integration-tests.start-goal.parameter-details.classes-directory maven-plugin#integration-tests.start-goal.parameter-details.commandline-arguments maven-plugin#integration-tests.start-goal.parameter-details.directories maven-plugin#integration-tests.start-goal.parameter-details.environment-variables maven-plugin#integration-tests.start-goal.parameter-details.exclude-group-ids 
maven-plugin#integration-tests.start-goal.parameter-details.excludes maven-plugin#integration-tests.start-goal.parameter-details.includes maven-plugin#integration-tests.start-goal.parameter-details.jmx-name maven-plugin#integration-tests.start-goal.parameter-details.jmx-port maven-plugin#integration-tests.start-goal.parameter-details.jvm-arguments maven-plugin#integration-tests.start-goal.parameter-details.main-class maven-plugin#integration-tests.start-goal.parameter-details.max-attempts maven-plugin#integration-tests.start-goal.parameter-details.noverify maven-plugin#integration-tests.start-goal.parameter-details.profiles maven-plugin#integration-tests.start-goal.parameter-details.skip maven-plugin#integration-tests.start-goal.parameter-details.system-property-variables maven-plugin#integration-tests.start-goal.parameter-details.use-test-classpath maven-plugin#integration-tests.start-goal.parameter-details.wait maven-plugin#integration-tests.start-goal.parameter-details.working-directory maven-plugin#integration-tests.start-goal.required-parameters maven-plugin#integration-tests.stop-goal maven-plugin#integration-tests.stop-goal.optional-parameters maven-plugin#integration-tests.stop-goal.parameter-details maven-plugin#integration-tests.stop-goal.parameter-details.jmx-name maven-plugin#integration-tests.stop-goal.parameter-details.jmx-port maven-plugin#integration-tests.stop-goal.parameter-details.skip maven-plugin#packaging maven-plugin#packaging.examples maven-plugin#packaging.examples.custom-classifier maven-plugin#packaging.examples.custom-layers-configuration maven-plugin#packaging.examples.custom-layout maven-plugin#packaging.examples.custom-name maven-plugin#packaging.examples.exclude-dependency maven-plugin#packaging.examples.layered-archive-tools maven-plugin#packaging.examples.local-artifact maven-plugin#packaging.layers maven-plugin#packaging.layers.configuration maven-plugin#packaging.repackage-goal maven-plugin#packaging.repackage-goal.optional-parameters maven-plugin#packaging.repackage-goal.parameter-details maven-plugin#packaging.repackage-goal.parameter-details.attach maven-plugin#packaging.repackage-goal.parameter-details.classifier maven-plugin#packaging.repackage-goal.parameter-details.embedded-launch-script maven-plugin#packaging.repackage-goal.parameter-details.embedded-launch-script-properties maven-plugin#packaging.repackage-goal.parameter-details.exclude-devtools maven-plugin#packaging.repackage-goal.parameter-details.exclude-docker-compose maven-plugin#packaging.repackage-goal.parameter-details.exclude-group-ids maven-plugin#packaging.repackage-goal.parameter-details.excludes maven-plugin#packaging.repackage-goal.parameter-details.executable maven-plugin#packaging.repackage-goal.parameter-details.include-system-scope maven-plugin#packaging.repackage-goal.parameter-details.include-tools maven-plugin#packaging.repackage-goal.parameter-details.includes maven-plugin#packaging.repackage-goal.parameter-details.layers maven-plugin#packaging.repackage-goal.parameter-details.layout maven-plugin#packaging.repackage-goal.parameter-details.layout-factory maven-plugin#packaging.repackage-goal.parameter-details.loader-implementation maven-plugin#packaging.repackage-goal.parameter-details.main-class maven-plugin#packaging.repackage-goal.parameter-details.output-directory maven-plugin#packaging.repackage-goal.parameter-details.output-timestamp maven-plugin#packaging.repackage-goal.parameter-details.requires-unpack maven-plugin#packaging.repackage-goal.parameter-details.skip 
maven-plugin#packaging.repackage-goal.required-parameters maven-plugin#run maven-plugin#run.examples maven-plugin#run.examples.debug maven-plugin#run.examples.environment-variables maven-plugin#run.examples.specify-active-profiles maven-plugin#run.examples.system-properties maven-plugin#run.examples.using-application-arguments maven-plugin#run.run-goal maven-plugin#run.run-goal.optional-parameters maven-plugin#run.run-goal.parameter-details maven-plugin#run.run-goal.parameter-details.add-resources maven-plugin#run.run-goal.parameter-details.additional-classpath-elements maven-plugin#run.run-goal.parameter-details.agents maven-plugin#run.run-goal.parameter-details.arguments maven-plugin#run.run-goal.parameter-details.classes-directory maven-plugin#run.run-goal.parameter-details.commandline-arguments maven-plugin#run.run-goal.parameter-details.directories maven-plugin#run.run-goal.parameter-details.environment-variables maven-plugin#run.run-goal.parameter-details.exclude-group-ids maven-plugin#run.run-goal.parameter-details.excludes maven-plugin#run.run-goal.parameter-details.includes maven-plugin#run.run-goal.parameter-details.jvm-arguments maven-plugin#run.run-goal.parameter-details.main-class maven-plugin#run.run-goal.parameter-details.noverify maven-plugin#run.run-goal.parameter-details.optimized-launch maven-plugin#run.run-goal.parameter-details.profiles maven-plugin#run.run-goal.parameter-details.skip maven-plugin#run.run-goal.parameter-details.system-property-variables maven-plugin#run.run-goal.parameter-details.use-test-classpath maven-plugin#run.run-goal.parameter-details.working-directory maven-plugin#run.run-goal.required-parameters maven-plugin#run.test-run-goal maven-plugin#run.test-run-goal.optional-parameters maven-plugin#run.test-run-goal.parameter-details maven-plugin#run.test-run-goal.parameter-details.add-resources maven-plugin#run.test-run-goal.parameter-details.additional-classpath-elements maven-plugin#run.test-run-goal.parameter-details.agents maven-plugin#run.test-run-goal.parameter-details.arguments maven-plugin#run.test-run-goal.parameter-details.classes-directory maven-plugin#run.test-run-goal.parameter-details.commandline-arguments maven-plugin#run.test-run-goal.parameter-details.directories maven-plugin#run.test-run-goal.parameter-details.environment-variables maven-plugin#run.test-run-goal.parameter-details.exclude-group-ids maven-plugin#run.test-run-goal.parameter-details.excludes maven-plugin#run.test-run-goal.parameter-details.includes maven-plugin#run.test-run-goal.parameter-details.jvm-arguments maven-plugin#run.test-run-goal.parameter-details.main-class maven-plugin#run.test-run-goal.parameter-details.noverify maven-plugin#run.test-run-goal.parameter-details.optimized-launch maven-plugin#run.test-run-goal.parameter-details.profiles maven-plugin#run.test-run-goal.parameter-details.skip maven-plugin#run.test-run-goal.parameter-details.system-property-variables maven-plugin#run.test-run-goal.parameter-details.test-classes-directory maven-plugin#run.test-run-goal.parameter-details.working-directory maven-plugin#run.test-run-goal.required-parameters maven-plugin#using maven-plugin#using.import maven-plugin#using.overriding-command-line maven-plugin#using.parent-pom" } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Nightingale", "subcategory": "Observability" }
[ { "data": "Nightingale project is an all-in-one observability solution which aims to combine the advantages of Prometheus and Grafana. It manages alert rules and visualizes metrics, logs, traces in a beautiful web UI. Any issues or PRs are welcome! The Nightingale project is very open and can interact with common collectors in the open source community, such as categraf, telegraf, datadog-agent, grafana-agent, as well as common time series databases in the open source community, such as Prometheus, VictoriaMetrics, Thanos, as well as logging stores, such as ElasticSearch, Loki, as well as common notification mediums, such as Slack, mm, Dingtalk, Wecom. Nightingle can be used as an alert engine to make anomaly judgment on data. It supports to configure different effective time for alert strategies, multiple judgment rules can be configured within the same alert strategy, and multiple rules can be inhibited according to the severity. Nightingale can be used as a visualization tool, similar to Grafana, to view metrics and log data, and supports making dashboards, which support pie charts, line charts, and many other chart types. Nightingale has built-in alerting rules and dashboards for different middleware and databases right out of the box. Nightingale supports Kibana-like query exploration, which allows you to filter logs by keywords and filters." } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "New Relic", "subcategory": "Observability" }
[ { "data": "START HERE MONITOR YOUR DATA DATA INSIGHTS OTHER CAPABILITIES LATEST UPDATES ADMIN AND DATA The New Relic Query Language (NRQL) is a powerful tool you can use to query and understand nearly any type of data, but it can seem overwhelming at first glance. Don't worry! Here's some information to give you a foundational understanding of NRQL, including what it is, how to use it, and some tips and tricks that will help you get the most out of your queries. Once you've learned about NRQL, you can capture and interpret your data, letting you break down the big picture into easily understandable pieces and helping you identify problems as they occur. Here's a quick video to help introduce you to using NRQL by showing you how to find a query from a dashboard and modify it in the query builder. For more detailed information on querying, including a listing of clauses, functions, and example queries, see our NRQL reference. If you haven't already, create your free New Relic account below to start monitoring your data today! NRQL is an acronym of New Relic query language. It's a query language similar to ANSI SQL (see the syntax), and you can use it to retrieve detailed New Relic data to get insight into your applications, hosts, and business-important activity. NRQL can help you: You can use NRQL to create simple queries, such as fetching rows of data in a raw tabular form that gives insight on individual events. You can also use NRQL to run powerful calculations on the data before it's presented to you, such as crafting funnels based on how end users interact with your site or application. We use NRQL behind the scenes to generate many of the charts and dashboards in our curated UI experiences: We build many of the charts and visualizations within New Relic using NRQL. You can view a chart's query and then edit it to make your own custom chart as a quick way to get started using NRQL. You can use NRQL across the platform to access your data. Those places include: one.newrelic.com > All capabilities > Query your data You can run a NRQL query in the query builder within the platform. This NRQL query shows a count of distributed tracing spans faceted by their entity names. one.newrelic.com > User profile > NRQL console > Show You can run a NRQL query from anywhere within New Relic using the NRQL console. This allows you to quickly query your data without leaving your current screen. one.newrelic.com > All capabilities > Alerts > Alert conditions (Policies) > (select a policy) > Add a condition. Click NRQL, and then Next, define thresholds. You can use NRQL to build NRQL-based alerts, our primary and most powerful alert type. This will help to notify you of issues and help you address them in a timely fashion. You can also use NRQL with our NerdGraph API. This gives you more powerful features than querying in the UI (for example, cross-account querying, and asynchronous queries). NRQL is one of several ways to query New Relic" }, { "data": "For more on all query options, see Query your data. If you're already familiar with writing SQL queries, you'll be happy to know that NRQL has a lot of similarities. Here's a quick breakdown of the structure of a NRQL query: ``` SELECT function(attribute) FROM data type [, ...] 
  [FACET attribute | function(attribute)]
  [LIMIT number]
  [SINCE time]
  [UNTIL time]
  [WITH TIMEZONE timezone]
  [COMPARE WITH time]
  [TIMESERIES time]
```
Here are the rules that NRQL follows:

| NRQL rule | Details |
|:--|:--|
| Required values | The SELECT clause and FROM clause are required. All other clauses are optional. You can start your query with either SELECT or FROM. |
| Query string size | The query string must be less than 4 KB. |
| Case sensitivity | The data type names and attribute names are case sensitive. NRQL clauses and functions are not case sensitive. |
| Syntax for strings | NRQL uses single quotes to designate strings. For example: `... where traceId = '030a573f0df02c57'` |
| Non-standard custom event and attribute names | Events that we report by default have names that contain alphanumeric characters, colons (:), and underscores (_). Attribute names can have those characters and periods (.). Default-reported names start with a letter. Custom names that don't follow these guidelines must be enclosed with backticks in NRQL queries. For example: ``... FACET `Logged-in user` `` |
| Data type coercion | We don't support data type \"coercion.\" For more information, see Data type conversion. |

If you need any more information, you can check out our NRQL reference to help you build your queries. NRQL lets you query nearly every type of our telemetry data, including events, metrics, logs, and distributed tracing (span) data. Some data, like relationships between monitored entities, is not available via NRQL but is available using our NerdGraph API. Ready to learn more? We have information on how to use NRQL and how to use charts and dashboards with NRQL. If you want to start using NRQL right away, jump straight into our guided NRQL tutorial." } ]
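As a quick illustration of the clause order described above, here is a hedged Python sketch that assembles an NRQL string from its parts. It is not an official New Relic helper; the event type Transaction and the attributes duration and appName are common New Relic names used here purely as an example and are not taken from this page.

```python
# Minimal sketch: compose an NRQL query string following the clause order shown
# in the syntax skeleton above (SELECT, FROM, FACET, LIMIT, SINCE, TIMESERIES).
from typing import Optional

def build_nrql(select: str, from_: str, facet: Optional[str] = None,
               limit: Optional[int] = None, since: Optional[str] = None,
               timeseries: Optional[str] = None) -> str:
    parts = [f"SELECT {select}", f"FROM {from_}"]
    if facet:
        parts.append(f"FACET {facet}")
    if limit:
        parts.append(f"LIMIT {limit}")
    if since:
        parts.append(f"SINCE {since}")
    if timeseries:
        parts.append(f"TIMESERIES {timeseries}")
    return " ".join(parts)

query = build_nrql(select="average(duration)", from_="Transaction",
                   facet="appName", since="1 hour ago", timeseries="5 minutes")
print(query)
# SELECT average(duration) FROM Transaction FACET appName SINCE 1 hour ago TIMESERIES 5 minutes
```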
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "OpenSearch", "subcategory": "Observability" }
[ { "data": "OpenSearch provides clients in JavaScript, Python, Ruby, Java, PHP, .NET, Go, Hadoop, and Rust. The OpenSearch Java high-level REST client will be deprecated starting with OpenSearch 3.0.0 and will be removed in a future release. Switching to the Java client is recommended. OpenSearch provides clients for the following programming languages and platforms: Clients that work with Elasticsearch OSS 7.10.2 should work with OpenSearch 1.x. The latest versions of those clients, however, might include license or version checks that artificially break compatibility. The following table provides recommendations for which client versions to use for best compatibility with OpenSearch 1.x. For OpenSearch 2.0 and later, no Elasticsearch clients are fully compatible with OpenSearch. While OpenSearch and Elasticsearch share several core features, mixing and matching the client and server has a high risk of errors and unexpected results. As OpenSearch and Elasticsearch continue to diverge, such risks may increase. Although your Elasticsearch client may continue working with your OpenSearch cluster, using OpenSearch clients for OpenSearch clusters is recommended. To view the compatibility matrix for a specific client, see the COMPATIBILITY.md file in the clients repository. | Client | Recommended version | |:|:-| | Elasticsearch Java low-level REST client | 7.13.4 | | Elasticsearch Java high-level REST client | 7.13.4 | | Elasticsearch Python client | 7.13.4 | | Elasticsearch Node.js client | 7.13.0 | | Elasticsearch Ruby client | 7.13.0 | If you test a legacy client and verify that it works, please submit a PR and add it to this table. Thank you for your feedback! Have a question? Ask us on the OpenSearch forum. Want to contribute? Edit this page or create an issue." } ]
{ "category": "Observability and Analysis", "file_name": "1SYKfjYhZdm2Wh2Cl6KVQalKg_m4NhTPZqq-8SzEVO6s.md", "project_name": "OpenTelemetry", "subcategory": "Observability" }
[ { "data": "| Unnamed: 0 | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | AA | AB | |-:|:|:--|:|:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|--:|--:| | 1 | Name | Start time | Duration | URL | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 2 | Profiling WG | 2023-06-29 7:57:35 | 62 | https://zoom.us/rec/share/eU4AGmnoupvj7AKyRY4jnEcSV7ojA9IYB2QTAz0NJRIVCd4XHBIh4QoRi3NOzMUS.axf76323nlzG-G | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 3 | Instrumentation: Messaging WG | 2023-06-29 8:00:23 | 61 | https://zoom.us/rec/share/rOVBypdmUA-60stCE3G0GMfXJ48sYIz88jsz7tTAROP56SvmWSEijdRMs8AZ0FaJ.SL_pbtH7dBkx4Lse | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 4 | Java SIG | 2023-06-29 8:59:26 | 86 | https://zoom.us/rec/share/mNCtM6MOH5G-5nC9ysPzmhfnYkBKKOGI_2OE6Mu-4IzwXIFS4Hr-83sztAfU9cii.bSEd9FWXpNcdUSRl | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 5 | Python SIG | 2023-06-29 8:59:41 | 60 | https://zoom.us/rec/share/IsNhaNLNykfd7GTqzqG760Pb3tgVQJVM4q0taJvQVZA3vK91rnYj618H0mupN1L.ACxg9BW84vns5Nu | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 6 | Swift SIG | 2023-06-29 8:59:47 | 63 | https://zoom.us/rec/share/HzlBsKJuSMms1mzj2N8A0IJ9f14FNPcgj8IA1GcmxjaixZvJP-tDDZ-B3PnsrmP0.FfydjngZir6zYcxM | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 7 | Go SIG | 2023-06-29 9:56:46 | 63 | https://zoom.us/rec/share/L3QNpX-LPnLJ87F2XU0Cn9qtqGgokEkjYLzEXNoPp91DONqiuGG5ACPi-C-ug.23orWyTqI3P8ovm- | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 8 | End-User WG | 2023-06-29 9:57:35 | 62 | https://zoom.us/rec/share/R2BhOljDcuEgLeqWTtVn2DUxCKWFBNrVW_DsQ1wmv0lNmC94CzzCkWHHL4YjHTU.L4CxJDIlIsOTwWNo | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 9 | Governance Committee | 2023-06-29 11:31:05 | 31 |" }, { "data": "| nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 10 | Semantic Convention WG | 2023-07-03 8:00:01 | 16 | https://zoom.us/rec/share/yVl2Lj02TfeR-OycL6pMEhlrM6IMRgnbOk3bw8Qlb7t2ddxWQSgh0NdONTtWy0S-.QowwRddtx8BHbNZv | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 11 | Maintainers + Community Weekly Meeting | 2023-07-03 9:00:08 | 6 | https://zoom.us/rec/share/5PeaqeShftpL0iDtn0OirJ7aVvwXIwA-x0FZp7SdKYXYWBT2gwPmEi64lwaven0D.XbJNXee1F5tiVM | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 12 | Communications SIG | 2023-07-03 9:59:45 | 32 | https://zoom.us/rec/share/cN2zhY7iPfSWolPxo-Ysbgcjc0pC9JOlvKWoKfBOEAl32gsUZ4GDqAL02W3-e4xM.GY-ah40KpxKXW5Zy | nan | nan | nan | 
nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 13 | Collector SIG (EU-APAC) | 2023-07-05 0:00:41 | 23 | https://zoom.us/rec/share/5H4AN8F1NtfWr6iExpyHBObO08LpPKROiZPgiAXWzhKPZ8JNPoxuufHd5E0GXTs.ONX3HwB7sQhRKQu | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 14 | PHP SIG | 2023-07-05 4:58:58 | 13 | https://zoom.us/rec/share/zSC5NiK2pdsRmGyyxJtz7OtAOIJx8hTYB4dnVskiKXRSUrjqRX5lK1CK978sDPig.oRGQ-ip0Qg8ACaZz | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 15 | Security Governance SIG | 2023-07-05 8:30:05 | 33 | https://zoom.us/rec/share/xuDTxF8wKeZuxR8f4MESZO3xf9LwBERPXgOyVmxgLwklx3ddv7yxYu8FsmIzMjxX.GRWF3DFMTrOv12_Q | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 16 | JavaScript SIG | 2023-07-05 8:57:12 | 16 | https://zoom.us/rec/share/O5s7jbg4JdPG6VjLN45ZxmFiRJiSLy3LHqmGNN3Qs-8GcpTbkclw9Y_3Y0HTbcWf.UyzDJ5s5Q-i3t0zw | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 17 | Collector SIG | 2023-07-05 9:00:19 | 26 | https://zoom.us/rec/share/sJpjMaWO9uCGd5ZSCZ-SbPwABFU60t8gzkyYj6SKqY--3xte9qoxbJLXnDFJJR8e._A9-KoaeWP-XhcAx | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 18 | .NET Auto-Instr SIG | 2023-07-05 9:00:32 | 9 |" }, { "data": "| nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 19 | Sampling SIG | 2023-07-06 8:00:50 | 17 | https://zoom.us/rec/share/zSUWa6Nb56CLYLLctQrtVtMFoCBt9zWaS_Ds2abQskzjea2IazLjJiogBzbMaDap.Mr3Gq0SaZQ-8qrQ7 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 20 | Java SIG | 2023-07-06 8:59:21 | 61 | https://zoom.us/rec/share/4oipoUziknyQscQXO1GgywQZluqHMqHAOWEx6SYV9KcAZwe681pIV6KqwjXZ22.mb49nZAZ77rVxG42 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 21 | Kubernetes Operator SIG | 2023-07-06 8:59:35 | 53 | https://zoom.us/rec/share/uQcnskvcSuj0n68-w0sb4rL-3kjsKRvwgEvSUCDTmfYCOiZ4hpc4T-9ITS7AeX3a.Rwy4Mp51dIYvyN4g | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 22 | Swift SIG | 2023-07-06 9:00:07 | 5 | https://zoom.us/rec/share/Ib0tluhKriHPmxqr8l20T2T-gW2eU2uBqD3hK2e1PdkWSVaIZp6HQnXYWFgkc3mU.MZr3KrAteNq4ZPrS | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 23 | Python SIG | 2023-07-06 9:00:41 | 51 | https://zoom.us/rec/share/nIvu8AjOof3APpPGfj757gZET1N2xypnKLgWN7SCtJZHzpm3btkM6vPvyJcbmHBi.dBAsTwc7tq3PHrGL | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 24 | Governance Committee | 2023-07-06 11:21:13 | 69 | https://zoom.us/rec/share/Jtr-lWh4HzE6hXJVLE4os21IFpxYstK30zJC1rgxKyUSeAawAGFuLtAHPF3XM4I.kqU9LybGdZ9b89 | nan | nan | 
nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 25 | Configuration WG | 2023-07-10 7:59:54 | 34 | https://zoom.us/rec/share/i9bmohHRQMTmbDb-KGGLDCnWvAoMC1mUkeRFm4HIrYk3nG0MBDPg9TozLQlfpCyR.Nv06F2YEq7h_bpUs | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 26 | Semantic Convention WG | 2023-07-10 8:00:24 | 45 | https://zoom.us/rec/share/6wPbZv4VtjN5IBRMFlCD0bnfWpS8z-UDL4RycZW4vScy6gG7GNTooo2T2snnT.Xzp7dndZl_nYlDg | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 27 | Maintainers + Community Weekly Meeting | 2023-07-10 8:59:32 | 48 |" }, { "data": "| nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 28 | Specification SIG | 2023-07-11 7:59:32 | 45 | https://zoom.us/rec/share/MUCzZhre2ReWqfHkfw3eqvqaswH-PJIFX6lNWl8Cz0gQoVH-WYKZp_v2dFXf8WRo.DSHNMuXvGeKDu3Ts | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 29 | Rust SIG | 2023-07-11 8:54:01 | 42 | https://zoom.us/rec/share/sfoW6Sry0tSLRkuJ6prstprIHgSpzPFqHWzr6LBADmL0YH0-uXrqyGOwQYP_Isiq.XmIh1-YJhextM4Rt | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 30 | RUM WG | 2023-07-11 9:00:04 | 31 | https://zoom.us/rec/share/AQENigcNGoukRCmoKvlO9azgAYCRKQLPKo3dt8UBPUnfNfN4n2URbS8ZlRh8uX1q.1cZFARgRFYRfT18r | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 31 | eBPF SIG | 2023-07-11 9:00:19 | 12 | https://zoom.us/rec/share/YxUpSHn-qwRVmoT8Melf3PtSq9FV2KWd_rWM31PKABwHqpo8E2kvMlyaOvzGvQ.v4haydFChWVTvbEE | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 32 | Ruby SIG | 2023-07-11 9:01:29 | 66 | https://zoom.us/rec/share/evI2lEjsxKpqxSLIE4Wyl9Y9A6bSgQFPGBXtzemw3hItw7wOOTGh1yWr0HjyHZ.hS-69jnXRGu7amiO | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 33 | Go Auto-Instrumentation SIG | 2023-07-11 9:30:29 | 36 | https://zoom.us/rec/share/zVoQndxBg0LTdJlSZqEoThGCtDt2cbohNXUyMK93bC5jcGe81SKVAbFp6Fsa8wkD.02wWIxcaCZGDPsvD | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 34 | Agent Management WG | 2023-07-11 10:58:01 | 22 | https://zoom.us/rec/share/qOrIuJ-LogYe8pgUp1GXb7iLl9BfqDG-Lybso8Tftn0y68oB0NCRS6WkGfjz3p.oNrpiSU7aO1lRbaF | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 35 | .NET SIG | 2023-07-11 11:00:52 | 48 | https://zoom.us/rec/share/Y-VmcMN1VxbS4qYMDhEuLzlRdOnqbPb3GovN-iVxgnaedPn0TLVijdN5JwD0qBtD.ooD_ewhi9D1odK5c | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 36 | FAAS WG | 2023-07-11 11:58:41 | 40 |" }, { "data": "| nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 
nan | nan | nan | nan | nan | nan | nan | nan | | 37 | PHP SIG | 2023-07-12 4:58:33 | 12 | https://zoom.us/rec/share/9w3ZzViB0q7u5VNYollORhB_BVkfhG7jiGPUW4m5uDtft9wQMBsX2NGs1k8X5NWc.AudMPglgXm2hDWdP | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 38 | Security Governance SIG | 2023-07-12 8:29:59 | 26 | https://zoom.us/rec/share/tgYAhS1Gv4p6coq1vCxNKTUGqf1886FL6RIB-zlPCFNit3pThpNJuhcsylm2MUYB.yBVPCXGck9JOvEib | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 39 | JavaScript SIG | 2023-07-12 8:55:07 | 30 | https://zoom.us/rec/share/wis4vPYdOBTIITUjuKSmI_iKDk-12KJFZOBb9SGLWunrJjbp9dT5xOayXi8ddmEL.kmrMuQTZIkH7bzVZ | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 40 | Collector SIG | 2023-07-12 8:59:23 | 52 | https://zoom.us/rec/share/8U-UE9CF1wxA7P3yqKPXdxO2rKNlSzhpQhireSDsmxAtXUlhp5rrHdjo7DChToZs.J97TxnmPSDXuCm | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 41 | C/C++ SIG | 2023-07-12 8:59:32 | 103 | https://zoom.us/rec/share/pxalkhhhjIsJGrZ4LawoMU8QBCXFpb-JyDB7h-l4ZZB44fMkLI6gNrEcGcCjAitR.cqcfiAqokII4iSZs | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 42 | .NET Auto-Instr SIG | 2023-07-12 9:00:10 | 50 | https://zoom.us/rec/share/mQHvTTrwTVfpXUTiFAEby6SWUMQMFkqSc-CYQXWn359l63xrhPEYD5FOAj-qYifh.CQDWcFlcA72uN6c7 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 43 | Profiling WG | 2023-07-13 7:57:08 | 52 | https://zoom.us/rec/share/ZD_1qrlhizweMiLGVfQsKDL2KPU9uqRdeG5Ovv6od9okKuceUkCKuXwMdVvxhnht.JQ56u3yc35dzvwnE | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 44 | Instrumentation: Messaging WG | 2023-07-13 7:59:20 | 54 | https://zoom.us/rec/share/R6gdAtdaJHJHGRDM3MyglTLZVIpEINJbjl15vdZwNn35hI9DCzKIdHkFH-btI18Q.frhyhsYLtIqLW6d- | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 45 | Sampling SIG | 2023-07-13 8:00:19 | 28 |" }, { "data": "| nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 46 | Python SIG | 2023-07-13 8:58:10 | 62 | https://zoom.us/rec/share/mjpKhVyAtQdhK7bQZHWD7vlUWCHn0ljg1TtpkvOS7GfkQJTjJcnBbYpSvZ7YEv.BvflaWq6rm4H75J6 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 47 | Java SIG | 2023-07-13 8:59:33 | 60 | https://zoom.us/rec/share/0IBIpnzS3807kUWAcIs2kJJY-2cO2aG7YTCleAsTJIAUTuVnHSCWsCUkQRiY1UWD.h5eLwo4EqQGxOhPY | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 48 | Go SIG | 2023-07-13 9:59:58 | 39 | https://zoom.us/rec/share/4CmLfT9vMmRrym7MijIBZeqgmamMzikuP9Dwd4e3CZpsL27CyK2l25EUpKdczr8J.PpBXR-oINnU3jPyG | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 
nan | nan | nan | nan | nan | nan | nan | | 49 | Governance Committee | 2023-07-13 11:27:48 | 63 | https://zoom.us/rec/share/Ri9sWq2uQTIqXo_tFuyuCKxZZiNXwu2WeqdDpwm1RNncJJAQqDkuiYPXItnFe4aw.mQT2ngoXsxTdrUdA | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 50 | Semantic Convention WG | 2023-07-17 7:59:31 | 58 | https://zoom.us/rec/share/sHjugnk2OHBr9uc5Mz9BPe9PaNUfqJQEZeJm6uzMavtkT9Kc-Q3v5uL786gKj5ku.PBtczyUqIaamGOWm | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 51 | Maintainers + Community Weekly Meeting | 2023-07-17 9:00:11 | 21 | https://zoom.us/rec/share/Aopxg6rCapCcbFcX2eU61x0ovZw5EtC7ee_v0MHKD2woSZQsjXKCLKQyke3hhlVg.fWDGaJHnnGmwGlKx | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 52 | Communications SIG | 2023-07-17 9:58:25 | 31 | https://zoom.us/rec/share/05J4ZnkoAb6O4kN9WM-AB803-PcvFeHnOM3nudWh-SiJJ3ql7mkIhgsrrNrlPtg.bWHkHAskIdkEoomU | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 53 | Client Instrumentation SIG | 2023-07-17 12:30:04 | 31 | https://zoom.us/rec/share/cjt4bb3X6YW0pWFzyW9t44zPyDiRQld37zBeQr77ESh0EBCbmXz8JUBvG1lSfU.wlXiNLsHilw4ffF | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 54 | OpenTelemetry C/C++ SIG | 2023-07-17 12:59:35 | 24 | https://zoom.us/rec/share/cPgaShHnlgOQKSZYpy1YZ7nJY2B3UBw6uEAXyMdK2Q3EPXbXl7F0B718ZjLxiQVI.sMbl3waGjSjDlUFq | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 55 | Go Auto-Instrumentation SIG | 2023-07-18 7:00:10 | 15 |" }, { "data": "| nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 56 | Specification SIG | 2023-07-18 7:59:48 | 41 | https://zoom.us/rec/share/SlntZfDOVQ2b-7rrR5ywvwoGnOedldj0dvftbbpWF6uux1M7NJpe5k01jFvqRfc.3DZiOoLqdhUzxXB1 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 57 | Ruby SIG | 2023-07-18 8:54:51 | 66 | https://zoom.us/rec/share/wbUpw4b3qPlFShfaP4pudeQnuF-5xnbfZAJGLWelj-G4rQpYlsnC3G0Z6ZX3Jk.hRg0VcKhQ6cCJeZm | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 58 | Rust SIG | 2023-07-18 8:59:44 | 8 | https://zoom.us/rec/share/JXaWuFkmp7NnlYpRYIQBzAJRUGQ-p09CBlvLX8Tubx4PG9ifMG3DgsPqr2loXaoM.SQqRprzwVMrodb7E | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 59 | RUM WG | 2023-07-18 9:00:16 | 34 | https://zoom.us/rec/share/8tHJSmWfq4-2yMTO57yHCnDpTcmrR7oiyv0DKg7rLPlAoglVo04OCgby42Fj.g69m3WIoMjEz4mjo | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 60 | FAAS WG | 2023-07-18 11:55:17 | 20 | https://zoom.us/rec/share/00ux3zTkmGwfOmSwVNF952Xx-0GkY7bBYTFSasIHVGLhPozUwmZlh5oCssGN1S.sG5SUmSBD5cJ8GQJ | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 
nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 61 | .NET SIG | 2023-07-18 16:00:24 | 37 | https://zoom.us/rec/share/Va_AXC08XH3jUCZsXQjGAZRp-ZEis5uPS1zNSmwyagxn1XfEDpi8EaXo4wZeu8fK.dwA9em33XWuVYNQi | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 62 | PHP SIG | 2023-07-19 4:57:09 | 17 | https://zoom.us/rec/share/lQuiJSXpCk1K7Fs3RqKpj4wgJbZcKQkyxw3LDJYHVZjo3WBONAKltpwinR05sBMu.eMDFVZtGxDqPkN_q | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 63 | Prometheus WG | 2023-07-19 7:57:40 | 25 | https://zoom.us/rec/share/WMUpMftN1x-5b-SGsBL95z-tbKo5kCwaJa3meQAzjJ4mhkqH2bb9X-RaHJucqR.0evU93V68Ld179wf | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 64 | Security Governance SIG | 2023-07-19 8:27:02 | 34 |" }, { "data": "| nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 65 | System Sem Conv Stability WG | 2023-07-19 8:28:53 | 18 | https://zoom.us/rec/share/ZBZbnQ5pO09xPYMy8Ec0P7LX5MFSGWp6rLZvv8LcgdAZnJOZBh_ug8K5ygtxCJV9.y4Rs9bMEj7E75yD1 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 66 | .NET Auto-Instr SIG | 2023-07-19 8:59:38 | 45 | https://zoom.us/rec/share/xXlOyXW6DdxDMMVggVjtmVAYx5JEX2tz9gK7Gngz6jg-gxZCj2gsMbC1_nej.bkHT6G6rJys4lx1H | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 67 | Collector SIG | 2023-07-19 8:59:52 | 31 | https://zoom.us/rec/share/cfwlrlD-7r-EBcarvhD-GhPNzbRoUhdDkIwUDYxx5FpmHZAdD098vUVBaKab98.bP2Y63Ns_r-iUwGF | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 68 | Python SIG | 2023-07-19 8:59:53 | 7 | https://zoom.us/rec/share/rFq2o2ugpKaguoYQHPyttmzjh1Wn5WXbEZyHssok3X0T8z5fjUvjTjk-vlp3zb.0f05IH6mA7zxyA4t | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 69 | JavaScript SIG | 2023-07-19 9:00:17 | 45 | https://zoom.us/rec/share/xcGhWfa8-oZ6em7mnvOKgDqOnyBi7w-SG2VQtsi3VxVpqBLNUc31Q4GeRB4kO99i.Bbw3OeLGJymM4sZQ | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 70 | Collector SIG | 2023-07-19 12:17:58 | 13 | https://zoom.us/rec/share/zpnlU84KZ9wVrTC9COSHgU9EUTWY50IHw8QDLmxiDwXBcMLteGgVOdLqBDFpQGG8.NI4cNojcMd0z4YrQ | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 71 | Sampling SIG | 2023-07-20 8:00:51 | 20 | https://zoom.us/rec/share/jEYAWGUHGSfxycVEYh3msMDFxunyOqWLsvSfO4PQE6S4njZUnie7mFs_jKEFs1U.pPQ82jEosTuPv-M0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 72 | Instrumentation: Messaging WG | 2023-07-20 8:04:52 | 40 | https://zoom.us/rec/share/J6846R4y76ZIkPKu6yEzQHLFLBcouMYhsKyyPGuA0NjMhrHfPB9qRJ-BkSVqqK6R.Xu7hbVBqHcJv57u3 | nan | nan | nan | nan | nan | nan | nan | nan | 
nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 73 | Python SIG | 2023-07-20 8:57:49 | 36 |" }, { "data": "| nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 74 | Kubernetes Operator SIG | 2023-07-20 8:59:05 | 31 | https://zoom.us/rec/share/OdZ7eNXmuyig8U0Cj4H6qut1PF6lrdeXNG0992wE0uxkRK-sJL-PexYED6Fhv_.bjF1KPbG-QAITmHq | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 75 | Java SIG | 2023-07-20 8:59:17 | 55 | https://zoom.us/rec/share/icxcWEptF0GK6FSbJmuMBpg8L1IPdvUNvl1-ztVfb-3gztJQMDvA2eg5gPmddl9f.PNPvPEwSpiESEjCT | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 76 | Swift SIG | 2023-07-20 9:00:21 | 60 | https://zoom.us/rec/share/qmtguigjCE1hh9TG43Gg17JUNU8jNvIfmDB2eF3IiXbV0vrxkUdcuWA3vIZRooF.K0Ad5hGQlDb0YcN | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 77 | Go SIG | 2023-07-20 9:59:56 | 60 | https://zoom.us/rec/share/Dk1Re6TKsGJ4i0lMo3pXfoJJo1cNvo63_GUTYwCUMenxqq8bxfSFtB5Sq91fJUdq.wfT41xRlDf18bKMM | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 78 | End-User WG | 2023-07-20 10:04:25 | 41 | https://zoom.us/rec/share/gVhEP42tjHJNTlGXiLfaHJaSdEKiotUYLUUfjRwDv0fxY9aNz2cBYcQ7hR63YP8u.lAZumDg2En6Keele | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 79 | Governance Committee | 2023-07-20 11:28:43 | 10 | https://zoom.us/rec/share/uunoGKzQ-taP3ZQPoc857sz7uSn9C0_JZlaQBuBg6tKWZINjGI-ka0W1OZfNpfzO.FH22KviXXEpnzphb | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 80 | Semantic Convention WG | 2023-07-24 7:59:18 | 66 | https://zoom.us/rec/share/gVMNkhWAieU0yx3C1bWTashWvecADtsNsPrqsua-c4JIY-rhhNIB4Wy1uHWFV6.cf7BPN59VeyFThma | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 81 | Configuration WG | 2023-07-24 8:01:06 | 3 | https://zoom.us/rec/share/tDI2Ypctpj8GaUptyZTmF5R3ProXomdsuPqkANMPgl3f7jpwAOS3PgHJM6mkNUjH.4lA53GKJGoLe9qsc | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 82 | Community Demo App SIG | 2023-07-24 8:14:24 | 40 |" }, { "data": "| nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 83 | Maintainers + Community Weekly Meeting | 2023-07-24 9:00:15 | 54 | https://zoom.us/rec/share/3HwFW7J6edYHsiCkzAD2aw8Qj5k_TUbO98bdBmK3VZGRVqo0nBvk4dln2EaXKs.eqsIMI9KBrfBM-MT | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 84 | Specification SIG | 2023-07-25 8:00:09 | 66 | https://zoom.us/rec/share/zjvhV13RsJHFNNg5Gu-XowOM2YjNW2jGZN436Besd4E36uZk8GCfawt9pGXn7mlq.IoNZPEfMhdMagr7F | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 
nan | nan | nan | nan | nan | | 85 | Ruby SIG | 2023-07-25 8:59:22 | 66 | https://zoom.us/rec/share/msE6SgnGoGtE4zAvmvNsoPllKFi_UHKOFfwPWi9xgWMVO631vmtFcSvq0XQw1bKH.2ccOnhv3JZHFrpaQ | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 86 | RUM WG | 2023-07-25 9:00:35 | 45 | https://zoom.us/rec/share/0lNfihXXDoAtS8uSY-N2tQtUqyUchtlM4Ul2nOeGj2IJtj18VoPz2joAO4m0Kgzh.t7VuEhxX2KjlC3b2 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 87 | Rust SIG | 2023-07-25 9:01:35 | 38 | https://zoom.us/rec/share/-T5zlfmlpNdU0NZev9JGRTmnhjBFLU74BSpWGqMQTnCCnz-eU4lYRHyvc9n81vE1.EQdgbaXMMHJTIM | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 88 | Go Auto-Instrumentation SIG | 2023-07-25 9:29:56 | 19 | https://zoom.us/rec/share/aqgc70m1Y3HaplH4bsv4evBHviE21Kfr5vyGk4drRxbJJ3iuQ0bdo25Qq54NezNg.u0aEoM8u_9buJoiO | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 89 | Agent Management WG | 2023-07-25 10:56:20 | 17 | https://zoom.us/rec/share/dtbSVfbv67q8bF3f-j1RxjdnxLWbycD4b4YvY7WcwhrIT9zNLySZWKWVxDR8.sGzKzACGExa_vZ7X | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 90 | .NET SIG | 2023-07-25 10:59:58 | 40 | https://zoom.us/rec/share/XKWQDtsETpy26rJbYIuBxB8oWD25_SxkAlo0alxIBRhmUu7X52pRV-MhGZkry2AR.BkpBmBavVFylzHgr | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 91 | PHP SIG | 2023-07-26 4:59:27 | 71 |" }, { "data": "| nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 92 | Java SIG | 2023-07-26 8:00:24 | 55 | https://zoom.us/rec/share/Gy6BS3hZtWzHiKKafpiqdI0z-iHeEVhYMXwzB89XmU5eJbCFBlkbcDEMT0QrVnmK.4SclFe5qmByJYQDz | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 93 | Security Governance SIG | 2023-07-26 8:29:15 | 25 | https://zoom.us/rec/share/E8T5eQ3g9Se-PQmmbiPDXWo2crny8ksGmhq3ivhRa90XiCT_1tJAlNJrp2a2VmiT.j-1scHdHG3SbaKHh | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 94 | System Sem Conv Stability WG | 2023-07-26 8:30:58 | 14 | https://zoom.us/rec/share/pRRXU5oFSyE5jU8MzV4uRNgUdlQNUsJOq1gHc-MQ1MUXPIjptr064oQkOf8eY.endrvUibLLSyCsB | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 95 | JavaScript SIG | 2023-07-26 8:56:16 | 53 | https://zoom.us/rec/share/yIuAdW5e4HviAP4wfzaaUHRAVcj4PGDSPZdusq7outX2HMpLp24Q8r_zFEKIbrgd.dhYXwItEjbPRqLdW | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 96 | Collector SIG | 2023-07-26 8:59:15 | 45 | https://zoom.us/rec/share/G7Mckc6XlVQ2D0qtZk71rDJVk9o5c9PrRS1TwhL4f3Pr3FMCscljI8Q0vBU0LrZx.wF9i8qYSRb-Fyq8H | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 
nan | nan | nan | nan | | 97 | C/C++ SIG | 2023-07-26 9:00:07 | 34 | https://zoom.us/rec/share/9T1RU4Nk0M3i8PGziUuqIqrR5hR-zIjvfR4v0VsIEuSSbZLuzMQElk90YliEX7y8.9rVvlDfHHfQi7xOY | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 98 | .NET Auto-Instr SIG | 2023-07-26 9:00:14 | 33 | https://zoom.us/rec/share/uhkh5SnN8kzP2PsKUh-pRyiOXm2a2uXYasVnY-Eco9TOGyYfqkqAF5bNxlkMmE.JTVkM5JGbuwUbfzQ | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 99 | Profiling WG | 2023-07-27 7:59:10 | 43 | https://zoom.us/rec/share/djLuamhMpPjmo7A5lT-cht3xTgMLXQvA800bUbzcKUx1hg6uKKMvfqRglGh5LS.Cdr3U5S2D-Iv3pLn | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | | 100 | Sampling SIG | 2023-07-27 8:02:32 | 7 | https://zoom.us/rec/share/cbQIVGk26pcpRP6euFWIaYBtjxgwRQWIBaFpxOof2LmCJ8aqG15-t-iz2sYGs-dG.2F73u-o8Kgg-83PV | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan" } ]
{ "category": "Observability and Analysis", "file_name": "core.md", "project_name": "OpenTelemetry", "subcategory": "Observability" }
[ { "data": "OpenTelemetry code instrumentation is supported for the languages listed below. Depending on the language, topics covered will include some or all of the following: If you are using Kubernetes, you can use the OpenTelemetry Operator for Kubernetes to inject auto-instrumentation libraries for .NET, Java, Node.js, Python, Go into your application. The current status of the major functional components for OpenTelemetry is as follows: | Language | Traces | Metrics | Logs | |:--|:|:|:| | C++ | Stable | Stable | Stable | | C#/.NET | Stable | Stable | Stable | | Erlang/Elixir | Stable | Experimental | Experimental | | Go | Stable | Stable | Beta | | Java | Stable | Stable | Stable | | JavaScript | Stable | Stable | Experimental | | PHP | Stable | Stable | Stable | | Python | Stable | Stable | Experimental | | Ruby | Stable | In development | In development | | Rust | Beta | Alpha | Alpha | | Swift | Stable | Experimental | In development | A language-specific implementation of OpenTelemetry in C++. A language-specific implementation of OpenTelemetry in .NET. A language-specific implementation of OpenTelemetry in Erlang/Elixir. A language-specific implementation of OpenTelemetry in Go. A language-specific implementation of OpenTelemetry in Java. A language-specific implementation of OpenTelemetry in JavaScript (for Node.js & the browser). A language-specific implementation of OpenTelemetry in PHP. A language-specific implementation of OpenTelemetry in Python. A language-specific implementation of OpenTelemetry in Ruby. A language-specific implementation of OpenTelemetry in Rust. A language-specific implementation of OpenTelemetry in Swift. Language-specific implementation of OpenTelemetry for other languages." } ]
{ "category": "Observability and Analysis", "file_name": "docs.nodesource.com.md", "project_name": "NodeSource", "subcategory": "Observability" }
[ { "data": "At times, Node.js can feel like a black box. Shifting to an asynchronous programming model changes how developers are required to handle and interpret existing data. In order to help customers gain more visibility, we provide the N|Solid Runtime. The N|Solid Runtime is a build of Node.js bundled with an advanced native C++ component, the N|Solid Agent. The N|Solid Agent runs on its own thread inside your application, with direct access to the core elements of Node.js, libuv and the V8 JavaScript engine. The N|Solid Runtime provides access to detailed metrics, and allows you to control application behavior in production. It also includes a powerful security policy model that allows restricting access to system resources by untrusted modules. N|Solid 5.x is delivered bundled with either a Node.js v18.x Hydrogen LTS or Node.js v20.x Iron LTS runtime. All N|Solid features are additive, meaning any application that runs on Node.js will also work with N|Solid. To verify the version of Node.js you have bundled in N|Solid, use nsolid -v. For more information about the Node.js API, consult the Node.js API Documentation. This guide will help you quickly install all of the N|Solid components for a production environment. We provide production rpm and deb packages for use in Linux distributions and a windows MSI installer for use on Microsoft Windows Server 2016 & 2019. Our rpm and deb packages for use in Linux distributions makes it significantly easier to set up the N|Solid components by using our repositories. We support the following rpm based 64-bit Linux distributions: We also support the following deb based 64-bit Linux distributions: If you do not wish to use these packages, manual installation instructions are provided below. Note: The default Node.js runtime used in the instructions below is Node.js v20 LTS (Iron). If you wish to use a version of N|Solid based on Node.js v18 LTS (Hydrogen), replace any instances of 20.x with 18.x. For rpm based distributions, use the following command as root: ``` $ curl -fsSL https://rpm.nodesource.com/setup_20.x | bash - ``` For deb based distributions, there is a similar command run as root: ``` $ curl -fsSL https://deb.nodesource.com/setup_20.x | bash - ``` For rpm based systems, install all the required components with the command: ``` $ sudo yum install nsolid -y ``` For deb based systems, the process is similar. Install the required components: ``` $ sudo apt-get install nsolid -y ``` Once you have set up the NodeSource N|Solid package repository appropriate to your system, you will automatically get point releases via the standard system update tools. For example, fetch and install any new versions of RPM packages with: ``` $ sudo yum -y update ``` To install the N|Solid Runtime and Console via the MSI bundle for Windows 10 complete the following steps: Once N|Solid's major components are intalled a PowerShell will pop up automatically to install Chocolatey, and other Node.js dependencies. You may be asked to restart your operating system. Please preceed to do so. PLEASE NOTE: You can optionally 'Remove' or 'Install' the N|Solid Console Service via the corresponding shortcuts in the Windows Start menu: NodeSource provides a Windows MSI installer to deploy NSolid in production enviornments. We support the following Windows based distributions: Note: The default Node.js runtime used in the instructions below is Node.js v16 LTS (Gallium). 
The N|Solid runtime can be configured via the environment variables listed below. Although each of these environment variables is optional, certain features will be missing or limited in each case. It is recommended that you connect to the console using NSOLID_COMMAND or a StatsD collector using NSOLID_STATSD, and customize your process using any of the remaining environment variables. | Environment Variable | Description | |:|:--| | NSOLID_COMMAND | The route to the console command port. It should be formatted as \"host:port\". The default host is localhost, and the default port is 9001. Without this environment variable, the N|Solid Agent will not attempt to connect. The host can be provided via several formats: IPv4 10.0.0.21 IPv6 [2001:cdba::3257:9652] hostname nsolidhub.local | | NSOLID_APPNAME | The name of the application that the instance is running. Use this in order to create logical groupings of the processes in the console. If omitted, the value defaults to the name field of your package.json.
If this is also omitted, the value defaults to untitled application | | NSOLID_TAGS | The list of tags associated with your instance, which can be used to identify and filter instances in Console views. See Tags and Filters for more details | | NSOLID_PUBKEY | The ZMQ public key used with the N|Solid Console server | | NSOLID_HOSTNAME | The hostname the N|Solid process is running on (overrides system-determined hostname) | | NSOLIDSTATSD | The route to a StatsD collector if you wish to send metrics directly to an external data collection service from the N|Solid process. Without this environment variable, the N|Solid Agent will not attempt to send StatsD data. It should be formatted as \"host:port\". If unspecified, the default host is localhost and port is" }, { "data": "Hosts and IP addresses are accepted in the same format as NSOLIDCOMMAND | | NSOLID_INTERVAL | The interval, in milliseconds, in which the N|Solid Agent will collect and report metrics to the console and/or the StatsD collector if either are connected. This value defaults to 3000 (3 seconds) and cannot be lower than 1000 | | NSOLID_OTLP | It defines the type of OTLP endpoint we want N | | NSOLIDOTLPCONFIG | Specific configuration for the OTLP endpoint tye defined with NSOLID_OTLP. See the OpenTelemetry section for more details | | NSOLIDSTATSDBUCKET | An override for the default StatsD bucket (key prefix) when using the NSOLID_STATSD functionality. See the StatsD section for more details | | NSOLIDSTATSDTAGS | Optional tags to append to StatsD metric data when using the NSOLID_STATSD functionality. See the StatsD section for more details | | NSOLIDTRACINGENABLED | Boolean to indicate if you want N|Solid to generate traces when connected to an endpoint that supports it. See the Tracing section for more details | | NSOLIDTRACINGMODULES_BLACKLIST | List of core modules instrumented by N|Solid you want to disable when tracing is enabled. See the Tracing section for more details | | NSOLIDREDACTSNAPSHOTS | Boolean to indicate if you want heap snapshots to obscure string variable contents. This may impede your ability to debug and is meant for sensitive production environments. | The N|Solid process can also be configured via an nsolid object in the package.json file for the current application. Environment variable values override the properties in the package.json file. The mapping of nsolid object properties in the package.json file to environment variables is as follows: | nsolid property in package.json | Environment Variable | |:-|:-| | command | NSOLID_COMMAND | | pubkey | NSOLID_PUBKEY | | statsd | NSOLID_STATSD | | statsdBucket | NSOLIDSTATSDBUCKET | | statsdTags | NSOLIDSTATSDTAGS | | otlp | NSOLID_OTLP | | otlpConfig | NSOLIDOTLPCONFIG | | tracingEnabled | NSOLIDTRACINGENABLED | | tracingModulesBlacklist | NSOLIDTRACINGMODULES_BLACKLIST | | hostname | NSOLID_HOSTNAME | | env | NODE_ENV | | interval | NSOLID_INTERVAL | | tags | NSOLID_TAGS - this may be an array of tag values or a comma-separated string of tag values | | app | NSOLID_APPNAME | ``` { \"name\": \"message-service\", \"version\": \"1.0.0\", \"nsolid\": { \"env\": \"production\", \"command\": \"nsolid-command-host.local:9001\", \"app\": \"messaging\", \"tags\": \"aws, mq\" } } ``` The N|Solid Command Line Interface (CLI) can be configured via a configuration file, .nsolid-clirc. 
``` { \"remote\": \"http://localhost:6753\", \"app\": \"my-node-application\" } ``` Another option, which can be useful if you don't have access to static configuration data, is to use the N|Solid API from within your application. For example, if you have a configuration management platform or database that knows the hostname and port to your Console, you may obtain it and then invoke nsolid.start() to connect and begin reporting metrics. This can be done at any time, as long as your process is not already connected to an N|Solid Console. The keys are the same as the package.json keys documented above, with the following difference: Example: ``` const nsolid = require('nsolid') nsolid.start({ command: 'nsolid-command-host.local:9001', tags: ['nsolid-awesome', 'Auth service'], app: 'nsolid-awesome', appVersion: '1.0.0' }) ``` Get the N|Solid app name (equal to nsolid.app getter). ``` const nsolid = require('nsolid') nsolid.appName 'nsolid-awesome' ``` Get the N|Solid app version (taken from the package.json file). ``` const nsolid = require('nsolid') nsolid.appVersion '1.0.0' ``` Get the N|Solid app name. ``` const nsolid = require('nsolid') nsolid.app 'nsolid-awesome' ``` ``` const nsolid = require('nsolid') process.on('uncaughtException', err => { nsolid.clearFatalError(err) proces.exit(1) }) ``` Get the N|Solid application config. ``` const nsolid = require('nsolid') nsolid.config { app: 'nsolid-awesome', appVersion:" }, { "data": "blockedLoopThreshold: 200, env: 'prod', hostname: 'nsolid-host', interval: 3000, pauseMetrics: false, pubkey: '<3', statsdBucket: 'nsolid.${env}.${app}.${hostname}.${shortId}', tags: ['nsolid-awesome', 'Auth service'] } ``` Get an N|Solid worker thread name. ``` const { Worker, isMainThread } = require('worker_threads'); const { getThreadName, setThreadName } = require('nsolid') if (!isMainThread) { console.log(getThreadName()) // '' setThreadName('worker-parser') console.log(getThreadName()) // worker-parser return setTimeout(() => {}, 1000) } const worker = new Worker(filename) ``` Get the N|Solid agent id. ``` const nsolid = require('nsolid') nsolid.id 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' ``` It returns relevant information about the running process and platform is running on. ``` const nsolid = require('nsolid') nsolid.info() { id: '14e708a4bdb1fc5763fa1f29a9567229ab1b1ac4', app: 'beautiful-nsolid-app', appVersion: '1.0.0', tags: undefined, pid: 855556, processStart: 1600731441576, nodeEnv: undefined, execPath: 'your-beautiful-app-directory/beautiful-nsolid-app', main: 'index.js', arch: 'x64', platform: 'linux', hostname: 'NodeSource', totalMem: 16302604288, versions: { node: '12.18.4', nsolid: '4.0.0', v8: '7.8.279.23-node.39', uv: '1.38.0', zlib: '1.2.11', brotli: '1.0.7', ares: '1.16.0', modules: '72', nghttp2: '1.41.0', napi: '6', llhttp: '2.1.2', http_parser: '2.9.3', openssl: '1.1.1g', cldr: '37.0', icu: '67.1', tz: '2019c', unicode: '13.0' }, cpuCores: 4, cpuModel: 'Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz' } ``` The nsolid.info() method can be called asynchronously too: ``` const nsolid = require('nsolid') nsolid.info((err, info) => { if (!err) { // Yay! } }) ``` It retrieves a list of enviroment and process metrics. 
``` const nsolid = require('nsolid') nsolid.metrics() { threadId: 0, timestamp: 0, activeHandles: 0, activeRequests: 0, heapTotal: 4464640, totalHeapSizeExecutable: 524288, totalPhysicalSize: 3685528, totalAvailableSize: 2194638976, heapUsed: 2666008, heapSizeLimit: 2197815296, mallocedMemory: 8192, externalMem: 833517, peakMallocedMemory: 91744, numberOfNativeContexts: 1, numberOfDetachedContexts: 0, gcCount: 1, gcForcedCount: 0, gcFullCount: 0, gcMajorCount: 0, gcDurUs99Ptile: 812, gcDurUsMedian: 812, dnsCount: 0, httpClientAbortCount: 0, httpClientCount: 0, httpServerAbortCount: 0, httpServerCount: 0, dns99Ptile: 0, dnsMedian: 0, httpClient99Ptile: 0, httpClientMedian: 0, httpServer99Ptile: 0, httpServerMedian: 0, loopIdleTime: 0, loopIterations: 0, loopIterWithEvents: 0, eventsProcessed: 0, eventsWaiting: 0, providerDelay: 0, processingDelay: 0, loopUtilization: 1, res5s: 0, res1m: 0, res5m: 0, res15m: 0, loopTotalCount: 0, loopAvgTasks: 0, loopEstimatedLag: 40.416415, loopIdlePercent: 0, title: 'nsolid', user: 'NodeSource', uptime: 0, systemUptime: 96314, freeMem: 2586660864, blockInputOpCount: 0, blockOutputOpCount: 0, ctxSwitchInvoluntaryCount: 14, ctxSwitchVoluntaryCount: 21, ipcReceivedCount: 0, ipcSentCount: 0, pageFaultHardCount: 0, pageFaultSoftCount: 1929, signalCount: 0, swapCount: 0, rss: 31719424, load1m: 2.79, load5m: 2.52, load15m: 2.47, cpuUserPercent: 86.061918, cpuSystemPercent: 15.533329, cpuPercent: 101.595247, cpu: 101.595247 } ``` If callback is passed, it returns the metrics asynchronously: ``` const nsolid = require('nsolid') nsolid.metrics((err, metrics) => { if (!err) { // Yay!! All the metrics are in store in metrics param } }) ``` See Metrics in detail for more information about each metric. Whether the agent is currently retrieving metrics or not. ``` const nsolid = require('nsolid') nsolid.metricsPaused false ``` See nsolid.pauseMetrics() Registers a listener to be called anytime a new license is set in the NSolid runtime. ``` const nsolid = require('nsolid') nsolid.onLicense(({ licensed, expired }) => { if (expired) { Logger.error('Invalid license!') return } doSomethingAwesome() }) ``` It retrieves the list of packages listed in the package.json file and its dependencies, it returns a list of all packages that can be required by walking through all directories that could contain a module. ``` const nsolid = require('nsolid') nsolid.packages() [ { path: 'nsolid-service/node_modules/js-tokens', name: 'js-tokens', version: '4.0.0', main: 'index.js', dependencies: [], required: false }, { path: 'nsolid-service/node_modules/important-package', name: 'important-package', version: '3.0.0', main: 'main.js', dependencies: ['../express', '../object-assign'], required: false }, ... ] ``` The nsolid.packages() method has async implementation too: ``` const nsolid = require('nsolid') nsolid.packages((err, packages) => { if (!err) { console.log(packages) } }) [ { path: 'your-beautiful-app-directory/beautiful-nsolid-app', name: 'beautiful-nsolid-app', version: '1.5.2', main: 'index.js', dependencies: [], required: false } ] ``` It pauses the process metrics collection, and do nothing if metrics were already paused (this method must be called on the main thread). ``` const nsolid = require('nsolid') nsolid.pauseMetrics() ``` See" }, { "data": "It gets the start date of the process in milliseconds. 
``` const nsolid = require('nsolid') nsolid.processStart 1600730465456 ``` This is the way to trigger a CPU profile programmatically from within your application and have the resulting profile saved on your N|Solid console. The profile duration 600000 milliseconds, which is the default if none is specified, and profiler can be stoped before the duration expires. ``` const nsolid = require('nsolid') nsolid.profile(600000 / durationMilliseconds /, err => { if (err) { // The profile could not be started! } }) ``` The CPU profiler can be triggered in a synchronous way: ``` const nsolid = require('nsolid') try { nsolid.profile(600000 / durationMilliseconds /) } catch (err) { // The profile could not be started! } ``` Learn more about Cpu profiling. To complete the profile before the duration expires, call profileEnd method: ``` const nsolid = require('nsolid') nsolid.profileEnd(err => { if (err) { // Error stopping profiler (may it was not running) } }) ``` Learn more about Cpu profiling. It resumes the process metrics collection, and does nothing if metrics were already resumed (this method must be called on the main thread). ``` const nsolid = require('nsolid') nsolid.resumeMetrics() ``` If you must exit asynchronously from an uncaughtException handler it is still possible to report the exception by passing it to nsolid.saveFatalError() prior to shutting down. ``` const nsolid = require('nsolid') process.on('uncaughtException', err => { nsolid.saveFatalError(err) proces.exit(1) }) ``` If it is preferred to handle workers threads with names, this could be achieved by using the nsolid.setThreadName(name) method. ``` const { Worker, isMainThread } = require('worker_threads'); const { getThreadName, setThreadName } = require('nsolid') if (!isMainThread) { console.log(getThreadName()) // '' setThreadName('worker-renamed') console.log(getThreadName()) // worker-renamed return setTimeout(() => {}, 1000) } else { const worker = new Worker(filename) } ``` The main thread can also have a name ``` const { isMainThread } = require('worker_threads'); const { setThreadName } = require('nsolid') if (isMainThread) setThreadName('app') ``` A worker (or the main thread) can be named only with strings, otherwise, an exception will be thrown. ``` const { Worker, isMainThread } = require('worker_threads'); const { getThreadName, setThreadName } = require('nsolid') if (!isMainThread) { try { setThreadName(null) } catch (err) { / TypeError [ERRINVALIDARG_TYPE]: The \"name\" argument must be of type string. Received null */ } } else { const worker = new Worker(filename) } ``` This is the way to take a heap snapshot programmatically from within your application and have the resulting snapshot(s) saved on your N|Solid console. ``` const nsolid = require('nsolid') nsolid.snapshot(err => { if (err) { // The snapshot could not be created! } }) ``` Snapshots can also be taken on a synchronous way: ``` const nsolid = require('nsolid') try { nsolid.snapshot() } catch (err) { // The snapshot could not be created! } ``` Learn more about Heap snapshots. It retrieves the startup times of the process in a high resolution array of [seconds, nanoseconds]. 
``` const nsolid = require('nsolid') nsolid.startupTimes() { initialized_node: [ 0, 971137 ], initialized_v8: [ 0, 181632 ], loaded_environment: [ 0, 3874746 ] } ``` The nsolid.startupTimes() method can be called asynchronously: ``` const nsolid = require('nsolid') nsolid.startupTimes((err, startupTimes)) { if (!err) { console.log(startupTimes) } } { initialized_node: [ 0, 971137 ], initialized_v8: [ 0, 181632 ], loaded_environment: [ 0, 3874746 ] } ``` The nsolid.statsd object: | Property | Description | |:--|:-| | counter: | Send a \"counter\" type value to statsd. Will use NSOLIDSTATSDBUCKET and NSOLIDSTATSDTAGS if configured. | | format: | Function that retrieves the statsd agent status. | | gauge: | Send a \"gauge\" type value to" }, { "data": "Will use NSOLIDSTATSDBUCKET and NSOLIDSTATSDTAGS if configured. | | sendRaw: | Send a raw string to statsd. Caller is required to comply to statsd format (terminating newline not required). An array of strings is also permissible, they will be sent newline separated. If connected via UDP, data sent via sendRaw() will be split up into 1400Kb batches to fit within a standard MTU window, this applies to newline separated strings or an array of strings passed to this interface. | | set: | Send a \"set\" type value to statsd. Will use NSOLIDSTATSDBUCKET and NSOLIDSTATSDTAGS if configured. | | status: | Function that retrieves the statsd agent status. | | tcpIp: | If configured, returns the ip address tcp statsd data will be sent to. | | timing: | Send a \"timing\" type value to statsd. Will use NSOLIDSTATSDBUCKET and NSOLIDSTATSDTAGS if configured. | | udpIp: | If configured, returns the ip address udp statsd data will be sent to. | The statsd statuses are: Usage example: ``` const nsolid = require('nsolid') nsolid.statsd.status() 'unconfigured' ``` The nsolid.statsd.format object: | Property | Description | |:--|:--| | bucket: | Returns the \"bucket\" string prepended to all auto-generated statsd metric strings. | | counter: | Format a \"counter\" string for name and value suitable for statsd. | | gauge: | Format a \"gauge\" string for name and value suitable for statsd. | | set: | Format a \"set\" string for name and value suitable for statsd. | | timing: | Format a \"timing\" string for name and value suitable for statsd. | Usage example: ``` const nsolid = require('nsolid') nsolid.statsd.format.bucket() '' ``` Get the N|Solid app tags (taken from the package.json file or NSOLID_TAGS environment variable). ``` const nsolid = require('nsolid') nsolid.tags ['nsolid-awesome', 'Auth service'] ``` The nsolid.traceStats object: | Property | Description | |:-|:-| | dnsCount: | The process's total number of DNS lookups performed | | httpClientAbortCount: | The process's total number of outgoing HTTP(S) client requests canceled due to inactivity. | | httpClientCount: | The process's total number of outgoing HTTP(S) client requests performed. | | httpServerAbortCount: | The process's total number of served incoming HTTP(S) requests canceled. | | httpServerCount: | The process's total number of incoming HTTP(s) requests served. | The nsolid.zmq object: | Property | Description | |:--|:-| | status: | Function that retrieves the zmq agent status. | The zmq agent statuses are: Usage example: ``` const nsolid = require('nsolid') nsolid.zmq.status() 'ready' ``` N|Solid has added support for some OpenTelemetry features: Using the OpenTelemetry JS API @opentelemetry/api to instrument your own code is very easy. 
N|Solid provides a nsolid.otel.register() API which allows to use the N|Solid implementation of the OpenTelemetry TraceAPI. See a very basic example in the following code. Notice that for the traces to be generated the enviroment variable NSOLIDTRACINGENABLED should be set. ``` // Run this code with `NSOLIDTRACINGENABLED=1` so traces are generated. const nsolid = require('nsolid'); const api = require('@opentelemetry/api'); if (!nsolid.otel.register(api)) { throw new Error('Error registering api'); } const tracer = api.trace.getTracer('Test tracer'); const span = tracer.startSpan('initial', { attributes: { a: 1, b: 2 }}); span.updateName('my name'); span.setAttributes({c: 3, d: 4 }); span.setAttribute('e', 5); span.addEvent('my_event 1', Date.now()); span.addEvent('my_event 2', { attr1: 'val1', attr2: 'val2'}, Date.now()); span.end(); ``` N|Solid also provides a nsolid.otel.registerInstrumentations() API to register instrumentation modules that use the OpenTelemetry TraceAPI that are available in the OpenTelemetry echosystem. The following code shows an example using the @opentelemetry/instrumentation-fs module: ``` // Run this code with `NSOLIDTRACINGENABLED=1` so traces are" }, { "data": "const nsolid = require('nsolid'); const api = require('@opentelemetry/api'); const os = require('os'); const path = require('path'); const { FsInstrumentation } = require('@opentelemetry/instrumentation-fs'); nsolid.start({ tracingEnabled: true }); if (!nsolid.otel.register(api)) { throw new Error('Error registering api'); } nsolid.otel.registerInstrumentations([ new FsInstrumentation({ }) ]); const fs = require('fs'); fs.mkdtemp(path.join(os.tmpdir(), 'foo-'), (err, directory) => { if (err) throw err; console.log(directory); }); ``` It's possible now to export traces with N|Solid to endpoints supporting the OpenTelemetry Protocol(OTLP) over HTTP. On top of that we make very easy to send traces to specific vendors endpoints such as Datadog, DynaTrace and NewRelic. And not only that, for these vendors we're also able to export the metrics N|Solid generates, so this info can also be displayed in their solutions with no need to use their agents which have the performance issues explained in this article. To configure the OTLP endpoint there are two configuration options we need to set either via NSOLIDOTLP and NSOLIDOTLP_CONFIG the environment variables or the other ways N|Solid provides to set them. NSOLID_OTLP defines the type of endpoint we're exporting the traces to. Allowed values at the moment are: NSOLIDOTLPCONFIG defines the configuration for the type of endpoint selected in NSOLID_OTLP. This configuration is a string containing a JS object serialized using JSON. The format of this JS object differs depending on the type of endpoint. | Endpoint Type | Format | |:-|:-| | datadog | { zone: 'us' | 'eu', key: 'yourdatadogkey', url: 'otlpendpointurl' } | | dynatrace | { site: 'youdynatracesize', token: 'your_dynatrace' } | | newrelic | { zone: 'us' | 'eu', key: 'yourdatadogkey' } | | otlp | { url: 'otlpendpointurl' } | ``` { zone: 'us' | 'eu', key: 'yourdatadogkey', url: 'otlpendpointurl' }``` ``` { site: 'youdynatracesize', token: 'your_dynatrace' }``` ``` { zone: 'us' | 'eu', key: 'yourdatadogkey' }``` ``` { url: 'otlpendpointurl' }``` Here is an example of how to configure N|Solid to export data to Dynatrace. Notice you need N|Solid to be licensed. 
``` $ NSOLIDOTLP=dynatrace NSOLIDOTLPCONFIG='{\"site\":\"mysite\",\"token\":\"mytoken\"}' NSOLIDLICENSETOKEN=mynsolidlicense nsolid myprocess.js ``` The N|Solid Agent is able to periodically send a suite of metrics directly to a StatsD-compatible data collector endpoint. As this functionality is built directly into the nsolid executable, you do not need to be connected to the N|Solid Console server in order to use it. Independent nsolid processes can be configured to talk to StatsD collectors using environment variables when starting your application. Consult the Metrics in Detail section for a complete list of metrics that are sent to the StatsD collector for your N|Solid processes. Consult the Using StatsD or AppDynamics without connecting to Console section for instructions on configuring a license token, if your N|Solid processes are not connected to the N|Solid Console server. StatsD is a standardized protocol that is supported by many data collection databases and services. Supported backends include AWS CloudWatch, DataDog, ElasticSearch, Graphite, InfluxDB, MongoDB, OpenTSDB, StackDriver, Zabbix and many more. See the StatsD documentation for a more complete list. Supply the NSOLID_STATSD environment variable when starting an N|Solid process to have the Agent attempt to connect to an endpoint. The format of this value is \"host:port\". If unspecified, the default host is localhost and port is 8125. The host can be specified as a hostname string, an IPv4 or IPv6 address. Once connected, a suite of metrics will be sent to the collector using the StatsD protocol. The NSOLIDINTERVAL environment variable can be used to adjust the default reporting interval of 3 seconds. Be aware that this will also change the reporting interval for the N|Solid Console connection if connected via the NSOLIDCOMMAND environment variable. StatsD metrics are reported using" }, { "data": "A bucket is the full name, or key, of the entry being reported. Buckets need to be descriptive enough to clearly identify the process, host and metric type being reported. It is important that you have enough granularity to be able to inspect the data at the level you require, while also retaining the ability to group metrics for aggregate reporting in your frontend using bucket wildcard mechanisms. This will depend somewhat on your reporting frontend and may require some experimentation. By default, N|Solid creates bucket names prefixed with \"nsolid.<env>.<app>.<hostname>.<shortId>.\", followed by the name of the metric. In this string: To override the default StatsD metric bucket strings, provide a string via the NSOLIDSTATSDBUCKET environment variable to be used as the full prefix. ${key} style variables can be used to insert any of the above values. The default bucket prefix would be specified as follows: \"nsolid.${env}.${app}.${hostname}.${shortId}\". Your StatsD data, by default, will be formatted like so when sent to the collector: ``` nsolid.prod.myapp.nodehost.803bbd5.uptime:314.4|g nsolid.prod.myapp.nodehost.803bbd5.rss:28401664|g nsolid.prod.myapp.nodehost.803bbd5.heapTotal:8425472|g nsolid.prod.myapp.nodehost.803bbd5.heapUsed:5342488|g ... ``` Some backends, such as DataDog support \"tags\" for metric reporting. By default, N|Solid does not append any tags to its metrics. If required, you can supply your own tags to be appended to all reported metrics from an individual process. 
Using the NSOLIDSTATSDTAGS environment variable, or the statsdTags property of the nsolid section in package.json, supply a string with the same variable substitution format as for the buckets above. In addition to env, app, hostname, shortId and id variables, you can also make use of tags to insert the N|Solid tags that have been supplied for the process. StatsD tags should be a comma-separated list of strings that your backend can decode. Refer to your backend documentation for how these values are made use of for reporting and whether their use will be suitable for your deployment. This feature allows users to dynamically toggle metrics collection off. pauseMetrics() will cause the agent to no longer send metrics via zmq or StatsD. Internally metrics are collected and recalculated every duration of NSOLID_INTERVAL (default: 3 seconds). When sending metrics is paused the metrics will still be collected and recalculated internally every NSOLID_INTERVAL. This way when the metrics are again sent they aren't skewed for the duration they were paused. Custom commands can be triggered via the N|Solid Command Line Interface (CLI). Custom commands allow you to interact with your application's processes in ways specific to your business needs. To implement a custom command, create a function to handle the command, and register that function with N|Solid. The custom command function should be defined to take a single parameter, request: ``` function customCommandHandler(request) { ... } ``` The request parameter is an object with the following properties/functions: | Property | Description | |:-|:| | request.value | An optional piece of data sent with the command, using the nsolid-cli parameter --data | | request.return(value) | The function to call when you are ready to return the result of this command. The N|Solid Agent will reply with the value passed as a parameter | | request.throw(error) | A function to call if an error is encountered. The N|Solid Agent will reply with the error passed as a parameter | Your function you must call either request.return() or request.throw() to signal completion of the command. To get access to N|Solid's built-in nsolid module, call require(\"nsolid\"). A custom command handler is registered using the nsolid.on() function: The" }, { "data": "function takes the following parameters: | Parameter | Description | |:|:--| | commandName | The string name of the command to implement | | handler | The custom command function implementing the command | ``` nsolid.on(commandName, handler) ``` ``` const nsolid = require(\"nsolid\") ... nsolid.on(\"foo\", fooCommand) ... function fooCommand(request) { ... } ``` Below is an example of how you can use custom commands to dynamically change the configuration state of the application, specifically the log level. The example assumes that a global boolean variable Verbose is used to indicate whether to log verbosely or not. ``` // // This program is a simple \"server\" which does nothing, but does implement // an N|Solid custom command, named `verbose`. Once you've started this program // with the N|Solid agent enabled, you can send the `verbose` command as in: // // nsolid-cli --id $NSOLID_AGENTID custom --name verbose // nsolid-cli --id $NSOLID_AGENTID custom --name verbose --data on // nsolid-cli --id $NSOLID_AGENTID custom --name verbose --data off // // All these forms get or set the \"verbose\" level of logging in the program. 
// // The server logs a message every second when \"verbose\" is off, and logs // an additional message after that one when \"verbose\" is on. The default // setting of \"verbose\" is false. // \"use strict\" // get access to N|Solid's built-in module `nsolid` const nsolid = require(\"nsolid\") // the current \"verbose\" level let Verbose = false // register the `verbose` command for nsolid-cli nsolid.on(\"verbose\", verboseCommand) // your server which doesn't do much setInterval(onInterval, 2000) console.log(\"N|Solid custom command demo - log-level - started\") console.log(\"\") console.log(\"to use the verbose command with `nsolid-cli`, run:\") console.log(\" nsolid-cli --id $NSOLID_AGENTID custom --name verbose\") console.log(\" nsolid-cli --id $NSOLID_AGENTID custom --name verbose --data on \") console.log(\" nsolid-cli --id $NSOLID_AGENTID custom --name verbose --data off \") // function onInterval() { log(\"interval event!\") logVerbose(\"some extra logging here\") } // // implements the `verbose` command for nsolid-cli // function verboseCommand(request) { // if \"on\" or \"off\" passed in with --data, set Verbose appropriately if (request.value == \"on\") { Verbose = true } else if (request.value == \"off\") { Verbose = false } else if (request.value) { return request.throw(\"expecting data of `on` or `off`, got \" + request.value) } // return current value of Verbose return request.return({verbose: Verbose}) } // function log(message) { console.log(message) } // function logVerbose(message) { if (!Verbose) return log(message) } ``` When running your application with the N|Solid agent active, you can use the following command to return the current value of the Verbose setting: ``` $ nsolid-cli --id $NSOLID_AGENTID custom --name verbose ``` To set Verbose on or off, use one of the following commands: ``` $ NSOLID_AGENTID=\"69d916ad395061f80589e20bef9af3cb50ece9cb\" # This will change $ nsolid-cli --id $NSOLID_AGENTID custom --name verbose --data on $ nsolid-cli --id $NSOLID_AGENTID custom --name verbose --data off ``` The output of these three CLI commands looks like the following: ``` {\"verbose\":false,\"time\":1472496720972,\"timeNS\":\"1472496720972042976\",\"id\":\"69d916ad395061f80589e20bef9af3cb50ece9cb\",\"app\":\"my-verbose-app\",\"hostname\":\"titania\"} ``` In addition to the built-in lifecycle events, you can add your own using the process.recordStartupTime(label) function. The label will then be used in the JSON output of the startup-times command. You can use this to record the times at various stages of your application's startup. 
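For example, you might record when an asynchronous initialization phase finishes. The sketch below is illustrative only; the loadConfig() helper, the config.json file, and the config_loaded label are hypothetical placeholders, not part of the N|Solid API.

```
const fs = require('fs/promises')

// Hypothetical helper: read application configuration from disk.
async function loadConfig () {
  const raw = await fs.readFile('./config.json', 'utf8')
  return JSON.parse(raw)
}

async function main () {
  const config = await loadConfig()

  // Record a custom startup phase. The label shows up in the startup-times
  // output as a [seconds, nanoseconds] pair, alongside initialized_node,
  // initialized_v8, and loaded_environment.
  process.recordStartupTime('config_loaded')

  console.log('configuration loaded with keys:', Object.keys(config))
  // ... continue bootstrapping the application here ...
}

main().catch((err) => {
  console.error(err)
  process.exit(1)
})
```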
For instance: To obtain the startup timing values, you can use the nsolid-cli startup-times" }, { "data": "N|Solid provides three startup times by default via startup-times: | Parameter | Description | |:-|:--| | initialized_node | The time it takes to initialize the Node internals, reported in [seconds, nanoseconds] | | initialized_v8 | The time it takes to initialize the V8 engine, reported in [seconds, nanoseconds] | | loaded_environment | The time it takes to complete all initialization, which includes running some of Node's internal JavaScript code, and your main module's top-level code, reported in [seconds, nanoseconds] | Usage ``` $ nsolid-cli startup-times ``` ``` { \"id\": \"bf24f4ed072b3bb4b220aa81fa3a73fde8038409\", \"app\": \"MyApp\", \"hostname\": \"myApp.example.com\", \"initialized_node\": [ 0, 130404 ], \"initialized_v8\": [ 0, 482651 ], \"loaded_environment\": [ 0, 620207709 ] } ``` This indicates that Node was initialized in 130,404 nanoseconds (which is 130 microseconds, 0.130 milliseconds, or 0.000130 seconds). Note: The timing information is provided in hrtime format, which is a two element array of [seconds, nanoseconds]. A nanosecond is one billionth (1,000,000,000th) of a second. The time values are the elapsed time since the process was started. Below is an example web server instrumented to provide the time it takes for the web server to start listening to connections: ``` const http = require(\"http\") const server = http.createServer(onRequest) server.listen(3000, onListen) function onListen() { console.log(\"server listening on http://localhost:3000\") process.recordStartupTime(\"http_listen\") } function onRequest(request, response) { response.end(\"Hello, world!\") } ``` To start this program with the N|Solid agent listening on port 5000, use the following command: ``` $ NSOLIDAPPNAME=http-sample nsolid httpsample.js ``` To obtain the startup times, including the custom timer, use the following command: ``` $ nsolid-cli startup-times --app http-sample ``` ``` { \"id\": \"bf24f4ed072b3bb4b220aa81fa3a73fde8038409\", \"app\": \"http-sample\", \"hostname\": \"http-sample.example.com\", \"initialized_node\": [ 0, 129554 ], \"initialized_v8\": [ 0, 460521 ], \"loaded_environment\": [ 0, 95201339 ], \"http_listen\": [ 0, 94902772 ] } ``` N|Solid comes with a set of predefined endpoints for interaction and introspection that can be accessed at the command-line using the N|Solid Command Line Interface, nsolid-cli. The N|Solid CLI tool mirrors the Console's API schema. Many options are documented below. For commands not documented here, consult your Console's API Docs at http://localhost:6753/api/v3/api-docs (replace localhost:6753 with your Console server's address.) To see a list of help information for nsolid-cli, use the -h option: ``` $ nsolid-cli -h ``` The output of the nsolid-cli commands, unless otherwise noted, is line-delimited JSON (each line is separated by a newline delimiter \\n). The output shown in the examples below is expanded for readability. --auth If your Console is running with authentication enabled, you will need to configure an administrative access token to allow nsolid-cli to be permitted to access it. This can be set with the NSOLIDCONSOLEAUTHADMINTOKEN environment variable or the corresponding config file setting to a secure value value. Once set, you can pass this value to nsolid-cli using the --auth argument. To disable this authentication, see user authentication. 
--attach For the specific case of importing settings from a configuration file .nsconfig, you can use this option to read the contents from the file in the filesystem. --start and --end options For commands that take --start and --end options, you can pass the following formats for the date/time value: | Format | Description | |:-|:-| | milliseconds since epoch | Value returned from Date.now() | | yyyy-mm-ddThh:mm:ss | ISO date | | -Ns | N seconds since now | | -Nm | N minutes since now | | -Nh | N hours since now | | -Nd | N days since now | | 0 | Now, when used with the --end option | --q option Some commands take a --q parameter followed by a query filter expression. Only results matching all query filter terms will be returned. A filter term is a string with no spaces separating the values. \"field+operator+value\" The field may be any field from info or metrics, or vulns, vuln, or package. See the output of those commands to see what options exist for the field portion of the query. The operators for each field type are described below. The value can be a single value or a list of values separated by" }, { "data": "The term will evaluate to true if the field and operator match any of the specified values. Values may escape any character by preceding it with the % character. In practice, the only characters that need to be escaped are %, (space), and ,. A query filter expression is a list of filter terms, separated by space characters. Here is an example showing the list command being passed the --q flag with a query filter expression that has two filter terms: ``` $ nsolid-cli list --q \"vulns>=2 package=express\" ``` String Operators | Operator | Description | |:--|:| | \"=\" | Evaluates to true if the field value is equal to any value in the expression | | \"!=\" | Evaluates to true if the field value is not equal to any value in the expression | | \"~\" | Evaluates to true if the field value wild-card matches any value in the expression | | \"!~\" | Evaluates to true if the field value does not wild-card match any value in the expression | Note that multiple string values may be specified in the expression, separated by commas. Number Operators | Operator | Description | |:--|:| | \"<\" | Evaluates to true if the field value is less than the value in the expression | | \"<=\" | Evaluates to true if the field value is less than or equal to the value in the expression | | \">\" | Evaluates to true if the field value is greater than the value in the expression | | \">=\" | Evaluates to true if the field value is greater than or equal to the value in the expression | Package Operators | Operator | Description | |:--|:| | \"=\" | Evaluates to true if the field value equals the value in the expression | | \"!=\" | Evaluates to true if the field value does not equal the value in the expression | | \"<\" | Evaluates to true if the field value is less than the value in the expression | | \"<=\"\" | Evaluates to true if the field value is less than or equal to the value in the expression | | \">\" | Evaluates to true if the field value is greater than the value in the expression | | \">=\" | Evaluates to true if the field value is greater than or equal to the value in the expression | A value for packages fields is either package name or {package name}@{simple-semver-range}, where a simple semver range is one of: Semver Ranges | Type | Examples | |:-|:| | X-ranges | 1.2.3, 1.2, 1.2.x, 1.x, 1.2.*, etc. | | Tilde ranges | ~1.2.3, ~1.2, etc. | | Caret ranges | ^1.2.3, ^1.2, etc. 
| Using the < and > operators will do a semver range check comparing the versions. Download an asset. | Option | Description | |:|:| | --id | The asset id (required) | Usage ``` $ nsolid-cli asset --id 217040c0-02d1-4956-8648-7fb84b78c65e > my.heapsnapshot ``` Asset IDs are available via the assets command described below. The asset file itself will be written to stdout. The N|Solid CLI tool will automatically manage decompression if the asset is compressed. Lists the assets (CPU profiles and heap snapshots) that are currently available for" }, { "data": "| Option | Description | |:--|:-| | --id | The agent id or agent id prefix | | --app | The NSOLID_APP value | | --hostname | The host the process is running on | | --tag | An NSOLID_TAGS value (may be specified multiple times) | | --type | One of snapshot, snapshot-summary, or profile to limit the type of asset returned | | --start | Return assets that were created after the specified time | | --end | Return assets that were created before the specified time | | --starred | Return only assets that have been starred | Usage ``` $ nsolid-cli assets --app my-app-name --type snapshot ``` Returns a JSON stream including the following properties: | Property | Description | |:--|:-| | time | The timestamp of the asset completion | | asset | An asset id to use with the asset command | | type | profile, snapshot, or snapshot-summary | | id | The agent id | | app | The NSOLID_APP value | | hostname | The host the process is running on | | tags | The NSOLID_TAGS values | | size | The size of the asset in bytes | | compressed | Boolean value representing whether the asset will be served as a gzip | | pid | The operating system process id | | title | The process title | | info | The process info | | metrics | The process metrics nearest the time of collection | | starred | Boolean value representing whether the asset is starred or not | ``` { \"time\": \"2017-11-29T17:08:17.364Z\", \"asset\": \"3011f777-b8e0-4696-ae6c-50358bce298a\", \"type\": \"snapshot\", \"id\": \"272470293ef95e530b1d9d072e6ed87e0c980173\", \"app\": \"my-app-name\", \"hostname\": \"my-computer.local\", \"tags\": [ \"region:north\", \"zone:A\" ], \"size\": 4288158, \"compressed\": true, \"pid\": 5940, \"title\": \"my-app-name\", \"info\": { ... }, \"metrics\": { ... }, \"starred\": false } ``` Invoke a custom command. For more information on custom commands, see Custom Commands. | Option | Description | |:|:| | --id | The agent id or id prefix (required) | | --name | The name of the custom command (required) | | --data | Data to be sent with the command | Usage ``` $ nsolid-cli custom --id=[agent id] --name=verbose --data=off ``` Returns a JSON object with the following properties: | Property | Description | |:--|:-| | time | The timestamp recorded for the event | | id | The agent id | | app | The NSOLID_APP value | | hostname | The host the process is running on | | tags | The NSOLID_TAGS values | | result | The result of the custom command | ``` { \"time\": \"2017-12-04T00:56:28.566Z\", \"id\": \"81535293aea1fe8c1e2f3f7518d8db3f96cf7b39\", \"app\": \"nsolid2\", \"hostname\": \"computer.local\", \"tags\": [ \"localdev\" ], \"result\": { \"verbose\": false } } ``` Subscribe to the event stream, which emits a wide array of event types and metadata. Usage ``` $ nsolid-cli events ``` There are many types of events, and more are added all the time. Some of the primary types are described below. 
| Event Type | Description | |:|:-| | license-updated | The license data has been changed or verified with the license server | | field-range-changed | The range or option set for a field has expanded or contracted | | agent-packages-added | An agent's package list has been added or updated | | agent-found | A new agent has connected to the console | | agent-exit | An agent has terminated | ``` {\"time\":\"2017-12-04T01:47:16.386Z\",\"event\":\"license-updated\",\"args\":{\"licensed\":true}} {\"time\":\"2017-12-04T01:47:17.905Z\",\"event\":\"field-range-changed\",\"args\":{\"domain\":\"metrics\",\"name\":\"loopsPerSecond\",\"range\":[0,22]}} {\"time\":\"2017-12-04T01:47:48.613Z\",\"event\":\"agent-packages-added\",\"agent\":\"21fdaca1fd8533465392697e3d305e1991808836\",\"args\":{\"app\":\"my-app\",\"hostname\":\"x1c\",\"tags\":[],\"pid\":27646}} {\"time\":\"2017-12-04T01:47:48.613Z\",\"event\":\"agent-found\",\"agent\":\"21fdaca1fd8533465392697e3d305e1991808836\",\"args\":{\"app\":\"my-app\",\"hostname\":\"x1c\",\"tags\":[],\"pid\":27646}} {\"time\":\"2017-12-04T01:47:53.087Z\",\"event\":\"agent-exit\",\"agent\":\"21fdaca1fd8533465392697e3d305e1991808836\",\"args\":{\"app\":\"my-app\",\"hostname\":\"x1c\",\"tags\":[],\"pid\":27646,\"exitCode\":0}} ``` Extract the events from a range of time in the" }, { "data": "| Option | Description | |:|:| | --id | An agent id | | --type | An optional event type to only include | | --start | Return events that occurred after the specified time | | --end | Return events that occurred before the specified time | | --page | An optinal page number (events are paginated if this param is provided) | | --showLimit | An optional limit of the paginated records | | --orderBy | An optional field to order the events (such as agentId, hostname, etc) | | --order | An optional order parameter (asc or desc) | Usage ``` $ nsolid-cli events-historic --start \"-2h\" --end \"-1h\" ``` Returns a randomly generated keypair suitable for use in the N|Solid Console socket configuration. If your N|Solid Console instance is running on an untrusted network, it is recommended that you generate and use new keys. Usage ``` $ nsolid-cli generate-keypair ``` Returns a JSON object with the following properties: | Property | Description | |:--|:--| | public | Public key value. publicKey in N|Solid Console configuration, and env variable NSOLID_PUBKEY for N|Solid Runtime | | private | Private key value. privateKey in N|Solid Console configuration | ``` { \"public\": \"[t&m}{EZH7=HR(IW:+Ttk:=r.Y$:CP+-Q&5L?2N!\", \"private\": \"4QZof={^Pman?I?mB0o!]%z/{Jlu6:mJfl[Ms@[^\" } ``` Returns objects which contain static information about processes and the hosts they are running on. 
| Option | Description | |:|:| | --id | The full or partial agent id | | --q | The query options (see above) | Usage ``` $ nsolid-cli info ``` Returns a JSON stream including the following properties: | Property | Description | |:-|:--| | time | Milliseconds since epoch time message was sent | | id | The agent id | | app | The NSOLID_APP value or name property from package.json | | appVersion | The version property from package.json | | hostname | The host the process is running on | | tags | The NSOLID_TAGS values | | pid | Operating system process id | | processStart | The time the process started | | execPath | Path of the executable running the application | | main | The main module used when the application started up | | arch | The CPU architecture | | platform | Name of the N|Solid platform | | totalMem | Total available memory in the system | | cpuCores | The number of CPU cores | | cpuModel | The CPU model | | versions | Object describing the versions of components used in the runtime | ``` { \"id\": \"5dd6f7a940bfc3633cc3ffc82332640d51ce5112\", \"app\": \"my-app-name\", \"appVersion\": \"1.0.0\", \"tags\": [ \"region:north\", \"zone:A\" ], \"pid\": 14880, \"processStart\": 1512335595061, \"nodeEnv\": \"dev\", \"execPath\": \"/usr/bin/nsolid\", \"main\": \"/var/my-app/app.js\", \"arch\": \"x64\", \"platform\": \"linux\", \"hostname\": \"my-computer.local\", \"totalMem\": 8244523008, \"versions\": { \"http_parser\": \"2.7.0\", \"node\": \"8.10.0\", \"nsolid\": \"3.1.0\", \"v8\": \"6.2.414.50\", \"uv\": \"1.19.1\", \"zlib\": \"1.2.11\", \"ares\": \"1.10.1-DEV\", \"modules\": \"57\", \"nghttp2\": \"1.25.0\", \"openssl\": \"1.0.2n\", \"icu\": \"60.1\", \"unicode\": \"10.0\", \"cldr\": \"32.0\", \"tz\": \"2017c\", \"nsolid_lib\": { \"v8_profiler\": \"nsolid-v5.7.0-fix1\", \"sodium\": \"nsolid-2.1.0\", \"cjson\": \"nsolid-3.0.0\", \"function_origin\": \"nsolid-v1.2.1\", \"nan\": \"v2.5.1\", \"cli\": \"v3.0.0\", \"agent\": \"v8.0.3\", \"zmq-bindings\": \"nsolid-2.15.4-fix1\", \"zmq\": \"nsolid-v4.2.0-fix4\", \"persistentswithclassid\": \"v1.1.1\" } }, \"cpuCores\": 4, \"cpuModel\": \"Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz\", \"time\": \"2017-12-03T21:13:15.061Z\" } ``` Returns an array of all available matching N|Solid processes, along with their most recent info and metrics data. The command ls is an alias for this command. | Option | Description | |:|:--| | --q | The query object (see above) | Usage ``` $ nsolid-cli list --q id=5dd6 ``` Returns newline delimited JSON objects where each row includes the following properties: | Property | Description | |:--|:| | time | The timestamp of the last metrics payload | | info | The object returned from the info command | | metrics | The object returned from the metrics command | ``` { \"time\": \"2017-12-04T01:17:31.299Z\", \"info\": { ... }, \"metrics\": { ... } } ``` Subscribes to the metrics for a set of" }, { "data": "| Option | Description | |:--|:-| | --field | A list of fields to include in the output. If unspecified, all fields will return | | --interval | How frequently to poll for metrics data | | --q | The query set (see above) | Usage ``` $ nsolid-cli metrics ``` Consult the Metrics in Detail section for complete details on the metrics available. 
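Each interval produces one line-delimited JSON object per matching process, like the example shown below. If you want to consume the stream programmatically rather than on the command line, here is a minimal sketch; it assumes nsolid-cli is on your PATH and can already reach your Console (pass --auth as well if authentication is enabled).

```
// Spawn `nsolid-cli metrics` and parse its line-delimited JSON output.
const { spawn } = require('child_process')
const readline = require('readline')

const cli = spawn('nsolid-cli', ['metrics'])
const lines = readline.createInterface({ input: cli.stdout })

lines.on('line', (line) => {
  if (!line.trim()) return
  let metrics
  try {
    metrics = JSON.parse(line)
  } catch (err) {
    return console.error('unparseable line:', line)
  }
  // Pick out a few fields from the payload (see the example output below).
  console.log(`${metrics.app} ${metrics.id.slice(0, 7)} ` +
              `heapUsed=${metrics.heapUsed} cpuPercent=${metrics.cpuPercent}`)
})

cli.stderr.pipe(process.stderr)
cli.on('close', (code) => console.log('nsolid-cli exited with code', code))
```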
``` { \"time\": \"2017-12-04T01:23:16.163Z\", \"id\": \"5dd6f7a940bfc3633cc3ffc82332640d51ce5112\", \"app\": \"my-app-name\", \"hostname\": \"my-computer.local\", \"tags\": [ \"region:north\", \"zone:A\" ], \"activeHandles\": 750, \"activeRequests\": 0, \"blockInputOpCount\": 0, \"blockOutputOpCount\": 19424, \"cpuPercent\": 0.2666424917029181, \"cpuSpeed\": 2640, \"cpuSystemPercent\": 0.13332124585145905, \"cpuUserPercent\": 0.13332124585145905, \"ctxSwitchInvoluntaryCount\": 9988, \"ctxSwitchVoluntaryCount\": 1795924, \"dns99Ptile\": 0, \"dnsCount\": 0, \"dnsMedian\": 0, \"externalMem\": 734532, \"freeMem\": 251301888, \"gcCount\": 446, \"gcCpuPercent\": 0, \"gcDurUs99Ptile\": 409, \"gcDurUsMedian\": 527, \"gcForcedCount\": 0, \"gcFullCount\": 0, \"gcMajorCount\": 54, \"heapSizeLimit\": 1501560832, \"heapTotal\": 73269248, \"heapUsed\": 60332056, \"httpClient99Ptile\": 0, \"httpClientAbortCount\": 0, \"httpClientCount\": 0, \"httpClientMedian\": 0, \"httpServer99Ptile\": 0, \"httpServerAbortCount\": 0, \"httpServerCount\": 0, \"httpServerMedian\": 0, \"ipcReceivedCount\": 0, \"ipcSentCount\": 0, \"load15m\": 0.2333984375, \"load1m\": 0.2265625, \"load5m\": 0.28564453125, \"loopAvgTasks\": 0, \"loopEstimatedLag\": 0, \"loopIdlePercent\": 100, \"loopTotalCount\": 713, \"loopsPerSecond\": 0, \"pageFaultHardCount\": 0, \"pageFaultSoftCount\": 132206, \"rss\": 138301440, \"signalCount\": 0, \"swapCount\": 0, \"systemUptime\": 23006, \"title\": \"node\", \"totalAvailableSize\": 1437491784, \"totalHeapSizeExecutable\": 5242880, \"totalPhysicalSize\": 72644376, \"uptime\": 15000.85, \"user\": \"appuser\", \"vulns\": 1 } ``` Retrieve metrics records over a historical time range. Records match the metrics command output. | Option | Description | |:|:-| | --field | A list of fields to include in the output. If unspecified, all fields will return | | --q | The query set (see above) | | --start | The start of the time range | | --end | The end of the time range | | --series | The aggregation level of the data. Can be raw, 1m, or 1h. Defaults to raw | Usage ``` $ nsolid-cli metrics-historic --start=-5m --end=-1m ``` Returns a list of packages and modules available in the specified process. 
| Option | Description | |:|:-| | --id | The full agent id or prefix | Usage ``` $ nsolid-cli packages --id=[agent id] ``` Returns a JSON object with the following properties: | Property | Description | |:-|:--| | id | The agent id | | time | The timestamp of the message | | app | The NSOLID_APP value | | packages | An array of package objects with details about the package and its dependencies | | vulnerabilities | An object with vulnerability details | ``` { \"id\": \"a40827afbc3620e40887d6774249c321848d54f6\", \"time\": \"2017-12-04T01:26:12.514Z\", \"packages\": [ { \"name\": \"my-app-name\", \"version\": \"1.0.0\", \"path\": \"/var/my-app\", \"main\": \"app.js\", \"dependencies\": [ \"node_modules/debug\", \"node_modules/minimist\", \"node_modules/split\" ], \"dependents\": [], \"vulns\": [] }, { \"name\": \"debug\", \"version\": \"2.2.0\", \"path\": \"/var/my-app/node_modules/debug\", \"main\": \"./node.js\", \"dependencies\": [ \"../ms\" ], \"dependents\": [ \"../..\" ], \"vulns\": [ \"npm:debug:20170905\" ] }, { \"name\": \"minimist\", \"version\": \"1.2.0\", \"path\": \"/var/my-app/node_modules/minimist\", \"main\": \"index.js\", \"dependencies\": [], \"dependents\": [ \"../..\" ], \"vulns\": [] }, { \"name\": \"ms\", \"version\": \"0.7.1\", \"path\": \"/var/my-app/node_modules/ms\", \"main\": \"./index\", \"dependencies\": [], \"dependents\": [ \"../debug\" ], \"vulns\": [ \"npm:ms:20170412\" ] }, { \"name\": \"split\", \"version\": \"1.0.0\", \"path\": \"/var/my-app/node_modules/split\", \"dependencies\": [ \"../through\" ], \"dependents\": [ \"../..\" ], \"vulns\": [] }, { \"name\": \"through\", \"version\": \"2.3.8\", \"path\": \"/var/my-app/node_modules/through\", \"main\": \"index.js\", \"dependencies\": [], \"dependents\": [ \"../split\" ], \"vulns\": [] } ], \"vulnerabilities\": [ { \"package\": \"debug\", \"title\": \"Regular Expression Denial of Service (ReDoS)\", \"published\": \"2017-09-26T03:55:05.106Z\", \"credit\": [ \"Cristian-Alexandru Staicu\" ], \"id\": \"npm:debug:20170905\", \"ids\": { \"NSP\": 534, \"CWE\": [ \"CWE-400\" ], \"CVE\": [], \"ALTERNATIVE\": [ \"SNYK-JS-DEBUG-10762\" ] }, \"vulnerable\": \"<2.6.9 || >=3.0.0 <3.1.0\", \"severity\": \"low\", \"description\": \" ... \", \"nsolidMetaData\": { \"hidden\": false }, \"packages\": [ \"/var/my-app/node_modules/debug\" ], \"depsChains\": [ [ \"nsolid-dev-demo@1.0.0\", \"debug@2.2.0\" ] ], \"topLevel\": 1 }, { \"package\": \"ms\", \"title\": \"Regular Expression Denial of Service (ReDoS)\", \"published\":" }, { "data": "\"credit\": [ \"Snyk Security Research Team\" ], \"id\": \"npm:ms:20170412\", \"ids\": { \"CWE\": [ \"CWE-400\" ], \"CVE\": [], \"ALTERNATIVE\": [ \"SNYK-JS-MS-10509\" ] }, \"vulnerable\": \"<2.0.0\", \"severity\": \"low\", \"description\": \" ... \", \"nsolidMetaData\": { \"hidden\": false }, \"packages\": [ \"/var/my-app/node_modules/ms\" ], \"depsChains\": [ [ \"nsolid-dev-demo@1.0.0\", \"debug@2.2.0\", \"ms@0.7.1\" ] ], \"topLevel\": 1 } ] } ``` Generates a V8 CPU profile of the specified process. | Option | Description | |:--|:| | --id | The agent id (required) | | --duration | Duration of profile in seconds. Default is 10 minutes | Usage ``` $ nsolid-cli profile --id=[agent id] > my.cpuprofile ``` Once the profile file has been created, it can be opened using Chromes Development Tools CPU Profile Debugging Tool. Note: To load the file, Chrome requires that the generated file have the extension .cpuprofile. Generates a V8 heap snapshot of the specified process. 
| Option | Description | |:|:| | --id | The agent id (required) | Usage ``` $ nsolid-cli snapshot --id=[agent id] > my.heapsnapshot ``` Once the snapshot file has been created, it can be opened using Chromes Development Tools heap snapshot browser Note: To load the file, Chrome requires that the generated file have the extension .heapsnapshot. Lists the time to reach certain process lifecycle startup phases from initial process execution. | Option | Description | |:|:-| | --id | The full agent id or prefix | Usage ``` $ nsolid-cli startup-times ``` Returns a JSON stream including the following properties: | Property | Description | |:-|:| | time | Milliseconds since epoch time message was sent | | id | The agent id | | app | The NSOLID_APP value | | hostname | The host the process is running on | | initialized_node | An array of two integers. The time it took to initialize the Node internals, reported as [seconds, nanoseconds] | | initialized_v8 | An array of two integers. The time it took to initialize the V8 engine, reported as [seconds, nanoseconds] | | loaded_environment | An array of two integers. The time it took to complete all initialization, which includes running some of Node's internal JavaScript code, and your main module's top-level code, reported as [seconds, nanoseconds] | ``` { \"loaded_environment\": [ 0, 322526338 ], \"initialized_node\": [ 0, 120919 ], \"initialized_v8\": [ 0, 240910 ], \"id\": \"5dd6f7a940bfc3633cc3ffc82332640d51ce5112\", \"time\": \"2017-12-04T01:32:30.042Z\", \"tags\": [ \"region:north\", \"zone:A\" ], \"app\": \"my-app-name\", \"hostname\": \"my-computer.local\" } ``` Additional timers can be added to your application with custom lifecycle events. Returns known security vulnerabilities for all processes. Usage ``` $ nsolid-cli vulnerabilities ``` Returns a JSON object representing all current known vulnerabilities. | Property | Description | |:-|:-| | time | Message timestamp | | vulnerabilities | An array of vulnerability objects | ``` { \"time\": \"2017-12-04T01:34:54.805Z\", \"vulnerabilities\": [ { \"report\": { \"package\": \"ms\", \"title\": \"Regular Expression Denial of Service (DoS)\", \"published\": \"2015-11-06T02:09:36.187Z\", \"credit\": [ \"Adam Baldwin\" ], \"id\": \"npm:ms:20151024\", \"ids\": { \"CWE\": [ \"CWE-400\" ], \"CVE\": [ \"CVE-2015-8315\" ], \"NSP\": 46, \"ALTERNATIVE\": [ \"SNYK-JS-MS-10064\" ] }, \"vulnerable\": \"<=0.7.0\", \"severity\": \"medium\", \"description\": \" ... \", \"nsolidMetaData\": { \"hidden\": false } }, \"processes\": [ { \"id\": \"e1c17bc36d7a9cc76ead259ace0307d4b9705646\", \"app\": \"my-app-name\", \"tags\": [ \"region:north\", \"zone:A\" ], \"hostname\": \"my-computer.local\", \"topLevel\": 1, \"depChains\": [ [ \"ms@0.7.0\" ] ] } ... ] }, ... ] } ``` Subscribe to matching agent data on an interval. | Option | Description | |:--|:--| | --field | List of fields to" }, { "data": "All fields returned if not specified | | --interval | Number of seconds before returning next current object (default: 1) | | --q | The query filter options (see above) | Usage ``` $ nsolid-cli query --q id=5 ``` Returns newline delimited JSON objects with framing objects. 
| Framing Type | Description | |:|:-| | start | The query stream start frame | | interval-start | The start of the records for this interval | | agent-enter | An agent entry | | summary | Summary data about the entire (unfiltered) data set | | interval-end | The last record for this interval frame | ``` {\"time\":\"2017-12-04T01:42:29.502Z\",\"type\":\"start\"} {\"time\":\"2017-12-04T01:42:29.502Z\",\"type\":\"interval-start\"} {\"time\":\"2017-12-04T01:42:29.502Z\",\"type\":\"agent-enter\",\"id\":\"5dd6f7a940bfc3633cc3ffc82332640d51ce5112\",\"info\":{ ... },\"metrics\":{ ... } {\"time\":\"2017-12-04T01:42:29.502Z\",\"type\":\"summary\",\"totalAgents\":9} {\"time\":\"2017-12-04T01:42:29.502Z\",\"type\":\"interval-end\"} ``` Pull a summary of all connected N|Solid processes. Reports counts of matched properties or resources. | Option | Description | |:|:| | --q | The query options (see above) | Usage ``` $ nsolid-cli summary ``` ``` { \"time\": \"2017-12-04T02:12:02.506Z\", \"processes\": 9, \"apps\": { \"my-app-name\": 5, \"api-server\": 1, \"web\": 1, \"batch-service\": 2 }, \"tags\": { \"region:north\": 4, \"zone:A\": 5, \"region:south\": 5, \"zone:B\": 2, \"zone:C\": 2 }, \"rss\": 693153792, \"cpu\": 3.4667812728278924, \"node\": { \"8.9.1\": 9 }, \"nsolid\": { \"3.0.0\": 9 }, \"packages\": 552, \"vulnerabilities\": { \"npm:ms:20170412\": 4, \"npm:qs:20140806\": 2, }, \"hiddenVulnerabilities\": {} } ``` Pull a JSON object with one or many settings configuration to backup or import later. The only settings available to export are: integrations, savedViews and notifications. | Option | Description | |:|:--| | --item | Could be one or a list of items to export (see example) | Usage ``` $ nsolid-cli export-settings --item integrations,savedViews,notifications ``` ``` { \"_metadata\": { \"_timestamp\": \"\", }, \"integrations\": {}, \"notifications\": {}, \"savedViews\": {} } ``` Apply a previously backup/exported settings. The only settings available to import are: integrations, savedViews and notifications. | Option | Description | |:|:--| | --item | Could be one or a list of items to export (see example) | | --action | Append to or clean previous settings (see example) | Usage ``` $ nsolid-cli import-settings --item integrations --action clean --attach backup.nsconfig ``` NodeSource has developed N|Solid to meet the needs of enterprise production environments. Built upon the experience and insights of several core contributors to the Node.js ecosystem, N|Solid provides live instrumentation of your production system's health, and stability with no changes to your application code. In addition, N|Solid offers the ability to control access for your critical applications. The N|Solid Console provides centralized access to all of your applications, and an aggregated view of each application's processes. This holistic view simplifies triage at runtime, makes it easier to find outliers, and takes advantage of advanced diagnostic tools directly from your browser. Read more about the Console The combination of the Runtime and Console make N|Solid an invaluable tool for gaining insight into the behavior of your live production applications. Backed by NodeSource's 24x7 support, N|Solid adds enterprise-grade diagnostic and security features, delivered on a release cycle that is aligned with Node.js Long Term Support (LTS) releases. This provides a stable production platform for all of your Node applications. 
N|Solid SaaS is a hosted monitoring database and introspection console designed to make the ideal use of the diagnostics and monitoring features of the N|Solid Runtime. Only two things are required to start inspecting your Node.js applications in N|Solid SaaS: The N|Solid Runtime is the Node.js runtime with additional V8 and core diagnostics hooks that can communicate with an external monitoring database. The minimum configuration value to begin using N|Solid Pro SaaS is the NSOLID_SAAS value. This value enables the agent thread and metrics collection and configures it for your account and data endpoint. The N|Solid Runtime will check a few places for configuration at startup for this and other values to customize its" }, { "data": "These options work best for most: Adding a small nsolid section to your package.json with the nsolid_saas value: ``` { ..., \"nsolid\": { \"nsolid_saas\": \"...\" }, ... } ``` From the command-line, you can set the NSOLID_SAAS environment variable for simple configuration. Many users option to use this method to configure multiple-tier environments by adding it to existing configuration files in their deployment setups. ``` NSOLID_SAAS=\"...\" nsolid index.js ``` Read more about configuring N|Solid We provide a large number of ways to download or install the N|Solid Runtime to support you through your entire development cycle. The nsolid_quickstart project is designed to be run via npx to allow you to try N|Solid without fully installing it to your system. Note that it will download the N|Solid Runtime upon execution if you do not already have it cached. Keep this in mind if you have network or time constraints. ``` $ npx nsolid-quickstart --saas=\"...\" --exec index.js Need to install the following packages: nsolid-quickstart@2.3.0 Ok to proceed? (y) y Checking N|Solid versions metadata Downloading N|Solid Bundle from: https://s3-us-west-2.amazonaws.com/nodesource-public-downloads/20.10.0-ns5.0.2/artifacts/binaries/nsolid-v5.0.2-iron-linux-x64.tar.gz https://s3-us-west-2.amazonaws.com/nodesource-public-downloads/20.10.0-ns5.0.2/artifacts/binaries/nsolid-v5.0.2-iron-linux-x64.tar.gz downloaded to: /tmp/nsolid-v5.0.2-iron-linux-x64.tar.gz Extracting /tmp/nsolid-v5.0.2-iron-linux-x64.tar.gz and installing Using N|Solid bundle 5.0.2 from /home/user/.nsolid-bundle/5.0.2/iron Executing the specified script index.js at /home/user/ns Here is your program output - ``` We suggest installing N|Solid to use wherever you run your application in development, as it also makes a great development tool, allowing you to compare (for example) profile data from a current production system to a development system from the same console. ``` $ sudo apt-get install nsolid-iron Reading package lists... Done Building dependency tree... Done Reading state information... Done nsolid-iron is already the newest version (5.0.2-deb-nodesource-systemd1). 0 upgraded, 0 newly installed, 0 to remove and 47 not upgraded. $ NSOLID_SAAS=\"...\" nsolid index.js ``` What you'll most likely use to put N|Solid into your production environments. The chances are very good that not only we have the same build artifact you use to get Node.js on your production server, but we also provide it as the world's most popular provider of Node.js distributions. It should be as easy as replacing the Node.js image or package you use for the equivalent containing N|Solid. 
Read more about Installation

In order to communicate with the N|Solid SaaS endpoint, the systems running the N|Solid Runtime will need to be able to make outgoing connections on ports 9001, 32001, and 32002.

Concerned about your data? So are we! Your data is encrypted over-the-wire, and the metrics collected do not contain PII or other sensitive information. For memory debugging, we strongly suggest using our NSOLID_REDACT_SNAPSHOTS feature if you have sensitive data and want to capture Heap Snapshots. This flag instructs the runtime to blank out all string data inside of Heap Snapshots prior to sending it to the N|Solid Console.

Read more about Heap Snapshots

The N|Solid Console provides valuable insight into clusters of N|Solid processes running in a variety of configurations. Processes are grouped by their associated application, making it simple to find outliers in memory and CPU utilization. Navigating to an individual process shows various metrics, and provides the ability to take heap snapshots and generate CPU profiles.

Once a user has launched the N|Solid Console and connected their processes to it, they will land on the console's overview screen. This screen delivers aggregated application metrics, including: This list can be filtered and sorted by name, number of processes, and number of vulnerabilities. When selecting a specific application by clicking the application's name, users are redirected to the Application Summary. When Number of Processes is clicked, users are redirected to the scatterplot filtered by the selected application's processes.

This screen delivers key information about the selected application, including metrics, the number of vulnerabilities, number of processes, number of Worker Threads, number of events, and the application's status itself. More insightful application information is presented in this view, with a fully dedicated metrics view, modules view, assets view, and events view; all of this is shown application-wide.

In the application metrics summary view, users can see a short graphical summary of how the application is behaving. A key feature of the application's metrics view is the ability to filter the averaged metrics to processes with specific tags; users can select and unselect filters as needed, and the graphs will be updated to include only processes with the desired tags. Users are able to inspect any metric considered insightful per application; those metrics can be numeric or a graph. When a metric is clicked, it is zoomed in.

In the application's modules view, the vulnerabilities are filtered application-wide: users can see the vulnerabilities not just for individual processes but for complete applications, which is a better approach for taking real advantage of NCM's power.

In the application's assets view, all the CPU profiles and Heap Snapshots of the application are listed; this makes it easier for users to diagnose per application, not just per process.

In the application's events view, users can see all the runtime events, like security events, lifecycle events, system events, performance events, and asset events. Learn more about event profiling here.

The Processes view provides a visual overview of N|Solid applications monitored by the Console. By default, the N|Solid Console will display all processes. Using Filtering and Saved Views, you may filter processes by a wide variety of attributes, e.g. name, tags, performance metrics, etc. Saved views can also be created to group sets of processes together.
| Section | Description | |:|:--| | View | The View dropdown allows you to navigate between different Saved Views and their respective Scatterplots. See Saved Views for more information on creating and managing Saved Views. The default view is All Processes | | Filter | The filter allows you to dynamically query for processes or create a new saved view | | Scatterplot | The scatterplot is a graph where processes matching the currently-selected saved view will be shown. Graph axes can be configured differently for each saved view, and process dots animate as the values for those attributes change | | Processes List | The processes list is a textual representation of the processes graphed on the scatterplot | The Scatterplot is an animated graph that provides an overview of your applications' performance across all or a subset of connected processes, when an specific process has at least one active worker thread, the process will be highlighted. By default, the Y-axis plots the memory (Heap Used), and the X-axis plots the % CPU utilized for each process. You can configure these axes to measure different parameters. Any numeric metric may be used to plot either axis. NodeSource recommends the following metrics as being the most useful: | Metric | Description | |:|:| | 5 Minute Load Average | The host system's five-minute load average. | | Active Handles | The number of active long-term resources held by the process. | | Application Name | The user-specified application name as set in package.json or the NSOLID_APP environment variable. | | CPU Used (%) | The percent CPU used by the process. | | Event Loop Estimated Lag | The estimated amount of time a I/O response may have to wait in the process, in" }, { "data": "| | Event Loop Idle Percent | The percent time that the process is waiting (idle) for I/O or timers. | | GC Count | The total number of garbage collections done by the process. | | GC Median Duration | The process's median duration of garbage collections, in microseconds. | | Heap Total | The process's total allocated JavaScript heap size, in bytes. | | Heap Used | The process's total used JavaScript heap size, in bytes. | | Hostname | The host system's name. | | NODEENV environment variable | The user-specified NODEENV environment variable. | | Process Uptime | The process's uptime, in seconds. | | Resident Set Size | The resident set size (total memory) used by the process, in bytes. | | Tag | The user-specified tags as set by the NSOLID_TAGS environment variable. | | Vulnerabilities Found | The number of known vulnerabilities found in the modules of the process. | Additionally, the Y-axis may also plot any textual field data that is provided by the N|Solid agents. A full list of metrics available to N|Solid can be found in the Metrics in Detail section (some of the listed metrics may not be available in the Scatterplot). There are three options for axis scaling available: The Processes List on the right side of the page provides a textual representation of the processes on the scatterplot. Click and drag inside the scatterplot to select a subset of processes. The Processes List will update to show only these processes. Click Apply as Filter to view the selected processes as a filter, which you can then save as a Saved View. Click Clear or click on the graph to clear the selection and show all processes within the view. Sort selected processes in the Processes List using the Sort dropdown. Sorting by App Name or Hostname will group like processes together. 
Sorting by Vulnerabilities will group vulnerable and non-vulnerable processes together respectively. Sorting by CPU, Memory, or Uptime will sort the processes numerically by these metrics. For sorting methods that group processes (App Name, Hostname, and Vulnerabilities), clicking on the label for a group of processes will select, or narrow selection, to that group of processes. Clicking the process ID will take you to the Process Detail view. Hovering your mouse over a process in the Processes List will highlight that process in the graph for easy identification among other processes. Hovering on a process when processes are selected also reveals an X icon that may be clicked to remove a single process from the selection. Hovering over a process in the Processes List also shows a target icon that enables process targeting. Process targeting causes one minute of the process's recent historic metrics to be visualized as a trail following the process around the graph. One process can be targeted at a time. Clicking the target icon a second time will disable process targeting. The Process Detail view shows detailed information relating to a specific process. At the top of the view it displays data that N|Solid receives from the process, including the pid, host, platform, and CPU configuration. The left side of this view contains the threads (worker threads) of the process, the thread 0 is the main thread and the rest of them are worker threads, and the right side of this page also contains controls for manually taking CPU profiles and heap snapshots of this" }, { "data": "The available threads can also have a name to make them easier to identify, learn how to set thread name You can compare threads by selecting thread IDs on the Thread List. Please note that maximum of 2 threads can be selected for comparison. The legend of the metrics chart will be automatically updated including each thread ID based on the threads you selected for comparision. Note: There are some metrics marked as process-wide, which are not thread-specific. The Metrics tab contains several charts which visualize important performance metrics which N|Solid gets from your processes. | Section | Description | |:--|:| | ELU (Event loop utilization) | Learn more at: Event loop utilization blogpost | | CPU Used | CPU usage for this process | | Memory | Heap total, heap used, and resident set size. Heap total is total size of the V8 heap, which includes everything in the heap, including free space which has been allocated by V8 but is unused by JavaScript objects. Heap used is the size of of the V8 heap occupied directly by JavaScript objects created by the application, as well as V8 internal objects created on behalf of it. Resident set size (RSS) is the actual memory footprint of the process over time. It does not include memory that belongs to the process that has been swapped out to disk | | Host Load | The average system load over the last one, five, and fifteen minutes | | Event Loop | Two series that reflect the health of the event loop. The Idle % shows how often the event loop is waiting for work. 
The Lag data shows the average time in milliseconds for each tick of the event loop | | Internal Activity | Two series which include process-lifetime counts of the total number of garbage collections performed, and the number of event loop turns | | Host Memory | The amount of host memory in use by all processes on this system | | Internal Activity Rates | The Internal Activity graph, but instead of lifetime totals it shows how many Event Loop iterations and Garbage Collections run per second | | HTTP Median | The median timing of HTTP/HTTPS Client or Server requests and DNS requests made by the application. Each is measured in milliseconds | | HTTP 99th Percentile | The 99th Percentile timings of HTTP/HTTPS Client and Server requests and DNS requests made by the application, measured in milliseconds | | HTTP Totals | The total counts of HTTP/HTTPS Client and Server requests and DNS requests for the lifetime of the application | | HTTP Requests Per Sec | The number of HTTP/HTTPS Client and Server requests and DNS requests made by the application per second | The Metrics Search Bar allows you to show or hide metrics, which supports auto-completion for all of the metrics defined in the Metrics in Detail. To show more metrics which are not on the metrics tab by default, search metrics using the auto-completion and select the metrics from the search results. You can also set multiple filters on the search bar. Note: Please refer to Metrics in Detail to see all available metrics. The chart of the metrics that you selected will be shown below the ELU metrics chart. If you want to hide metrics, click x icon on the metrics label on the search bar. The Modules tab lists all modules this process is loading and using at runtime. More importantly, this tab prominently features modules that include known security vulnerabilities. The Assets tab lists CPU profile and heap snapshots related to this process. Click on the asset to view" }, { "data": "This guide will help you try N|Solid on Linux, macOS or Windows 10 locally. It provides a quick, zero-config way to try NSolid or install all of the components onto your development machine on macOS. PLEASE NOTE: THIS WAY OF RUNNING NSOLID IS MEANT FOR LOCAL TESTING ONLY! For production deployments please refer to the documentation below. For instructions on installing the individual components for a production cluster on Linux or Windows 10, please refer to the the installation guide. Using one simple npx command, you can try NSolid by using a self-guided demo, run and diagnose your own application or execute an NPM task you can observe locally. To try N|Solid visit accounts.nodesource.com and sign up for a free account. Setting up an account is free and requires no credit card information. Select the Try NSolid option, and follow the prompts. Using the 'npx nsolid-quickstart' command loads N|Solid into your local cache where it will be executed without actually having been installed. This setup method was designed for frictionless ease of use. npx is a npm package runner (x probably stands for eXecute). The typical use is to download and run a package temporarily or for trials. It is not recommended to use this method in production. We provided separate setup instructions for on-prem here and cloud deployments here. To run the demo simply execute: ``` $ npx nsolid-quickstart --demo ``` The command will launch a self-guided demo that runs a ready-made simulation to introduce you to NSolids performance monitoring and diagnostic features. 
It will open a new browser window or tab pointing to the default N|Solid Console URL: http://localhost:6753/. There you will be asked to authenticate with your accounts.nodesource.com credentials before the console drops you into the self-guided demo experience.

Because this runs a simulated application, the demo uses zero infrastructure resources and does not interfere with any of the applications you may connect to the console. As such, the demo can also be accessed in on-prem and cloud deployments via the Start Demo and Guided Demo buttons.

It is possible to use N|Solid Quickstart to launch your own applications. There are two options: execute a JavaScript file or execute an NPM task.

To launch your application from a JavaScript file, just execute:

```
$ npx nsolid-quickstart --exec index.js
```

The above command is the equivalent of running node index.js. Just replace index.js with the right name for your script.

To execute your application from an NPM task, execute the following command:

```
$ npx nsolid-quickstart --npm dev
```

This is the equivalent of running npm run dev; just replace dev with the right name for your NPM task.

There are many options included, like --lts, which allows you to switch to a different Node.js LTS version (by default gallium is used). To see all available options, execute:

```
$ npx nsolid-quickstart
```

The following section covers a series of questions users have asked NodeSource in the past:

NSolid supports Role Based Access Control. The role assigned to your user profile may be configured to limit your access to a sub-set of NSolid features. Contact your organization's account administrator to see which permissions you may be missing. To learn how NodeSource's RBAC works and which permissions are available, visit the RBAC section in the accounts docs.

To secure the NSolid Console against unauthorized access, every console is linked to an account. The organization's account administrator can invite team members to the organization's account, thus authorizing team members to log into their organization's NSolid Console. You will encounter the below screen if you tried to access a console whose organization you are not part of (anymore). To regain access, we recommend you contact your org's administrator directly.

If you deployed the NSolid Console yourself and can't access the console, we recommend you check your console's license key. If the license key belongs to an organization that does not consider your user credentials (email and password) part of its org, you will not be permitted to use the NSolid Console. In that case, please contact the Console's administrator via the contact details provided on the Access Not Allowed screen, or contact NodeSource support for assistance.

Whether you are an Org Admin or have been invited to an existing organization, you may encounter the below screen when attempting to log into the NSolid Console: You may be seeing this screen for one of two reasons: To renew/upgrade an enterprise license, please contact sales@nodesource.com. To renew an Advanced-tier license, visit accounts.nodesource.com and update your payment details.

This is expected behavior. To monitor Node applications using N|Solid, you must set the following environment variable when running your Node.js application. This tells the NSolid runtime where to report your metrics to. For a step-by-step guide on setting your environment variable, please refer to the following links:

Yes.
Utilizing typescript and/or transpilers with N|Solid makes interpreting CPU profiles difficult unless the user is deeply familiar with the code. The integration of Source Maps provides a translation layer that provides a reference between the compiled source code and source code. When compiling code, a Source Map is currently being generated. The integration of Source Maps would provide a translation layer that provides a reference between the compiled source code and source code. In an effort to address concrete customer pain-points NodeSource has introduced SourceMap Support to the N|Solid CPU profiler. This feature continues to evolve with the specific requirements of our customers. The features UX can be viewed in the docs here. This could happen if your firewall is blocking our services and API. Please make sure that services.nodesource.io and api.nodesource.com are listed in your firewall's whitelist. N|Solid Prod On-prem is available for Linux, Windows, and macOS. We provide the same support available for N|Solid Runtime found here. Packages and installation guides are delivered privately to customers. Please contact us to get N|Solid Pro On-Prem. For people who need to use N|Solid in a Dockerized environment, NodeSource has developed the N|Solid Docker images. These images are built with the enterprise customer in mind, developing a versatile set of independently scalable Docker images that work together to provide a unified N|Solid experience across all of your Dockerized infrastructure. In addition to being friendly to enterprise operations teams, the N|Solid Docker Images have been designed to be accessible to developers. For those who are already using Docker, these images provide an easy way to get up and running with the N|Solid console. With the use of docker-compose, you can now get the full N|Solid Suite on your local machine with a single command. We provide 3 separate Docker Images for N|Solid, allowing each component of the platform to be scaled and deployed independently. | Docker Image | Description | |:--|:| | nodesource/nsolid | The base image that your Docker images should build from. It provides the N|Solid runtime and agent and needs to be properly configured to register its agent with the N|Solid Console service. Releases | | nodesource/nsolid-console | Contains the N|Solid console, a web-based interface for monitoring Node.js applications. You will need to bind the internal port 6753 to your" }, { "data": "Releases | | nodesource/nsolid-cli | Provides an easy way to query statistics from your Node.js processes. It is meant to be run directly from the command line and is optional. Releases | All of the images are built on the official ubuntu:bionic image maintained by the Docker project. The latest tag defaults to the latest Node.js v20.x Iron LTS release. For more details please refer to the docker-nsolid github repository. First, follow the steps for installing Docker for your operating system. Although not required, it is recommended that you use docker-compose when first getting started. It simplifies managing these Docker images when running them locally. The docker-compose file below provides an example of the N|Solid platform configuration. It is also useful as a starting point for your docker setup. The following docker-compose file doesn't include any N|Solid applications yet. We will explain how to add your application below. Use a valid path in the volumes configuration, or remove it if persistence is not important to you. 
``` version: \"2\" services: console: image: nodesource/nsolid-console container_name: nsolid.console ports: 9001:9001 9002:9002 9003:9003 6753:6753 environment: NSOLIDCONSOLELICENSE_KEY=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx volumes: /path/to/persistent/console:/var/lib/nsolid/console networks: nsolid networks: nsolid: ``` Note: The NSOLIDCONSOLELICENSE_KEY should be a valid license key not a JWT license token, you can see more information about license tokens here. To download the images using docker-compose, open a terminal in the directory where nsolid.yml is kept, and run: ``` $ docker-compose -f nsolid.yml pull ``` This will download all of the images needed to run the N|Solid console locally. To download the images using docker, execute the following commands: ``` $ docker pull nodesource/nsolid-console $ docker pull nodesource/nsolid-cli ``` To bring up the N|Solid Console, run the following command: ``` $ docker-compose -f nsolid.yml up ``` By default, this will bring up the N|Solid Console available on 127.0.0.1:6753. Though using docker-compose is our recommended approach for local development, the N|Solid console can be started with docker: ``` $ docker run -d -p 9001-9003:9001-9003 -p 6743:6753 \\ -v /path/to/persistent/console:/var/lib/nsolid/console --name console \\ --network docker_nsolid \\ nodesource/nsolid-console ``` This will bring up the N|Solid console available on localhost:6753. Make sure you have a Docker network to link your containers, you can create one with: ``` $ docker network create docker_nsolid ``` Visit the docker docs) for more information on Docker container networking. Create a file called server.js: ``` // server.js var http = require('http'); var hostname = require('os').hostname(); var port = process.env.PORT || 8888; var server = http.createServer(function handleRequest (request, response) { response.end('[' + hostname + '] Serving requests from myapp. Request URL:' + request.url); }); server.listen(port, function () { console.log('Server listening on port', port); }); ``` Create a file called Dockerfile: ``` FROM nodesource/nsolid COPY server.js /server.js EXPOSE 8888 CMD [\"nsolid\", \"/server.js\"] ``` Build docker image: ``` $ docker build -t example . ``` Create the file docker-compose.yml in the directory along-side nsolid.yml: ``` version: \"2\" services: example: image: example container_name: nsolid.example ports: 8888:8888 # Port your application exposes environment: NSOLID_APPNAME=example NSOLID_COMMAND=console:9001 NSOLID_DATA=console:9002 NSOLID_BULK=console:9003 networks: nsolid networks: nsolid: ``` For the complete documentation on defining a service with docker-compose.yml, refer to the Docker projects documentation page: https://docs.docker.com/compose/overview/. At this point, you are ready to bring up your application using docker-compose: ``` $ docker-compose -f nsolid.yml -f docker-compose.yml up ``` Start your service: ``` $ docker run -d --name example -e 'NSOLIDAPPNAME=example' -e 'NSOLIDCOMMAND=console:9001' -e 'NSOLIDDATA=console:9002' -e 'NSOLIDBULK=console:9003' --network docker_nsolid example ``` If you are new to Docker, follow the steps in our blog post to get your application into a Docker" }, { "data": "Your Dockerfile should begin with the following line: ``` FROM nodesource/nsolid ``` Congratulations, you are now up and running with N|Solid and Docker! These are the supported command line arguments if N|Solid Console is started explicitly. 
| Parameter | Description | |:|:-| | --config=config.toml | The configuration file for Console. The default config will be used if none is specified | | --relay | Run Console in Relay mode. This allows you to place an instance in a segment of your network that can aid connectivity and uptime | See Relay Mode for more information. N|Solid Console can be configured via a configuration file if the --config argument is specified. This file is formatted as TOML (similar to ini file format). A sample file is available in the file config-sample.toml in the root directory of the nsolid-console server. The contents of the file are also listed below. To run with a customized configuration, copy the file and edit it so that it contains your customized values. You can omit keys from any section which you don't want to override, and omit entire sections if no keys in that section are overridden. Alternatively, or in addition, you can set environment variables for every configuration key and value. The names of the environment variables are listed with the configuration keys and values below. For more information on the TOML language used here, please see: https://github.com/toml-lang/toml ``` logLevel = \"info\" [license] key = \"\" [auth] requireLogin = true adminToken = \"\" [web] proto = \"http\" server = \"0.0.0.0:6753\" # 0.0.0.0 allows access on all network interfaces httpProxy = \"\" httpsProxy = \"\" [web.https] key = \"\" cert = \"\" secureOptions = \"\" ciphers = \"\" [sockets] commandBindAddr = \"tcp://*:9001\" dataBindAddr = \"tcp://*:9002\" bulkBindAddr = \"tcp://*:9003\" publicKey = \"^kvy<i^qI<r{=ZDrfK4K<#NtqY+zaH:ksm/YGE6I\" privateKey = \"2).NRO5d[JbEFli7F@hdvE1(Fv?B6iIAn>NcLLDx\" HWM = 0 bulkHWM = 0 saas = \"\" commandRemoteAddr = \"\" dataRemoteAddr = \"\" bulkRemoteAddr = \"\" remotePublicKey = \"\" [relay] maxBufferSizeMb = -1 flushInterval = 1 cleanupSizeMb = 100 logSizeInterval = 10 [data] dir = \"~/.nsolid-console/data\" retentionDays = 31 oldDataThresholdMs = 10000 backfillTimeoutMs = 30000 dbname = 'database' [influxdb] url = \"\" user = \"nsolid\" password = \"L+T;95cTBC}~jPnj\" org = \"nsolid\" token = \"nsolid-token\" defaultBucket = \"agentData\" uiDisabled = true retentionPeriodSeconds = 3600 maxConcurrency = 2 [[influxdb.policies]] name = \"agentData\" retentionHours = 1 shardRetentionHours = 1 startupRetentionHours = 24 startupShardRetentionHours = 24 default = true [[influxdb.policies]] name = \"appAverageData\" retentionHours = 1 shardRetentionHours = 1 startupRetentionHours = 24 [[influxdb.policies]] name = \"oneminutedaily\" retentionHours = 24 shardRetentionHours = 1 startupRetentionHours = 24 [[influxdb.policies]] name = \"onehourmonthly\" retentionDays = 31 shardRetentionDays = 7 useSettingsDuration = true [[influxdb.policies]] name = \"rp_events\" retentionDays = 365 shardRetentionDays = 7 [notification] consoleURL = \"http://localhost:6753\" stackFrames = 10 [view] interval = 5000 [anomaly] timeout = 60000 [logs] influx = \"~/.nsolid-console/influxdb.log\" [vulnerabilities] refreshMinutes = 30 [assets] cleanupMinutes = 5 maxAssets = 500 maxAssetsMb = 100 summaryTimeout = 0 summaryLimit = true ``` The variables described below should only be necessary if a proxy is required in front of N|Solid Console. If this is not needed, ignore this section as the variables will be configured by the N|Solid agent via the COMMAND interface. | Environment Variable | Description | |:--|:| | NSOLIDDATA | This is the route to the Console data port. 
It should be formatted as \"host:port\", and follows the same conventions as NSOLIDCOMMAND. If this argument is not specified, N|Solid will attempt to discover this port from Console via the COMMAND interface | | NSOLIDBULK | This is the route to the Console bulk port. It should be formatted as \"host:port\", and follows the same conventions as" }, { "data": "If this argument is not specified, N|Solid will attempt to discover this port from Console via the COMMAND interface | | NSOLID_PUBKEY | This is Console's public key. If a custom key is not configured, the default keypair that ships with N|Solid will be used | If you already maintain an instance or cluster of InfluxDB enterprise or want to use Influx Cloud and want N|Solid Console to use your database to store its own metrics, you may do so using the NSOLIDCONSOLEINFLUXDBURL, NSOLIDCONSOLEINFLUXDBUSER and NSOLIDCONSOLEINFLUXDB_PASSWORD environment variables or set the values at the corresponding configuration file section. This should point to the http(s) listener for your InfluxDB and include the correct protocol and auth credentials if needed. This will look similar to ``` NSOLIDCONSOLEINFLUXDB_URL=http://my-influx-host:8086 NSOLIDCONSOLE-INFLUXDBUSER=myuser NSOLIDCONSOLE-INFLUXDBPASSWORD=secret ``` If errors occur reading or writing to this instance, a notification banner should appear in your console as well as specific error messages in the N|Solid Console log. N|Solid Console will create and manage its own database and retention policies in the InfluxDB and should be configured using the standard configuration options. Note: The supported InfluxDB version is v2.x, the InfluxDB instance must be a fresh instance with no initial user, organization and bucket defined; otherwise, N|Solid Console won't set up its standard options or even work. Historical metrics, displayed in the line-chart UI elements, are trimmed to the last 15 minutes by default. For more information about configuration options for N|Solid Console's data processing functions, see networking. In order to access N|Solid you must have a valid NodeSource account. If you have not connected your Console to your account, when you try and access the console you will be redirected to the welcome screen. Using this form to log into your NodeSource account will automatically configure your Console server and allow you to associate it with your account or organization. Once connected, you should be immediately redirected to the N|Solid Console. If you want your Console to be associated with a different account or organization, click Settings on the right side of the menu at the top of the page. If you are an administrator for the linked organization, you will see a \"Reset License\" button which can be used to restore the Console to its initial login state. If you do not have administrative access to the registered license and need to reset the Console's registration state, you can start the Console with the NSOLIDCONSOLEAUTHREQUIRELOGIN=false environment variable to disable the authentication requirement and allow access to the Reset Console button. Remember to restart without this setting if authentication is still desired. Beginning with N|Solid 3.4.0, users will be asked to authenticate before they can access the console. If the user is not the owner of the account or a member of the associated organization, they will not be permitted to access the console. 
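As a concrete sketch of the configuration-file route described above, the external InfluxDB settings can also be supplied through the [influxdb] section of the Console configuration file, using the same keys shown in the sample configuration earlier. The host, credentials, and token below are placeholder values for illustration only:
```
# Sketch: point N|Solid Console at an external InfluxDB v2 instance.
# Placeholder values; any key omitted here keeps the default from the sample config.
[influxdb]
url = "http://my-influx-host:8086"   # http(s) listener of your InfluxDB instance
user = "myuser"
password = "secret"
org = "nsolid"
token = "my-influx-token"
```
Whichever route you choose (environment variables or this configuration section), remember the note above: the instance must be a fresh InfluxDB v2 install with no initial user, organization, or bucket defined.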
To allow additional users to access your console, add them to the NodeSource Accounts organization which is associated with this Console. If you do not wish to enforce user authentication, it can be disabled by setting the NSOLIDCONSOLEAUTHREQUIRELOGIN=false environment variable or its associated config file setting. This authentication also impacts nsolid-cli. If you require use of the CLI, you should set NSOLIDCONSOLEAUTHADMINTOKEN or the corresponding config file setting to a secure value and use it with the --auth argument. This grants administrative access to the console and should be used with discretion. See Command Line Interface for instructions. You must have a valid license to use the N|Solid Runtime either with the Console or communicating directly to your StatsD" }, { "data": "If you do connect to your N|Solid Console, the license will be configured automatically for you. If you prefer to use your runtime without connecting to console, you must provide your license in the form of a signed token. This can be obtained from a running licensed N|Solid Console by running nsolid-cli license-token or by querying http://console:6753/api/v3/license-token directly. If you do not have a N|Solid Console available, you can contact NodeSource Support and they can provide you with one. The token will be valid for the duration of your license period and must be refreshed when your expiration date is extended. Your valid license token can be specified as the NSOLIDLICENSETOKEN environment variable or included in the nsolid section of package.json as \"licenseToken\". Similar to the StatsD situation, your N|Solid installation must have a valid license to operate. If your system is not able to contact the NodeSource web services, you may use the license token as described previously by pasting it into the License Key entry field in your N|Solid Console. Make sure you are not passing your xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx license key as the token value. This is your license key and cannot be used in offline licensing. The data maintained in the N|Solid Console server is stored in a directory determined by the dir property of the [data] section of the configuration file. The default value is ~/.nsolid-console/data. If the contents of the directory are copied to a new machine, the N|Solid Console server can be run with its configuration pointing to the copied directory. The N|Solid Runtime and CLI all can have debug messaging enabled by using the NODE_DEBUG=nsolid environment variable. 
``` $ NODEDEBUG=nsolid NSOLIDAPP=testapp NSOLID_COMMAND=localhost ~/ns/nsolid-node/nsolid NSOLID 16597: starting agent (js init) name: testapp id: e24a79b2aae58b63e385a4c0ce9ed91f3d0202ee tags: undefined NSOLID 16597: nsolid agent e24a79b2aae58b63e385a4c0ce9ed91f3d0202ee starting with config NSOLID 16597: { command: '9001', data: undefined, bulk: undefined, pubkey: undefined, statsd: undefined, statsdBucket: 'nsolid.${env}.${app}.${hostname}.${shortId}', statsdTags: undefined, hostname: 'rpi3-1', env: 'prod', interval: null, tags: undefined, app: 'testapp', appVersion: undefined } NSOLID 16597: { zmqcommandremote: 'tcp://localhost:9001', zmqdataremote: null, zmqbulkremote: null, statsd_addr: null, storage_pubkey: '^kvy<i^qI<r{=ZDrfK4K<#NtqY+zaH:ksm/YGE6I' } NSOLID 16597: registering default commands NSOLID 16597: nsolid initializing NSOLID 16597: agent spawn() NSOLID 16597: agent spawned NSOLID 16597: e24a79b2aae58b63e385a4c0ce9ed91f3d0202ee nsolid initialized NSOLID 16597: { command_remote: 'tcp://localhost:9001', data_remote: 'tcp://localhost:9002', bulk_remote: 'tcp://localhost:9003', storage_pubkey: '^kvy<i^qI<r{=ZDrfK4K<#NtqY+zaH:ksm/YGE6I' } ``` N|Solid Console supports running in a relay mode in order to support proxying through firewalls, buffering agent data during downtime, and scaling with complex network topologies. The relay sits between your Runtime instances and N|Solid Console. In this mode, nsolid-console looks like an N|Solid Console instance to agents but does not have any N|Solid Console functionality enabled. To use relay mode, set the --relay flag and the three [sockets] variables described below. These can be provided via the configuration file or as environment variables. If omitted, the defaults will be used. Changing the NSOLIDCOMMAND, NSOLIDDATA, and NSOLIDBULK variables is recommended, as those will be the ones that the relay itself listens on while it uses the NSOLIDCONSOLE_SOCKETS-* variables in order to connect to N|Solid Console. In the simplest case, set the NSOLIDCONSOLESOCKETS* variables to the same values that the related NSOLID variables were set to, picking different addresses for the NSOLID_CONSOLE_SOCKETS_ variables. | Environment Variable | Description | |:-|:-| | NSOLIDCONSOLESOCKETSCOMMANDREMOTEADDR | This is the route to the Console command port that the relay will use. It should be formatted as \"host:port\" and follows the same conventions as NSOLIDCOMMAND | | NSOLIDCONSOLESOCKETSDATAREMOTEADDR | This is the route to the Console data port that the relay will use. It should be formatted as \"host:port\" and follows the same conventions as" }, { "data": "Unlike NSOLID_DATA, this variable has to be specified or included in the config | | NSOLIDCONSOLESOCKETSBULKREMOTEADDR | This is the route to the Console bulk port that the relay will use. It should be formatted as \"host:port\" and follows the same conventions as NSOLIDCOMMAND. Unlike NSOLID_BULK, this variable has to be specified or included in the config | The following example shows the [sockets] section of a configuration file written to support relay mode. Note that the related Bind addresses have been changed, and an explicit public key has been specified for the remote Console server. 
``` [sockets] commandBindAddr = \"tcp://<console-address>:8001\" dataBindAddr = \"tcp://<console-address>:8002\" bulkBindAddr = \"tcp://<console-address>:8003\" commandRemoteAddr = \"tcp://<console-address>:9001\" dataRemoteAddr = \"tcp://<console-address>:9002\" bulkRemoteAddr = \"tcp://<console-address>:9003\" remotePublicKey = \"^kvy<i^qI<r{=ZDrfK4K<#NtqY+zaH:ksm/YGE6I\" ``` Relay mode environment variables need to be added to the console section of the Docker configuration file. For example, in order to match the settings in the above configuration file, add the following: ``` console: environment: NSOLIDCONSOLESOCKETSCOMMANDBIND_ADDR=console:8001 NSOLIDCONSOLESOCKETSDATABIND_ADDR=console:8002 NSOLIDCONSOLESOCKETSBULKBIND_ADDR=console:8003 NSOLIDCONSOLESOCKETSCOMMANDREMOTE_ADDR=console:9001 NSOLIDCONSOLESOCKETSDATAREMOTE_ADDR=console:9002 NSOLIDCONSOLESOCKETSBULKREMOTE_ADDR=console:9003 ``` See the Docker page for more information on setting up Docker Compose. To setup SAML in NSolid see here. If you are using NSolid for the first time it is important that you observe step 1 - 8. If you are a repeat user you can jump to step 9. To use the NSolid Console for the first time you must sign into accounts.nodesource.com first to accept the NodeSource Terms and Conditions. As such this user flow suggests starting with accounts.nodesource.com. Once you accepted the terms you can log into the NSolid Console directly using your Okta SAML credentials directly. You can now install NSolid (see here), or open the NSolid Console (next step) If you happen to be a member of multiple NodeSource organizations, please note that you can still select your personal and/or other org accounts from the org-selector in the top left corner, but access to said accounts will be restricted. To regain full access to said accounts, simply log out and access said organizations via their corresponding email address, SAML or SSO login. If you accepted the NodeSource Terms previously you can directly navigate to the NSolid Console. The URL to do so depends on your organizations chosen method of deployment. Please see the Getting Started Guide for details (here). When logging into the Console for the first time, you must register your console with your SAML details. Enter your corporate email address that is associated with your SAML credentials in the EMAIL and SAML Accounts field: There are two ways to use PingID to authenticate into NodeSources Accounts and NSolid Console. This covers the use of the PingOne Desktop Application view. You can also use accounts.nodesource.com to sign in directly with your PingID credentials (see here). If you are using NSolid for the first time it is important that you observe step 1 - 9. If you are a repeat user you can jump to step 10. If you are using NSolid for the first time it is important that you observe step 1 - 8. If you are a repeat user you can jump to step 9. To use the NSolid Console for the first time you must sign into accounts.nodesource.com first to accept the NodeSource Terms and Conditions. As such this user flow suggests starting with accounts.nodesource.com. 
Once you accepted the terms you can log into the NSolid Console directly using your PingID SAML credentials" }, { "data": "You can now install NSolid (see here), or open the NSolid Console (next step) If you happen to be a member of multiple NodeSource organizations, please note that you can still select your personal and/or other org accounts from the org-selector in the top left corner, but access to said accounts will be restricted. To regain full access to said accounts, simply log out and access said organizations via their corresponding email address, SAML or SSO login. If you are using NSolid for the first time it is important that you observe step 1 - 8. If you are a repeat user you can jump to step 9. To use the NSolid Console for the first time you must sign into accounts.nodesource.com first to accept the NodeSource Terms and Conditions. As such this user flow suggests starting with accounts.nodesource.com. Once you accepted the terms you can log into the NSolid Console directly using your oneLogin SAML credentials directly. Visit https://accounts.nodesource.com/sign-in Enter your corporate email address that is associated with your SAML credentials in the EMAIL and SAML Accounts field: You can now install NSolid (see here), or open the NSolid Console. If you happen to be a member of multiple NodeSource organizations, please note that you can still select your personal and/or other org accounts from the org-selector in the top left corner, but access to said accounts will be restricted. To regain full access to said accounts, simply log out and access said organizations via their corresponding email address, SAML or SSO login. If you accepted the NodeSource Terms previously you can directly navigate to the NSolid Console. The URL to do so depends on your organizations chosen method of deployment. Please see the Getting Started Guide for details (here). When logging into the Console for the first time, you must register your console with your SAML details. Enter your corporate email address that is associated with your SAML credentials in the EMAIL and SAML Accounts field: Use your oneLogin credentials (provided by your employer) to sign into oneLogin. Once you authenticated successfully a browser window containing the NSolid Console will open. To connect a process please consult the Quick Start guide here. Once you successfully authenticated, the console is linked to your organization and your team members will be able to use it. Amazon Web Services (AWS) offers reliable, scalable, and inexpensive cloud computing services. We offer a dedicated repository to help you deploy N|Solid on AWS quickly and easily. We have several CloudFormation templates already written, as well as a list of AMI IDs for every region. If you don't already have a license key, please visit the NodeSource Accounts website to start your trial service. Easily run N|Solid on AWS using our CloudFormation templates in the dedicated GitHub repository. You can find a list of templates and their descriptions in the README.md. Once you find a template that you want to use, follow these steps to use the CloudFormation templates in nsolid-aws: Find the template you want to run in the /templates folder, then click the Deploy to AWS button. This will open up CloudFormation in your own account. Click the Next button. Fill in the required parameters and change the Stack Name if desired. Then click the Next button. Adjust any CloudFormation options if desired. Click Next. 
If the template requires IAM capabilities, you will need to check the \"I acknowledge that AWS CloudFormation might create IAM resources with custom names.\" box. Once you are ready, click the Create button. You can also use our N|Solid AMIs for your own projects. See AMI-LIST.md for a full list of AMI IDs in every" }, { "data": "ssh into the machine as the ubuntu user and set these four N|Solid environment variables: ``` $ ssh -i ~/.ssh/your-aws-key.pem ubuntu@<ip-address> $ NSOLID_COMMAND=localhost:9001 \\ $ NSOLID_DATA=localhost:9002 \\ $ NSOLID_BULK=localhost:9003 \\ $ NSOLIDAPPNAME=\"MyApplication\" node app.js ``` You can customize the name of your application in the N|Solid Console by changing the NSOLID_APPNAME environmental variable. If you do not specify an app name, N|Solid will look for a name property in your package.json, and if that is not found, your application name will default to \"untitled application\". You are now ready to use N|Solid for your own applications! The configuration for the external proxy is located at /etc/nginx/sites-enabled/nsolid-nginx.conf. The default configuration file is located at /etc/nsolid/console-config.toml with all artifacts stored at /var/lib/nsolid/console. Azure is a comprehensive set of cloud services that developers and IT professionals use to build, deploy, and manage applications through a global network of datacenters. We offer a dedicated repository to help you deploy N|Solid on Azure quickly and easily. We have several Resource Manager templates already written. If you don't already have a license key, please visit the NodeSource Accounts website to start your trial service. Easily run N|Solid on Azure using our Resource Manager templates. You can find a list of templates and their descriptions in the README. Once you find a template you want to use, follow these steps to use the Resource Manager templates in nsolid-azure: ``` $ az storage blob copy start --destination-blob nsolid-runtime-disk --destination-container <your-storage-container> --source-uri <runtime-vhd-uri> --account-name <your-storage-account-name> ``` ``` $ az storage blob copy start --destination-blob nsolid-console-disk --destination-container <your-storage-container> --source-uri <console-vhd-uri> --account-name <your-storage-account-name> ``` Find the template you want to run in the /templates folder, then click the Deploy to Azure button. This will open up a Custom Deployment in Azure Resource Manager in your own account. Fill in the required parameters with your VHD URI's created in step 1. Select either an existing Resource Group or create a new one. Set the location to \"West US\". Agree to the Terms and Conditions. Click the Purchase button. You can also use our N|Solid Images for your own projects. See IMAGE-LIST.md for a full list of Image IDs. To connect an N|Solid Runtime to your N|Solid Console, SSH into your N|Solid Runtime instance, and run the following commands: ``` $ ssh ns@<ip-address> $ NSOLIDCOMMAND={nsolidconsoleipaddress}:9001 \\ $ NSOLIDDATA={nsolidconsoleipaddress}:9002 \\ $ NSOLIDBULK={nsolidconsoleipaddress}:9003 \\ $ NSOLID_APPNAME=\"My Application Name\" node app.js ``` In the above code block, you will need to replace {nsolidconsoleip_address} with the IP address of your N|Solid Console. You will also want to replace \"My Application Name\" with the name of your application, as you want it to be displayed in the N|Solid Console. 
You can customize the name of your application in the N|Solid Console by changing the NSOLID_APPNAME environmental variable. If you do not specify an app name, N|Solid will look for a name property in your package.json, and if that is not found, your application name will default to \"untitled application\". You can connect multiple N|Solid Runtimes to your N|Solid Console, enabling you to monitor all your Node.js deployments from a single dashboard. You are now ready to use N|Solid for your own applications! The configuration for the external proxy is located at /etc/nginx/sites-enabled/nsolid-nginx.conf. The default configuration file is located at /etc/nsolid/console-config.toml and will configure all artifacts stored at /var/lib/nsolid/console. Google Cloud Platform (GCP) lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure. We offer a dedicated repository to help you deploy N|Solid on GCP quickly and easily, with several Deployment Manager templates already written. If you don't already have a license key, please visit the NodeSource Accounts website to start your trial" }, { "data": "Easily run N|Solid on GCP using our Deployment Manager templates. You can find a list of templates and their descriptions in the README.md. Once you find a template that you want to use, follow these steps to use the Deployment Manager templates in nsolid-gcp: ``` $ git clone https://github.com/nodesource/nsolid-gcp ``` Find the template you want to run in the /templates folder. Execute the gcloud Deployment Manager command to create the N|Solid Deployment: ``` $ gcloud deployment-manager deployments create nsolid --config templates/nsolid-quick-start/nsolid.yaml ``` You can also use our N|Solid Images for your own projects. See IMAGE-LIST.md for a full list of Image IDs. ssh into the machine as ubuntu user and set these four N|Solid env variables: ``` $ ssh -i ~/.ssh/your-gcp-key.pem ubuntu@<ip-address> $ NSOLIDCOMMAND=<NSOLIDCONSOLE_IP>:9001 \\ $ NSOLIDDATA=<NSOLIDCONSOLE_IP>:9002 \\ $ NSOLIDBULK=<NSOLIDCONSOLE_IP>:9003 \\ $ NSOLIDAPPNAME=\"MyApplication\" node app.js ``` You can customize the name of your application in the N|Solid Console by changing the NSOLID_APPNAME environmental variable. If you do not specify an app name, N|Solid will look for a name property in your package.json, and if that is not found, your application name will default to \"untitled application\". You are now ready to use N|Solid for your own applications! The configuration for the external proxy is located at /etc/nginx/sites-enabled/nsolid-nginx.conf. The default configuration file is located at /etc/nsolid/console-config.toml, with all artifacts stored at /var/lib/nsolid/console. Cloud Run on GKE is a GCP service that provides a simpler developer experience for deploying stateless services to your GKE cluster. To complement convenience with control, the N|Solid base-image for Cloud Run on GKE provides developers with a drop-and-replace Node.js runtime that delivers sophisticated performance insights out of the box and in production with zero code-modification. Cloud Run abstracts away Kubernetes concepts while providing automatic scaling based on HTTP requests; scaling to zero pods; automatic networking; and integration with Stackdriver. Running in your cluster enables access to custom machine types, PC/Compute Engine networks, and the ability to run side-by-side with other workloads deployed into your cluster. 
Cloud Run on GKE, based on Knative, provides a consistent experience which enables you to run your serverless workloads anywhere: fully managed on Google Cloud, on GKE, or on your own Kubernetes cluster. Deploying an application with N|Solid is as easy as: Please ensure you have the following prerequisites covered: You can follow these easy steps to get set up with a NodeSource account: here. The N|Solid Console can be set up via the Google Cloud Deployment Manager, which allows you to specify all the resources needed for your application in a declarative format using yaml. NodeSource provides an easy-to-use deployment template. Follow these steps to use the Deployment Manager templates:
```
gcloud auth login
gcloud config set project <PROJECT_ID>
gcloud config set compute/zone <ZONE>
```
Your PROJECT_ID can be found in your dashboard. A Cluster is deployed to a single zone. You can find more about zones here, or you can select one from the list generated by gcloud compute zones list.
```
gcloud deployment-manager deployments create nsolid --config https://raw.githubusercontent.com/nodesource/nsolid-gcp/master/templates/nsolid-console-only/nsolid-console.yaml
```
Navigating back to the Deployment Manager, you should now be able to see the setup in progress. Congratulations, you have successfully deployed the nsolid console to GCP. This will allow you to select whether you wish to log in with your personal or organization account. Can't see your organization on the screen? Visit accounts.nodesource.com to create an organization or contact your console administrator to receive an invitation. Finally, you will be shown the nsolid console landing view. The next step is to deploy your Node.js application to Cloud Run and connect the runtime to the console so that you can view your application's process performance and metrics. Deploying your application with N|Solid and hooking it up to your console can be accomplished in a few easy steps.
```
FROM nodesource/nsolid:dubnium-latest

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/package.json
RUN npm install --production
COPY . /usr/src/app/

ENTRYPOINT ["nsolid", "app.js"]
```
```
$ docker build -t gcr.io/[project-ID]/walkthrough:latest .
```
[project-ID] is the ID you captured in the 'Note down your project ID' step.
```
$ docker push gcr.io/[project-ID]/walkthrough:latest
```
[project-ID] is the ID you captured in the 'Note down your project ID' step. Note: Should you receive a warning that you don't have the needed permissions to perform this operation, you can review GCP's advanced authentication methods here. To check whether your image has been successfully pushed, go to https://console.cloud.google.com/gcr/images to see your image listed here: Visit console.cloud.google.com/kubernetes and create a new cluster: Complete the form and select the More node pool options button. Select the Allow full access to all Cloud APIs option: Select Save and Create to create your cluster. Visit console.cloud.google.com/run and select CREATE SERVICE. Under container image URL, select the image you pushed in the 'Push your Docker image to the Google Container Registry' step. Next, select the GKE Cluster you created in the 'Set up your Kubernetes Cluster on GKE' step. In the show optional settings section, add the following environmental variables: The NSOLID_COMMAND environmental variable comprises two components. Create the service. As a last step you will be required to patch your Kubernetes cluster.
In a terminal type:
```
$ gcloud container clusters describe [CLUSTER_NAME] | grep -e clusterIpv4Cidr -e servicesIpv4Cidr
```
Where [CLUSTER_NAME] is replaced with the Cloud Run cluster you created in the 'Set up your Kubernetes Cluster on GKE' step. This will produce the following output:
```
clusterIpv4Cidr: 10.XX.X.X/XX
servicesIpv4Cidr: 10.XX.XXX.X/XX
```
Finally run:
```
$ kubectl patch ksvc [SERVICE_NAME] --type merge -p '{"spec":{"runLatest":{"configuration":{"revisionTemplate":{"metadata":{"annotations":{"traffic.sidecar.istio.io/includeOutboundIPRanges": "[IP_ADDRESS_RANGES]"}}}}}}}'
```
Where [SERVICE_NAME] is replaced with the name of the Cloud Run service you chose in the 'Set up your cloud service with Cloud Run' step. Replace [IP_ADDRESS_RANGES] with the output of the previous command, comma separated. In this case 10.XX.X.X/XX, 10.XX.XXX.X/XX. This will only have to be done once per new service. Congratulations. You successfully deployed the nsolid console and an application to Cloud Run.

The N|Solid Console allows you to generate and analyze profiles in one action. This is particularly useful if you need to take a series of profiles, as it avoids the overhead of switching between environments. The console also saves a profile history, simplifying navigation between profiles when troubleshooting. For convenience, profiles are saved in your console session so you can easily flip between profiles. If your application ends before the profile is complete, N|Solid will try to save the profile data generated so far. If it is able to do so, your profile will be viewable under the Assets tab of the process detail page. All of the visualizations available for profile data show the unique set of stack traces captured during the profile, with the "area" of the stack indicating the proportional time spent in a function compared to that of its parent. The Flame Graph visualization shows the time along the x-axis. The y-axis is used to show the function calls that make up a particular stack trace. The Sunburst visualization is like the Flame Graph, where the x-axis is curved into a circle. Stack traces grow from the center to the outer parts of the graph. The Treemap visualization shows time by area. The larger the rectangle, the more time a function took. The stack traces grow from the outer boxes into the inner boxes. You can use the visualizations to find long-running functions. These can be identified by: The stack trace height - how many functions deep a particular stack trace went - does not necessarily indicate a time issue. Focus on interpreting the time values as discussed above. As you hover over the visualization itself, the function represented by the visualization will be shown. Clicking will show a stack trace to the right of the visualization. When analyzing CPU profiles with the N|Solid Console visualizations, pay careful attention to stack trace entries that take up the most area. This is displayed as a factor of width in the Flame Graph, circumference in the Sunburst, and area in the Treemap. Code highlighting shows the origin of each function. This makes it easier to determine the source of performance issues by selectively highlighting only one of the following categories of functions at a time: The search field allows you to search for nodes with matching function names and paths across the entire profile. It is case-sensitive, and only the first 500 matching results will be displayed.
You can also view the generated CPU profile in Chrome Dev Tools: In addition to function execution times, the CPU profile chart also shows what functions were on the stack at the time the process was sampled. N|Solid's Command Line Interface (CLI) is a great way to quickly pull profiles from remote processes for local examination. You can learn more about the profile command in the N|Solid Command Line Interface (CLI) reference. The N|Solid Node API provides an efficient way to trigger a CPU Profile programmatically from within your application and have the resulting profile saved to your N|Solid console. The profile duration can be up to 600000 milliseconds, which is the default if none is specified.
```
const nsolid = require('nsolid')

nsolid.profile(durationMilliseconds, err => {
  if (err) {
    // The profile could not be started!
  }
})
```
To complete your profile before the duration expires, call profileEnd:
```
nsolid.profileEnd(err => {
  if (err) {
    // Error stopping profile (was one running?)
  }
})
```
CPU Profiling allows you to understand where opportunities exist to improve the speed and load capacity of your Node processes. Generated CPU profiles can be used to indicate all of the functions on the function call stack. For instance, if a function foo() calls a function bar() during its execution, and a sample was taken while bar() was running, the function call stack will show that foo() called bar(). Because multiple samples may be taken while bar() is executing, there will be an approximate start and stop time recorded for bar(), which is an indication of how long it took to run. In addition, further samples before and after the ones that captured bar() will capture foo(), and likewise to the bottom of the function call stack. This data can then be analyzed to show, for the function foo(), how much time was actually spent in foo() and not in bar(). Every function has two time values, a self time and a total time. For the foo() and bar() case, if foo() only calls the bar() function, then the self time for foo() plus the total time for bar() will equal the total time for foo():
```
function foo() {
  // processing that takes a lot of time but calls no other functions
  bar()
  // processing that takes a lot of time but calls no other functions
}

function bar() { }
```
foo() total time = foo() self time + bar() total time

Note: Total time shows you which functions are fastest and slowest, from start to finish, but does not definitively tell you if the time was spent in that function or other functions. Named functions are easier to spot in CPU profiles. The stack frame entries available in a CPU profile include the name of a function, and source code location information for detail views. For anonymous functions, the name will often be displayed as "(anonymous)". In some cases, the V8 JavaScript engine can calculate an "inferred name" for a function. When possible, name functions so that they can be easily spotted in a CPU profile. For instance, a function busyFunction(), whose callers you would like to easily track, is being called from an anonymous function:
```
setInterval(function(){busyFunction()}, 1000)
```
In a CPU profile, you'll see that busyFunction() is being called by (anonymous). To make this easier to spot in a CPU profile, you can simply use:
```
setInterval(function busyFunctionCaller(){busyFunction()}, 1000)
```
In the CPU profile, you'll now see that busyFunction() is called by busyFunctionCaller().
For additional code cleanliness and clearer profiling, consider moving the entire function out into the same scope as the function usage:
```
setInterval(busyFunctionCaller, 1000)

//...

function busyFunctionCaller() {
  busyFunction()
}
```
Because JavaScript functions are "hoisted" to the top level of the scope they're defined in, you can reference busyFunctionCaller before it is actually defined.

Utilizing TypeScript and/or transpilers with N|Solid makes interpreting CPU profiles difficult unless the user is deeply familiar with the code. When compiling code, a Source Map is generated, and the integration of Source Maps provides a translation layer that maps the compiled source code back to the original source code. In an effort to address concrete customer pain-points, NodeSource has introduced SourceMap Support to the N|Solid CPU profiler. This feature continues to evolve with the specific requirements of our customers. NodeSource therefore envisages a phased rollout. Currently the feature can be accessed as follows: Users can now export NSolid Flamegraphs at the click of a button when viewing a CPU profile. In addition to downloading the CPU profile to visualize it in external tools such as Chrome Dev Tools, users can export the Flamegraph visualization to an SVG format in a few easy steps. Step 1: Once a CPU profile has been taken, view it in the NSolid Console's flame graph visualization. Step 2: In the view, direct your attention to your browser's search bar; above it, on the right-hand side, is a download button.

You can use the NSOLID_TAGS environment variable to add tags that will be applied to a process. This can aid in identification and filtering, particularly when creating Saved Views in N|Solid Console. Tags can also be retrieved using the info command in the N|Solid CLI.
```
$ NSOLID_APPNAME="leaderboard" NSOLID_TAGS="DC, us-east-1" nsolid app
```
```
$ nsolid-cli info --q app=leaderboard
```
```
{
  "id": "57234c38b3f08dd8a3e9a567c82887a552c27b01",
  "app": "leaderboard",
  "tags": [
    "DC",
    "us-east-1"
  ],
  "pid": 2440,
  ...
}
```
When faced with a memory leak or performance issue, taking heap snapshots is a great way to help identify the underlying problem. N|Solid provides several ways to capture snapshots: the N|Solid Console, the N|Solid CLI, and the N|Solid Node API. Using the N|Solid Console, take a series of snapshots and quickly switch between them to better identify problematic areas: Note: You can trigger heap snapshots automatically by creating a customized saved view on a memory-related metric (such as Heap Used > 128MB) that triggers a heap snapshot action when processes exceed the metric threshold. This is particularly useful for generating heap snapshots on applications that have rare or intermittent memory leaks. N|Solid's Command Line Interface (CLI) is a great way to quickly take heap snapshots from remote processes for local examination. You can learn more about the snapshot command in the N|Solid Command Line Interface (CLI) reference. Use the N|Solid Node API to programmatically create a heap snapshot from within your application. The resulting snapshot(s) can be saved to your N|Solid Console. Heap Snapshots can be taken asynchronously and synchronously.
The following code snippet takes a heap snapshot asynchronously:
```
const nsolid = require('nsolid')

nsolid.snapshot(err => {
  if (err) {
    // The snapshot could not be created!
  }
})
```
The following code snippet takes a heap snapshot synchronously:
```
const nsolid = require('nsolid')

try {
  nsolid.snapshot()
} catch (err) {
  // The snapshot could not be created!
}
```
Using constructors will show class names in heap snapshots. Since the heap snapshot groups objects by constructor name, if at all possible you should use named constructors for objects you would like to track, as opposed to using literal objects. Literal objects will be aggregated under the Object constructor name. For instance, the following code snippet creates objects which will be categorized under the Object constructor:
```
trackableObject = {x: "some value", y: "another value"}
```
To be able to track trackableObject in a snapshot, make it an instance of a specifically named class:
```
trackableObject = new TrackableObject()
trackableObject.x = "some value"
trackableObject.y = "another value"

// ...

function TrackableObject() {}
```
You'll then see these objects grouped by themselves in the heap snapshot. For more of an "inline" feel, you can enhance your constructor to take initialization values:
```
trackableObject = new TrackableObject({x: "some value", y: "another value"})

// ...

function TrackableObject(initialValue) {
  this.x = initialValue.x
  this.y = initialValue.y
}
```
With ECMAScript 6 parameter destructuring, you can make it a little shorter:
```
trackableObject = new TrackableObject({x: "some value", y: "another value"})

// ...

function TrackableObject({x, y}) {
  this.x = x
  this.y = y
}
```
You can also make the constructor open-ended regarding the properties:
```
trackableObject = new TrackableObject({x: "some value", y: "another value"})

// ...

function TrackableObject(initialValue) {
  for (var key in initialValue) {
    this[key] = initialValue[key]
  }
}
```
A Saved View is a graph defined by a search query. Saved views allow you to view processes that fulfill certain parameters and to instruct N|Solid to take certain actions (such as taking heap snapshots, CPU profiles, or sending webhooks) when one or more processes cross over performance thresholds or match query parameters that you set. Performance problems often occur in production code non-deterministically, and often the only diagnostics available are an error log or possibly a stack trace. With N|Solid's saved views, you can automatically capture critical information when problems arise in production. Several preset saved views are included to provide out-of-the-box insight into the running state of your Node.js application:

- Memory Clustering. This preset compares Heap Used and Resident Set Size, and helps capture processes' total memory space.
- Garbage Collection Clustering. In the GC Clustering preset view, GC Count and GC Duration 99th Percentile are directly compared, providing insight into how garbage collection duration affects processes' memory usage.
- Garbage Collection Anomalies. By comparing Garbage Collections Count and Process Uptime, this saved view provides insights into GC Count outliers.
- Active Resource Monitoring. With this preset, the number of Active Handles and the Resident Set Size are compared, providing insight into longer term resource usage.
- Garbage Collection Efficiency. This preset compares Major Garbage Collections Count and GC Median Duration to provide insights into garbage collection efficiency.
Click Processes in the Nav Bar to get to the Processes view. The default view shows all processes connected to N|Solid Console. To create a saved view, enter some query parameters and values into the filter search bar. Clicking on the bar will display a partial list of query terms. Type to begin searching for the parameter you wish to use. Click on one of the terms to select an operator and value. Click Set Filter to save this search term to your active filter. Repeat this process until your filter is set up as desired with all of your required search terms. When the filter has been named and saved, it will become available in the View dropdown selector.

Actions are a way to trigger certain actions (as the feature's name describes) when a saved view's filter criteria are met. Navigate to the Processes view and select a saved view from the dropdown menu. If you have never created a saved view before, create a saved view as described in the section below. You might see the default All Processes view's Actions and notifications, but Actions can't be added to the All Processes view, so create a new saved view. Click on the gear icon to the right of the saved view name. Select one or more actions or notification integrations, if desired. These will run on each process that matches the saved view filter. Saved views can be Worker Processes exclusive; if the Worker threads option is enabled, this view will be triggered only on apps with active worker threads. Note: Be careful about setting a very high value on your saved view's filter expression, such as 99% Heap, and then selecting an action that uses the same resource, such as Take Heap Snapshot. Taking intensive action at a very high ceiling is likely to negatively impact your application.

This one is the easiest Action to set up. Just select the Heap Snapshot option in the Actions drop-down menu. Learn more about Heap snapshots here. After selecting the Trace or CPU Profiling option in the Actions drop-down menu, a modal will open where you can specify how long the trace should run when the saved view is triggered. Learn more about tracing here. This action will activate and deactivate the trace option for a few seconds on the Process configuration. After selecting the CPU Profiling option in the Actions drop-down menu, a modal will open where you can specify how long the profile should run when the saved view is triggered. Learn more about CPU Profiling here. After that, just click on the save action. After the saved view has been triggered, you can check the results in the Application assets summary or the Assets tab. After the saved view has been triggered, you can check the results by filtering by date on the Distributed Tracing page.

You can enable Saved Views, Vulnerabilities and global event notifications as Microsoft Teams, Slack messages, invoked webhooks, or emails. To configure notifications to use Slack incoming messages, you will need to create a Slack Incoming Webhook. From the Slack Incoming Webhooks New Configuration page, select a channel the messages should be posted to, and then click the Add Incoming Webhooks Integration button. This will take you to a page where the Webhook URL is displayed, and you can further configure the message to be sent. Copy the webhook URL displayed on that page. In the N|Solid Console, click Settings and choose Integrations from the left side menu. Paste the Slack webhook URL into the form and name the new integration.
Once you have saved this integration, you will be able to attach it to global events, vulnerabilities, and saved views as a notification. Similar to how teams configure NSolids Slack integration, organizations using Microsoft Teams can now configure notifications to use Microsoft Teams incoming messages feature. To set up Microsoft Teams in NSolid, users need to create a Microsoft Teams Incoming Webhook (see MS Teams docs here) and use it to configure the Microsoft Teams integration in Settings>Integrations. On the New Teams Webhook page, users can enter the channel name they wish to post messages to in the Webhook Name field, and paste the webhook URL into the Webhook Url and save the webhook. Once saved the integration is active and users can select Microsoft Teams as the notification destination of their set alerts. Now the Microsoft Teams integration is available. Once configured users can use the Microsoft Teams integration in both Threshold Event Notifications based on Saved Views and Global Event Notifications . To have NSolid send a Global Notification to your configured Microsoft Teams channel visit Settings>GlobalNotifications and select the configured Microsoft Teams Notification from the drop down. Once selected users can observe the message format and content: Users can also use the Microsoft Teams integration when their processes enter a saved view: Organizations using PagerDuty can now configure notifications to use PagerDuty incidents feature. To have NSolid send a Global Notification to your configured PagerDuty incidents visit Settings>GlobalNotifications and select the configured PagerDuty Notification from the drop down. On the New PagerDuty integration page, users can add a nickname for the identity of the integration and the provided Now the PagerDuty integration is available. Once configured users can use the PagerDuty integration in both Threshold Event Notifications based on Saved Views and Global Event Notifications. To have NSolid send a Global Notification to your configured PagerDuty integration Settings>GlobalNotifications and select the configured PagerDuty Notification from the drop down. Below are examples of typical saved view and vulnerability notifications sent as incoming Slack messages. In case you need support to set up the PagerDuty integration, please visit our support portal: https://support.nodesource.com/ You can also have notifications sent as HTTP POST messages to a webserver you manage. As the messages are sent to your server, you can run your own customized code to handle the notification. Notification data is sent in the HTTP request as a JSON object in the body of the request. The HTTP request will be sent with the Content-Type header set to application/json. In the N|Solid Console, click Settings and choose Integrations from the left side menu. Paste your webhook URL into the form and name the new integration. Once you have saved this integration, you will be able to attach it to global events, vulnerabilities, and saved views as a notification. The data the webhook receives has an event property. The value of this property will change depending on the event that prompts the notification. The following events are supported. | Event | Description | |:|:--| | nsolid-saved-view-match | An agent has entered a saved view. 
Additionally, the configured delay time period has passed, and this event has not fired in the time period specified by the snooze parameter | | nsolid-agent-exit | An agent that was connected is no longer connected to N|Solid Console | | nsolid-snapshot-generated | A heap snapshot has been generated. This could have been done manually or as a result of another event | | nsolid-profile-generated | A CPU profile has been generated. This could have been done manually or as a result of another event | | nsolid-process-blocked | A process has had its event loop blocked for at least the time period specified by the duration parameter | | nsolid-vulnerability-detected | A vulnerability has been detected for this process | All notifications will include the following properties in the notification object, plus additional properties that may be specific to individual events. | Property | Description | |:--|:--| | time | ISO datetime string indicating when this event was triggered | | event | One of the event types, described above | | agents | A list of objects with properties described below | | assets | A list of assets like heap snapshots or CPU profiles that this event caused to be created | | config | The notifications config that caused this notification to be sent | The property named agent will be an object with the following properties: | Property | Description | |:--|:--| | id | The agent id | | info | Information about this agent including tags, app name, and hostname | | metrics | The latest metrics information we have from this agent, if any | In addition to the data specified above, some events may contain additional information. Agent exit events will include the exit code of the process. If there was a stack trace associated with an abnormal exit, that stack trace will also be included. Additionally, if you catch an exception with an uncaughtException handler and exit synchronously, as the Node.js documentation recommends, the exception will also be included.

```
const nsolid = require('nsolid');

process.on('uncaughtException', err => {
  console.error(err);
  // Exit synchronously so the exception can be reported with the exit event.
  process.exit(1);
});
```

If you must exit asynchronously from an uncaughtException handler, it is still possible to report the exception by passing it to nsolid.saveFatalError() prior to shutting down.

```
const nsolid = require('nsolid');

process.on('uncaughtException', err => {
  nsolid.saveFatalError(err);
  shutdownApp(() => {
    process.exit(1);
  });
});
```

The value of the event property in the notification object will be nsolid-process-blocked. The notification object will include an additional property named stacktrace, whose value is a string version of the stack trace being executed when the event loop was blocked. The value of the event property in the notification object will be nsolid-vulnerability-detected. The notification object will include an additional property named vulnerability, whose value is a list of vulnerabilities for this process. The new vulnerability whose discovery triggered this event will have a new property with the value true. In N|Solid Console, click Settings and choose Global Notifications from the left side menu. Select a global event, such as vulnerabilities found, a new CPU profile, or a new heap snapshot, that should trigger an email notification. Click on the New Notification dropdown and select Email. Enter one or more email addresses; once you have saved this notification, the provided addresses will receive the respective notifications.
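To make the webhook payloads described above concrete, here is a minimal receiver sketch. It is not an official example: the port and the handling logic are arbitrary, and it only relies on the documented top-level properties (event, time, agents) and the documented event-specific extras (stacktrace, vulnerability).

```
// webhook-receiver.js: minimal sketch of an endpoint for the webhook integration.
const http = require('http');

http.createServer((req, res) => {
  let body = '';
  req.on('data', chunk => { body += chunk; });
  req.on('end', () => {
    // The notification is sent as JSON with Content-Type: application/json.
    const notification = JSON.parse(body);
    const { event, time, agents = [] } = notification;

    switch (event) {
      case 'nsolid-saved-view-match':
        console.log(`${time}: ${agents.length} process(es) matched a saved view`);
        break;
      case 'nsolid-process-blocked':
        console.log(`${time}: event loop blocked`, notification.stacktrace);
        break;
      case 'nsolid-vulnerability-detected':
        console.log(`${time}: vulnerabilities reported`, notification.vulnerability);
        break;
      default:
        console.log(`${time}: received ${event}`);
    }

    res.writeHead(204).end();
  });
}).listen(8080);
```

Pointing the webhook integration at a server like this is a simple way to inspect the exact payloads your configuration produces before writing real handling code.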
The N|Solid Console can be configured to perform periodic verification of all packages loaded by all N|Solid processes. All loaded packages are verified against a list of known vulnerabilities. When new vulnerabilities are found, information about each vulnerability will be reported in the Console. Notification options can also be configured to streamline reporting. If there are any vulnerabilities found in your applications, the Security link in the Nav Bar will be annotated with a numbered badge indicating the number of vulnerabilities found across all of your applications. Clicking on Security in the Nav Bar will display the Security" }, { "data": "This view displays a list of all the vulnerabilities found across all applications. All of the vulnerabilities found in all applications will be listed on the left. The numbered badge in that list indicates the number of applications which are affected by the vulnerability. Clicking on a vulnerability in the list will display more details about that vulnerability. The Affected Processes tab will display information about the processes which are affected. The Affected Processes subview contains an entry for every application affected by the vulnerability. By clicking on the disclosure triangle next to the application name, a list of the module dependencies for the vulnerable package is displayed. You can use the Hide/Show toggle on the right to have the vulnerability ignored when determining the number of vulnerabilities across all the applications. From the Processes view, you can sort the Processes list on the right by Vulnerabilities, which will sort the currently filtered processes into two sets - Vulnerable and Secure. The vulnerable processes will be shown with a bright red dot in the scatterplot, and secure processes will be shown with a light colored dot in the scatterplot. Clicking on the process in the Processes list will display the Process Details view, which contains additional vulnerability information. The Process Details view contains a Modules subview, which contains information about the vulnerabilities found in the process. Clicking on the vulnerability title will display the Security view for the vulnerable module. You can configure the N|Solid Console to notify you when new vulnerabilities are found in your applications. To configure these notifications, click the Global Notifications link on the left side of the Settings view. Scroll down to the section Vulnerability Notifications. Here you can add Integrations to be invoked when a new vulnerability is found. The Console has a dedicated section for NodeSource Certified Modules v2. Learn more about NCM 2 reports. The N|Solid strict mode can be used with the prompt nsolid-strict instead of the well known nsolid, the key difference between strict and regular mode is that the strict mode will stop any application with encountered vulnerabilities identified by the NodeSource Certified Modules v2. Example running vulnerable apps and secure apps respectively: ``` $ nsolid-strict vulnerable-node-app.js nsolid STRICT MODE verifying... 
Unsecure server running normally :) nsolid STRICT MODE access denied due to policy violation: { \"package\": \"unsecure-pkg\", \"version\": \"1.19.4\", \"group\": \"risk\", \"name\": \"has-install-scripts\", \"pass\": false, \"severity\": \"CRITICAL\", \"title\": \"This package version has install scripts: postinstall.\" } $ nsolid vulnerable-node-app.js Unsecure server running normally :) ``` ``` $ nsolid-strict secure-node-app.js nsolid STRICT MODE verifying... Secure server running normally :) ``` Users can now export configuration settings using either NSolid Console, the result is a JSON format file which can be used to import settings across different setups. In the N|Solid Console, go to Settings, in the left tab list youll see the new Import/ Export Settings section. PLEASE NOTE: If you cannot see this new section, please go to the application permissions on accounts.nodesource.com and verify that the settings -> import / export settings is checked for your role. To Export settings, choose which ones you want to export, you are able to export Saved Views, Integrations, and Global Notifications individually or all of them at the same time. The console provides you with a file to download with the extension \".nsconfig\", which contains the configuration you selected to export in json format. ``` { \"_metadata\": { \"_timestamp\": \"\", }, \"integrations\": {}, \"notifications\": {}, \"savedViews\": {} } ``` Also, inside the" }, { "data": "folder theres a new folder structure where you are going to have a backup of every export file generated in case you need it in the future. Clicking import will open a modal, you can select what settings you want to import and whether you want to add to the current settings or if you want to clean up and only use imported ones. In the case of integrations, it can not be cleaned, because these can be used by notifications or saved views. You should choose the backup file previously downloaded, remember that it must have the extension .nsconfig. If you lost the export file, remember you can always check inside the .nsolid-console folder for a backup. Once the file is successfully imported, you will see the new configuration applied and ready to run. N|Solid event profiler provides a rich set of events, covering security events, lifecycle events, system events, performance events, user defined machine learning events and assets events. PLEASE NOTE: If you cannot see this new section, please go to the application permissions on accounts.nodesource.com and verify that the view events historic is checked for your role. The console has a dedicated section for event profiling. Click on Events tab on the navigation bar The event profiler section has a summary section: The events report UI scales no matter how many events you have records of. Filtering events with event profiler. Events can be filtered by severity, type, application name, hostname, and agent id; to do this, simply click on the filter box: Sorting events with event profiler. Events can be sorted by time, application name, hostname, agent id, and event name itself, click on the property you want to sort: Limit events with event profiler by dates. 
Events report can be limited by dates, click on the dates picker and select your desired date range: | Name | Severity | Category | |:|:--|:| | agent-missing | High | Lifecycle | | new-vulnerability-found | High | Security | | console-server-stopped | High | System | | influxdb-error | High | System | | server-disconnected | High | System | | process-blocked | Medium | Performance | | agent-found | Medium | Lifecycle | | influx-truncated | Medium | Security | | package-vulnerabilities-updated | Low | Security | | vulnerabilities-database-updated | Low | Security | | active-vulns-updated | Low | System | | process-unblocked | Low | Performance | | agent-packages-added | Low | Lifecycle | | agent-exit | Low | Lifecycle | | asset-canceled | Low | Assets | | asset-initiated | Low | Assets | | asset-created | Low | Assets | | asset-metadata-updated | Low | Assets | | console-server-started | Low | System | | influxdb-recovered | Low | System | | server-connected | Low | System | | notifications-settings-changed | Low | System | | integrations-settings-changed | Low | System | | savedViews-settings-changed | Low | System | | generalSettings-settings-changed | Low | System | NodeSource has introduced Tracing support, a facility to gather information throughout the lifecycle of an HTTP/DNS/Other request. The collected information can be used for debugging latency issues, service monitoring and more. This is a valuable addition to users for those who are interested in debugging a request latency. Tracing traces user requests through a Node application, collecting data that can help find the cause of latency issues, errors, and other problems. In the N|Solid Console, go to the the applications dashboard and click on the TRACING button of the application to see the" }, { "data": "The view will be shown as below: Tracing is consists of three key components below: A timeline graph displays the density of the number of tracing spans. Below is the description of the color of a slot on the timeline graph: | Color | Description | |:--|:--| | green | everything is ok | | yellow | maybe you should look at this | | red | definitely you should look at this | Assume that a simple request was made to the console service to monitor traces: As a result, the Console displays the whole span information. A span is the building block of a trace and is a named, timed operation that represents a piece of the workflow in the distributed system. Multiple spans are pieced together to create a trace. Traces are often viewed as a tree of spans that reflects the time that each span started and completed. It also shows you the relationship between spans. A trace starts with a root span where the request starts. This root span can have one or more child spans, and each one of those child spans can have child spans. 
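As a rough illustration of that parent/child structure (this is not N|Solid code, and the span data is invented), a flat list of spans can be folded into a tree by following each span's parent ID; the span_id and span_parentId field names mirror the span attributes listed below.

```
// Assemble a flat list of spans into the trace tree described above.
const spans = [
  { span_id: 'a1', span_parentId: null, span_name: 'HTTP GET /auth' },
  { span_id: 'b2', span_parentId: 'a1', span_name: 'HTTP GET localhost:4000' },
  { span_id: 'c3', span_parentId: 'b2', span_name: 'DNS lookup localhost' },
];

function buildTree(list) {
  const byId = new Map(list.map(s => [s.span_id, { ...s, children: [] }]));
  let root = null;
  for (const span of byId.values()) {
    const parent = byId.get(span.span_parentId);
    if (parent) parent.children.push(span);
    else root = span; // the root span has no parent
  }
  return root;
}

function print(span, depth = 0) {
  console.log(`${'  '.repeat(depth)}${span.span_name}`);
  span.children.forEach(child => print(child, depth + 1));
}

print(buildTree(spans));
```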
To inspect the span details of a span, click on the title Service & Operation: Below are the attributes of the span: | Attribute | Description | |:|:--| | id | the id of the application | | app | the name of application | | hostname | the name of the host machine | | tags | the tags of the application | | spanattributeshttp_method | the http method of span attributes | | duration | the duration of the span | | spanattributeshttpstatuscode | the http status code of the span attributes | | spanattributeshttpstatustext | the http status text of the span attributes | | spanattributeshttp_url | the http url of the span attributes | | span_end | the end time of the span | | span_id | the id of the span | | span_name | the name of the span | | span_parentId | the parent ID of the span | | span_start | the start time of the span | | spanstatuscode | the status code of the span | | span_threadId | the thread ID of the span | | span_traceId | the trace ID of the span | | span_type | the type of the span | Let's authenticate using the console service, which is going to perform a request to the google-auth-service to authenticate the user. The graph below represents a path, from the console service to the authentication service, N|Solid will monitor the HTTP traces in this distributed system. As such, graph is showing the whole path, starting from the console and finishing with the sms service. This is how distributed tracing works using N|Solid managed systems. The collected information can be used for debugging latency issues, service monitoring, and more. This is a valuable addition to users for those who are interested in debugging a request latency. Tracing traces of user requests through multiple Node applications, and collecting data that can help find the cause of latency issues, errors, and other problems in a distributed system. You can configure the update settings for distributed tracing on the top of the Distributed Tracing view: Basically, you can turn on or off Real Time Update and change the update interval. The default update interval is 10 seconds. Available options are 10, 15, 30 or 60 seconds. To change the time range of tracing, click the calendar icon below the timeline graph: It will show a calendar for selecting a time" }, { "data": "The timeline graph range is updated every 1 minute, with an interval of 1 minute for the date range to move. Filtering the results by attributes of the span is made easy for users. Just tab the Filter input area and select attribute(s) you want to filter as below: To enable the Tracing using N|Solid, set the env var NSOLIDTRACINGENABLED=1. By enabling this feature, you can troubleshoot HTTP, DNS and other network request problems that you might encounter using the N|Solid Console. Distributed tracing in the N|Solid Console is basically an extension of the HTTP tracing in N|Solid. 
This documentation will use the following examples to cover how it works on the Console side under the following conditions: ``` // console.js const http = require('http') const PORT = process.env.PORT || 3000 http.createServer((req, res) => { if (req.url === '/auth') { console.log('AUTH handler') const authURL = 'http://localhost:4000' const gReq = http.request(authURL) gReq.end() gReq.on('close', () => res.writeHead(200).end('Google made it')) } else if (req.url === '/auth-2fa') { const authURL = 'http://localhost:4000/2fa' const twoFactor = http.request(authURL) twoFactor.end() twoFactor.on('close', () => res.writeHead(200).end('Auth made it with sms')) } else { res.end('ok') } }).listen(PORT) ``` ``` // google-auth.js const http = require('http') const PORT = process.env.PORT || 4000 http.createServer((req, res) => { if (req.url === '/2fa') { const smsServiceURL = 'http://localhost:5000' const smsService = http.request(smsServiceURL) res.writeHead(200).end('Auth made it') smsService.end() } else { console.log('Auth root handler') res.writeHead(200).end('ok') } }).listen(PORT) ``` ``` // twilio.js const http = require('http') const PORT = process.env.PORT || 5000 http.createServer((req, res) => { res.writeHead(200).end('ok') }).listen(PORT) ``` In order to correctly identify an anomaly it is important that the detection method be accurate. CPU is no longer enough of a measurement to scale applications. Other factors such as garbage collection, crypto, and other tasks placed in libuv's thread pool can increase the CPU usage in a way that is not indicative of the application's overall health. Even applications that don't use Worker threads are susceptible to this issue. In addition, there is no cross-platform way of measuring the CPU usage per thread, which doesn't mean that CPU is useless. CPU and event loop utilization (or ELU) is crucial to see if an application is reaching hardware limitations. But not being able to gather metrics on a per-thread basis drastically limits our ability to determine when the application is reaching its threshold. Note: ELU(Event loop utilization) is the ratio of time the event loop is not idling in the event provider to the total time the event loop is running, and is equal to the loop processing time divided by the loop duration. With that being said, N|Solid Console provides ELU-based Scatterplot, which utilizes the most reliable metric to use as a baseline for comparison. The Scatterplot is an animated graph that provides an overview of your applications' performance across all or a subset of connected processes, when an specific process has at least one active worker thread, the process will be highlighted. Using ELU as the axis to compare metrics across multiple processes is a reliable way to identify anomalies without false positives. With this information anomalous processes can be automated to take CPU profiles, heap snapshots and etc. In the N|Solid Console, go to the the applications dashboard and click CPU ANOMALY DETECTION. The blue dots are the raw data. Red line is the regression line (estimated average), yellow and green are the error from the regression. The default y-axis value is delay, which equals to (providerDelay + processingDelay) / 1e6 in microseconds. The blue dots: The blue dots are the raw data from all the" }, { "data": "All the application raw data are the same color. It only highlights the points from the same application when a single point is hovered with the mouse. 
The red line: The red line is the moving average of all the raw data (blue dots). There is no application specific information to show when those points are hovered. The yellow and green line: The yellow and green lines are the error margin for the moving average (red dots). At the left side, there's a list of anomalies which can be filtered by agent ID. To see the details of an anomaly, click the title of an item to expand it and read the description. Note: If you are redirected from Events tab, the corresponding anomalies will be shown. Understanding memory management reduces the possibility of wasting your applications resources, and the unexpected effects on performance. In many cases, there is no clear understanding as to why the memory grows endlessly, however, a check for correlation between set of metrics who intereact with each other can give insights about memory usage and response time degradation. Memory Anomalies in the N|Solid Console provides a way to detect early cases of memory miss behavior or upcoming Out of Memory situations before it happens. In the N|Solid Console, go to the the Applications Dashboard and click MEMORY ANOMALY DETECTION. If there are no anomalies registered, you'll get an empty list and 0 values in the charts and placeholders. If any of the processes or worker threads under an application name have reported anomaly events, you'll get a list at the left side with a short title, the process id and if it came from them main or a worker thread. You can filter this list by processID with the search box at the top. To see the details and load the metrics of an anomaly, click the title of an item to expand it, and extended description will appear with the timestamp and the charts at the right side will load a snapshot of the metrics one minute before and after the timestamp registered in the event. Note: If you are redirected from Events tab, the corresponding anomalies will be shown. NodeSource has introduced Machine Learning support, this feature allows for the training of models that will later detect similar patterns in your application data and fire custom events. In the N|Solid Console, the Machine Learning feature can be accessed from the app summary or process detail views, each of these handle different data sets and will have a different effect in the model you train. The Machine Learning models can be trained using two kinds of data sets. The models trained in the app summary view will use the aggregated data of all the processes running inside the app. On the other hand, the models trained in the process detail view will use the process specific data. When a process/app is first connected it will take a certain amount of data in order to be successfully trained, you will find a progress loader under process configuration: To train a model in a app summary page click on Train ML Model button. To train a model in a process detail page click on Train ML Model button. After clicking on the Train ML Model button a modal will open, here you can create, filter and train models, this modal is the same for both pages. To create a model click on CREATE NEW MODEL Name and give a short description for the model, then" }, { "data": "Select the created modal an click on TRAIN When the trained model finds a data pattern similar to the one it was trained with, it will fire an event and show a banner on top of the navbar. Click on View Event to be redirected to the events tab, here you will find the most recent machine learning event. 
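For reference, the event loop utilization figure that the scatterplot and these detections build on can also be sampled inside any Node.js process with the standard perf_hooks API. This is plain Node.js rather than an N|Solid-specific call, and the five-second interval below is arbitrary; it is only a sketch of how the ratio is obtained.

```
// elu-sample.js: print event loop utilization (ELU) over fixed intervals.
const { performance } = require('perf_hooks');

let previous = performance.eventLoopUtilization();

setInterval(() => {
  // Passing the previous reading returns the delta for just this interval.
  const delta = performance.eventLoopUtilization(previous);
  console.log(`ELU over the last 5s: ${(delta.utilization * 100).toFixed(1)}%`);
  previous = performance.eventLoopUtilization();
}, 5000);
```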
The events will also appear in the application status section, clicking on VIEW ANOMALIES will redirect to the events tab. Machine Learning models can be administered in the settings tab where you will find a set of default models and the user trained models, here the frequency of events being fired can be modified and the custom user models can be deactivated, deleted or edited. For a full reset of the created models click on RESET MODELS. Custom user models have an edit and delete icon, these models are found beneath the default models. PLEASE NOTE: Only the name and description of the user created model can be edited, if you want to change the model data please retrain the model in app summary or in the process detail pages. Default models are activated by default, these can only be activated or deactivated. NodeSource offers Serverless support to collect data during a request in serverless settings. This data aids in debugging latency and other issues, making it ideal for those keen on addressing such problems in a serverless context. For using AWS resources, you need to configure the AWS SDK credentials. Check the AWS documentation to do this. After following the guide, your AWS CLI should be set up correctly for smooth AWS interactions. Make sure you have all AWS credentials envs configured in your machine. | Env | |:-| | AWSSECRETACCESS_KEY | | AWSDEFAULTREGION | | AWSACCESSKEY_ID | Using nsolid-serverless, you can create the necessary infra resources and configure your serverless application to send telemetry data to the N|Solid Console. ``` npm i -g @nodesource/nsolid-serverless ``` The first step is to create a SQS Queue where we will send the telemetry data. ``` nsolid-serverless infra --install ``` We need to setup some environment variables and layers in your lambdas to send data to our SQS queue created in the previous step. ``` nsolid-serverless functions --install ``` ``` nsolid-serverless --help ``` After all the functions are set up, you need to configure the N|Solid Console. Navigate to Settings > Integration. On the right side, there is a section called AWS SQS INTEGRATION as below: Click on the New AWS SQS Integration button to add the integration. The following form will be displayed: Fill in the form with the following information: Click on the Save Integration button to save the integration. In the N|Solid Console, go to the the applications dashboard and tab the Functions on the left side. The dashboard for the functions connected will be displayed as below: Select a function from the list to see the details of the function. The detail view will be displayed as below: The detail view will display the following information: The metrics tab will display the metrics of the function. The metrics will display the following information: This is the list of metrics we will be generating that come from the Telemetry API. These metrics are real-time and associated to specific functions. Their corresponding aggregations are to be performed in the Telemetry Aggregator. These metrics are extracted directly from the Cloudwatch API by the Lambda Metrics Forwarder and are not real-time as the ones from the Telemetry" }, { "data": "The list is: The NSOLID_INSTRUMENTATION environment variable is used to specify and load the opentelemetry instrumentation modules that you want to utilize within your application. To enable instrumentation for specific modules, follow these steps: For HTTP requests using the http module, set the NSOLID_INSTRUMENTATION environment variable to http. 
If you're also performing PostgreSQL queries using the pg module, include it in the NSOLID_INSTRUMENTATION environment variable like this: http,pg. Make sure to list all the relevant instrumentation modules required for your application. This will enable tracing and monitoring for the specified modules, providing valuable insights into their performance and behavior. To enable the Tracing using N|Solid, set the env var NSOLIDTRACINGENABLED=1. By enabling this feature, you can troubleshoot HTTP, DNS and other network request problems that you might encounter using the N|Solid Console. The view will be shown as below: Tracing is consists of three key components below: Note: The default behavior only generates traces related to the lambda invocation. If you require tracing for additional operations, set the NSOLID_INSTRUMENTATION environment variable. Available modules for tracing are as follows: Please ensure you have the necessary modules enabled to trace all the operations you require. A timeline graph displays the density of the number of tracing spans. Below is the description of the color of a slot on the timeline graph: | Color | Description | |:--|:--| | green | everything is ok | | yellow | maybe you should look at this | | red | definitely you should look at this | Assume that a simple request was made to the console service to monitor traces: As a result, the Console displays the whole span information. A span is the building block of a trace and is a named, timed operation that represents a piece of the workflow in the distributed system. Multiple spans are pieced together to create a trace. Traces are often viewed as a tree of spans that reflects the time that each span started and completed. It also shows you the relationship between spans. A trace starts with a root span where the request starts. 
This root span can have one or more child spans, and each one of those child spans can have child" }, { "data": "To inspect the span details of a span, click on the title Service & Operation: Below are the attributes of the span: | Attribute | Description | |:|:--| | id | the id of the application | | app | the name of application | | hostname | the name of the host machine | | tags | the tags of the application | | spanattributeshttp_method | the http method of span attributes | | duration | the duration of the span | | spanattributeshttpstatuscode | the http status code of the span attributes | | spanattributeshttpstatustext | the http status text of the span attributes | | spanattributeshttp_url | the http url of the span attributes | | span_end | the end time of the span | | span_id | the id of the span | | span_name | the name of the span | | span_parentId | the parent ID of the span | | span_start | the start time of the span | | spanstatuscode | the status code of the span | | span_threadId | the thread ID of the span | | span_traceId | the trace ID of the span | | span_type | the type of the span | | resourceSpans | an array of resource spans | | attributes | an array of attributes | | key | the key of the attribute | | value | the value of the attribute | | stringValue | the string value of the attribute | | telemetry.sdk.language | the language of the telemetry SDK | | telemetry.sdk.name | the name of the telemetry SDK | | telemetry.sdk.version | the version of the telemetry SDK | | cloud.provider | the cloud provider | | cloud.platform | the cloud platform | | cloud.region | the cloud region | | faas.name | the name of the function as a service | | faas.version | the version of the function as a service | | process.pid | the process ID | | process.executable.name | the name of the executable | | process.command | the command | | process.command_line | the command line | | process.runtime.version | the version of the runtime | | process.runtime.name | the name of the runtime | | process.runtime.description | the description of the runtime | | droppedAttributesCount | the number of dropped attributes | | MessageAttributes | the message attributes | | eventType | the event type | | StringValue | the string value | | DataType | the data type | | instanceId | the instance ID | | functionArn | the Amazon Resource Name (ARN) of the function | Let's authenticate using the console service, which is going to perform a request to the google-auth-service to authenticate the user. The graph below represents a path, from the console service to the authentication service, N|Solid will monitor the HTTP traces in this distributed system. As such, graph is showing the whole path, starting from the console and finishing with the sms service. This is how distributed tracing works using N|Solid managed systems. The collected information can be used for debugging latency issues, service monitoring, and more. This is a valuable addition to users for those who are interested in debugging a request latency. Tracing traces of user requests through multiple Node applications, and collecting data that can help find the cause of latency issues, errors, and other problems in a distributed system. The SBOM tab will display the SBOM(Software Bill of Materials) of the function. The SBOM will display the following information: To change the time range, click the calendar icon above the graphs: This will show a calendar from which users can select the time range. 
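To tie the serverless tracing pieces together before moving on, here is a minimal sketch of the kind of handler whose activity would be reflected in these traces. It is illustrative only: it assumes the N|Solid layer is attached and that NSOLID_INSTRUMENTATION includes http on the function configuration, and the downstream URL is a placeholder.

```
// handler.js: illustrative Lambda handler; the downstream URL is a placeholder.
const http = require('http');

function get(url) {
  return new Promise((resolve, reject) => {
    http.get(url, res => {
      res.resume(); // drain the body; only the status code matters here
      res.on('end', () => resolve(res.statusCode));
    }).on('error', reject);
  });
}

exports.handler = async () => {
  // With http instrumentation enabled, this outgoing request is expected to
  // appear as a child span of the invocation span.
  const status = await get('http://example.com/downstream');
  return { statusCode: 200, body: JSON.stringify({ upstreamStatus: status }) };
};
```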
The timeline graph range is updated every 1 minute, with an option to change the date range every 1 minute. In summary, with NodeSource's Serverless monitoring, users can gain more insight into the performance of their serverless functions and quickly identify and debug any issues. ``` nsolid-serverless infra --uninstall ``` N|Solid makes available a rich set of metrics covering the Node.js process's host system, the process itself, the internal behavior of Node.js, and the internal behavior of the V8 JavaScript engine. Combining these data points can give you sophisticated insight into your Node.js deployment. Many of the key metrics are displayed directly on the N|Solid Console. To make numeric metrics available to your StatsD collector directly from the N|Solid Process, configure using the NSOLID_STATSD environment variable (see the StatsD section for more detail). For the full set of metrics, use the nsolid-cli utility. | Property | Description | |:-|:| | app | The user-specified application name as set in package.json or the NSOLID_APP environment variable. | | appVersion | The user-specified application version as set in package.json. | | execPath | The absolute path of the N|Solid executable running the application. | | id | The unique N|Solid agent ID. Unique per N|Solid process. | | main | The absolute path to the root module in the process's module" }, { "data": "| | nodeEnv | The user-specified NODE_ENV environment variable. | | pid | The system process ID for the process. | | processStart | The time at which the process started, in seconds. | | tags | The user-specified tags as set by the NSOLID_TAGS environment variable. | | vulns | The number of known vulnerabilities found in the modules of the process. | Note: The host system may be actual hardware, a virtual machine, or a container. | Property | Description | |:--|:| | arch | The host system's CPU architecture. | | cpuCores | The host system's number of CPU cores. | | cpuModel | The host system's CPU model. | | hostname | The host system's name. | | platform | The host system's operating system platform. | | totalMem | The host system's total available memory. | | Property | Description | |:-|:--| | cpuSpeed | The current speed of the host system's CPU (averaged across all cores), in MHz. | | freeMem | The host system's amount of free (unused) memory, in bytes. | | load1m | The host system's one-minute load average. | | load5m | The host system's five-minute load average. | | load15m | The host system's fifteen-minute load average. | | systemUptime | The host system's uptime, in seconds. | | Property | Description | |:--|:--| | blockInputOpCount | The total number of block input operations on the process. | | blockOutputOpCount | The total number of block output operations on the process. | | cpuSystemPercent | The percent CPU used by the process in system calls. | | cpuUserPercent | The percent CPU used by the process in user code. | | ctxSwitchInvoluntaryCount | The number of involuntary context switches away from the process. | | ctxSwitchVoluntaryCount | The number of voluntary context switches away from the process. | | cpuPercent | The percent CPU used by the process. | | externalMem | The process's memory allocated by Node.js outside of V8's heap, in bytes. This may exceed RSS if large Buffers are soft-allocated by V8. | | ipcReceivedCount | The number of IPC messages received by the process. | | ipcSentCount | The number of IPC messages sent by the process. 
| | pageFaultHardCount | The number of hard page faults triggered by the process. | | pageFaultSoftCount | The number of soft page faults (page reclaims) triggered by the process. | | rss | The resident set size (total memory) used by the process, in bytes. | | signalCount | The number of signals received by the process. | | swapCount | The number of times the process has been swapped out of memory. | | title | The current system title of the process. | | uptime | The process's uptime, in seconds. | | user | The system's user the process is running from. | Note: The memory in \"heap\" is a subset of the resident set size. | Property | Description | |:|:-| | totalAvailableSize | The remaining amount of memory the heap can allocate on the process before hitting the maximum heap size, in bytes. | | totalHeapSizeExecutable | The total amount of executable memory allocated in the process's heap, in bytes. | | totalPhysicalSize | The amount of physical memory currently committed for the heap of the process, in bytes. | | heapSizeLimit | The maximum amount of memory reserved for the heap by the process, as allocated by the host system, in bytes. V8 will terminate with allocation failures if memory is used beyond" }, { "data": "| | heapTotal | The process's total allocated JavaScript heap size, in bytes. | | heapUsed | The process's total used JavaScript heap size, in bytes. | Note: To learn more about event loop utilization visit The Event loop utilization blogpost. | Property | Description | |:--|:-| | loopAvgTasks | The process's average number of async JavaScript entries per event loop cycle. | | loopEstimatedLag | The estimated amount of time a I/O response may have to wait in the process, in milliseconds. | | loopIdlePercent | The percent time that the process is waiting (idle) for I/O or timers. | | loopsPerSecond | The amount of event loop cycles completed in the process, within the last second. | | loopTotalCount | The cumulative count of all event loop cycles in the process. | | Property | Description | |:|:-| | gcCount | The total number of garbage collections done by the process. | | gcCpuPercent | The percent CPU used during garbage collection by the process. | | gcDurUs99Ptile | The process's 99th percentile duration of garbage collections, in microseconds. | | gcDurUsMedian | The process's median duration of garbage collections, in microseconds. | | gcForcedCount | The process's number of externally forced garbage collections. | | gcFullCount | The number of garbage collections run by the process which collected all available garbage. Usually only observed when the heapTotal is approaching heapSizeLimit. | | gcMajorCount | The number of significant garbage collections done by the process. An example of a \"significant\" garbage collection is a \"Mark-Sweep\". | | Property | Description | |:|:--| | dnsCount | The process's total number of DNS lookups performed. | | dnsMedian | The process's median duration of DNS lookups performed, in milliseconds. | | httpClientAbortCount | The process's total number of outgoing HTTP(S) client requests canceled due to inactivity. | | httpClientCount | The process's total number of outgoing HTTP(S) client requests performed. | | httpClientMedian | The process's median duration of outgoing HTTP(S) client requests completed, in milliseconds. | | httpClient99Ptile | The process's 99th percentile duration of outgoing HTTP(s) client requests completed, in milliseconds. 
| | httpServerAbortCount | The process's total number of served incoming HTTP(S) requests canceled. | | httpServerCount | The process's total number of incoming HTTP(s) requests served. | | httpServerMedian | The process's median duration of served incoming HTTP(S) requests completed, in milliseconds. | | httpServer99Ptile | The process's 99th percentile duration of served incoming HTTP(S) requests completed, in milliseconds.. | | Property | Description | |:--|:--| | time | The ISO8601 timestamp representing when a given metrics window completed. | Below are some important terms and phrases related to N|Solid and its ecosystem that appear throughout the documentation. | Term | Description | |:|:-| | Affected Processes Subview | Part of the Security view in the Console, contains an entry for every application affected by a given vulnerability | | Asynchronous | Function timing paradigm powered by libuv that enables Node.js to be non-blocking | | CPU Profile | A sample of an application's stack trace that captures function call frequency and duration | | Containers | Isolated software that includes everything needed to run it: code, runtime, and system-level tools, libraries, and settings. A tool to help standardize application environments across machines | | Docker | A containerization platform provider. Visit their website for more info | | Event Loop | A construct that represents the event-based and phase-controlled asynchronous logic of" }, { "data": "| | Flame Graph | A CPU profiling visualization that shows function call hierarchy and time on-CPU as a measure of width | | Heap Snapshot | A memory profiling tool that outlines the structure and sizes of the objects and variables in memory and accessible to JavaScript | | Image | A lightweight, standalone executable package for running containers | | Integrations | The use of Slack notifications, emails, or custom webhooks to help your team monitor supported application events | | JSON | JavaScript Object Notation; a lightweight, easy to parse data-interchange for JavaScript and the Web | | License Key | A unique identifier for authorizing N|Solid access | | Module | Simple, focused Node.js file with related functions | | Node.js | A JavaScript runtime built on Google's V8; uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Visit their website for more info | | npm | The Node.js and JavaScript package manager. Visit their website for more info | | N|Solid Agent | An advanced native C++ component that runs on its own thread inside your application, with direct access to the core elements of Node.js, libuv and the V8 JavaScript engine | | N|Solid CLI | The Command Line Interface for N|Solid. Read the Command Line Interface section for more details | | N|Solid Console | A web application that provides centralized access to all of your applications, and an aggregated view of each application's processes. This holistic view simplifies triage at runtime, makes it easier to find outliers, and takes advantage of advanced diagnostic tools directly from your browser | | N|Solid Runtime | A build of Node.js bundled with an advanced native C++ component that provides access to detailed metrics and allows you to control application behavior in production | | Package | A suite of Node.js modules packaged together | | Process | An N|Solid instance that has its N|Solid Agent connected to the N|Solid Console | | Process Detail View | A view in the Console that shows detailed information relating to a specific process. 
At the top of the view it displays data that N|Solid receives from the process, including the pid, host, platform, and CPU configuration | | Process Targeting | Causes one minute of the process's recent historic metrics to be visualized as a trail following the process around the graph. Available on the scatterplot in the Processes View. One process can be targeted at a time | | Processes View | A view in the Console that provides a visual overview of all or a subset of N|Solid applications monitored by the Console. Includes the Scatterplot | | Saved Views | A graph defined by a search query. Allow you to view processes that fulfill certain parameters and to instruct N|Solid to take certain actions based on conditions within a given saved view | | Scatterplot | An animated, customizable graph that provides an overview of application performance across all or a subset of connected processes | | Security View | A view in the Console that displays a list of all the vulnerabilities found across all applications | | Sunburst | A CPU profiling visualization that shows function call hierarchy and time on-CPU as a measure of radius | | Tags | Keywords that can be attached to processes that aid in identification and filtering | | Threshold | A performance limit around which actions such as notifications and heap snapshots can be configured | | Treemap | A CPU profiling visualization that shows function call hierarchy and time on-CPU as" } ]
{ "category": "Observability and Analysis", "file_name": "standards-and-conventions.md", "project_name": "OpenTracing", "subcategory": "Observability" }
[ { "data": "The OpenTracing project is archived. Learn more. Migrate to OpenTelemetry today! OpenTracing defines an API through which application instrumentation can log data to a pluggable tracer. In general, there is no guarantee made by OpenTracing about the way that data will be handled by an underlying tracer. So what type of data should be provided to the APIs in order to best ensure compatibility across tracer implementations? A high-level understanding between the instrumentor and tracer developers adds great value: if certain known tag key/values are used for common application scenarios, tracers can choose to pay special attention to them. The same is true of logged events, and span structure in general. As an example, consider the common case of a HTTP-based application server. The URL of an incoming request that the application is handling is often useful for diagnostics, as well as the HTTP verb and the resultant status code. An instrumentor could choose to report the URL in a tag named URL, or perhaps named http.url: either would be valid from the pure API perspective. But if the Tracer wishes to add intelligence, such as indexing on the URL value or sampling proactively for requests to a particular endpoint, it must know where to look for relevant data. In short, when tag names and other instrumentor-provided values are used consistently, the tracers on the other side of the API can employ more intelligence. The guidelines provided here describe a common ground on which instrumentors and tracer authors can build beyond pure data collection. Adherence to the guidelines is optional but highly recommended for instrumentors. The complete list of semantic conventions for instrumentation can be found in the Semantic Conventions specification. If you see an opportunity for additional standardization, please file an issue against the specification repository or raise the point on OpenTracings public Gitter channel." } ]
{ "category": "Observability and Analysis", "file_name": "supported-tracers.md", "project_name": "OpenTracing", "subcategory": "Observability" }
[ { "data": "The OpenTracing project is archived. Learn more. Migrate to OpenTelemetry today! Jaeger \\y-gr\\ is a distributed tracing system, originally open sourced by Uber Technologies. It provides distributed context propagation, distributed transaction monitoring, root cause analysis, service dependency analysis, and performance / latency optimization. Built with OpenTracing support from inception, Jaeger includes OpenTracing client libraries in several languages, including Java, Go, Python, Node.js, C++ and C#. It is a Cloud Native Computing Foundation member project. LightStep operates a SaaS solution with OpenTracing-native tracers in production environments. There are OpenTracing-compatible LightStep Tracers available for Go, Python, Javascript, Objective-C, Java, PHP, Ruby, and C++. Instana provides an APM solution supporting OpenTracing in Crystal, Go, Java, Node.js, Python and Ruby. The Instana OpenTracing tracers are interoperable with the other Instana out of the box tracers for .Net, Crystal, Java, Scala, NodeJs, PHP, Python and Ruby. Apache SkyWalking is an APM (application performance monitor) tool for distributed systems, specially designed for microservices, cloud native and container-based (Docker, K8s, Mesos) architectures. Underlying technology is a distributed tracing system. The SkyWalking javaagent is interoperable with OpenTracing-java APIs. inspectIT aims to be an End-to-End APM solution for Java with support for OpenTracing. The instrumentation capability allows to set up inspectIT in no time with an extensive support for different frameworks and application servers. For more information, take a look at the documentation. Stagemonitor is an open-source tracing, profiling and metrics solution for Java applications. It uses byte code manipulation to automatically trace your application without code changes. Stagemonitor is compatible with various OpenTracing implementations and can report to multiple back-ends like Elasticsearch and Zipkin. It also tracks metrics, like response time and error rates. Datadog APM supports OpenTracing, and aims to provide OpenTracing-compatible tracers for all supported languages. Wavefront is a cloud-native monitoring and analytics platform that provides three dimensional microservices observability with metrics, histograms and OpenTracing-compatible distributed tracing. With minimal code change, developers can now visualize, monitor and analyze key health performance metrics and distributed traces of Java, Python and .NET applications built on common frameworks such as Dropwizard and gRPC. Check out the distributed tracing demo here. Elastic APM is an open source APM solution based on top of the Elastic Stack. Elastic APM agents are available for Java, Node.js, Python, Ruby, Real User Monitoring JavaScript, and Go. Elastic APM records distributed traces, application metrics, and errors in Elasticsearch to be visualized via a curated UI in Kibana, integrating with machine learning and alerting, and seamless correlation with application logs and infrastructure monitoring." } ]
{ "category": "Observability and Analysis", "file_name": "index.html.md", "project_name": "OpenTSDB", "subcategory": "Observability" }
[ { "data": "What's New Welcome to OpenTSDB 2.4, the scalable, distributed time series database. We recommend that you start with the User Guide, then test your understanding with an Installation, and read the HTTP API documentation if you need to develop against it. Documentation for the OpenTSDB 3.0 work-in-progress can be found here: OpenTSDB 3.0." } ]
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Observability and Analysis", "file_name": "docs.md", "project_name": "Opstrace", "subcategory": "Observability" }
[ { "data": "The Opstrace Distribution is a secure, horizontally-scalable, open source observability platform that you can install in your cloud account. Opstrace automates the creation and management of a secure, horizontally-scalable metrics and logs platform. It consists of an installer that runs as a CLI to use your cloud credentials to provision and configure an Opstrace instance in your account, as well as internal components that manage its ongoing lifecycle (repairing and updating components as needed). Opstrace instances expose a horizontally scalable Prometheus API, backed by Cortex, and a Loki API for logs collection. You can point your existing Prometheus or Fluentd/Promtail instances to it. We also plan to support a wide variety of other APIs, such as the Datadog agent. Creating an Opstrace instance requires our command-line interface, which talks directly to your cloud provider. It orchestrates the toilsome process of setting everything up inside your account, for example creating a variety of resources (which includes its own VPC to live in). After your instance is running, our controller component inside the instance will maintain things over time. All of your data resides safely (and inexpensively) in your own S3 or GCS buckets. Frequently whole teams take weeks or months to set up a stack like this, and then it's an ongoing maintenance headache. And even if a team does all of this, they often skimp on certain critical aspects of configuration, such as exposing API endpoints securely or timely upgrades. Opstrace looks a little something like this... First, give our Quick Start a try. It takes about half an hour to spin up the instance, but it's a great way to get a feel for how it works. Furthermore, you don't often need to set up instances since once it's up and running it manages itself. Then you can check out our three guides on the left (User, Administrator, and Contributor) for more details on how to create and use an Opstrace instance. See also Key Concepts to understand the core concepts of an Opstrace instance. Missing something? Check out our issues to see if it's planned, and if not, submit a proposal and/or contact us in our community discussions. Contributions encouraged!" } ]
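Since an Opstrace instance exposes a Cortex-backed Prometheus API, an existing Prometheus server can ship metrics to it with a standard remote_write block. The sketch below is illustrative only: the hostname pattern, tenant, and authentication method are assumptions, so check your instance's actual endpoint and credentials in the Opstrace guides before using it.
```
# prometheus.yml fragment (sketch) -- endpoint and auth are placeholders
remote_write:
  - url: https://cortex.default.<instance-name>.opstrace.io/api/v1/push   # assumed URL pattern
    bearer_token_file: /var/run/tenant-auth/token                         # assumed tenant auth token file
```
The same idea applies to logs: point Promtail or Fluentd at the instance's Loki API instead of the Cortex push endpoint.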
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Pixie", "subcategory": "Observability" }
[ { "data": "Before installing Pixie to your Kubernetes cluster, please ensure that your system meets the requirements below. Please refer to the install guides for information on how to install Pixie to your K8s cluster. Kubernetes v1.21+ is required. The following tables list Kubernetes environments that have been tested with Pixie. | K8s Environment | Support | |:--|:--| | AKS | Supported | | EKS | Supported (includes support on Bottlerocket AMIs) | | EKS Fargate | Not Supported (Fargate does not support eBPF) | | GKE | Supported | | GKE Autopilot | Not Supported (Autopilot does not support eBPF) | | OKE | Supported | | OpenShift | Supported | | kOps | Supported | | Self-hosted | Generally supported, see requirements below including Linux kernel version. | For local development, we recommend using Minikube with a VM driver (kvm2 on Linux, hyperkit on Mac). Note that Kubernetes environments that run inside a container are not currently supported. | K8s Environment | Support | |:--|:--| | Docker Desktop | Not Supported | | Rancher Desktop | Supported for containerd container runtime (not supported for dockerd runtime) | | k0s | Supported | | k3s | Supported | | k3d | Not Supported (k3d runs k3s clusters inside Docker container \"nodes\") | | kind | Not Supported (kind runs K8s clusters inside Docker container \"nodes\") | | minikube with driver=kvm2 | Supported | | minikube with driver=hyperkit | Supported | | minikube with driver=docker | Not Supported | | minikube with driver=none | Not Supported | Pixie runs on Linux nodes only. You can configure Pixie to deploy to a subset of the nodes in your cluster. | Operating System | Support | Version | |:--|:--|:--| | Linux | Supported | v4.14+ | | Windows | Not Supported | Not in roadmap | The following table lists Linux distributions that are known to work with Pixie. | Linux Distribution | Version | |:--|:--| | CentOS | 7.3+ | | Debian | 10+ | | RedHat Enterprise Linux | 8+ | | Ubuntu | 18.04+ | Pixie requires an x86-64 architecture. | Architecture | Support | |:--|:--| | x86-64 | Supported | | ARM | Not Supported | Pixie requires the following memory per node: | Minimum | Notes | |:--|:--| | 1GiB | To accommodate application pods, we recommend using no more than 25% of the nodes' total memory for Pixie. | Pixie deploys its PEMs as a DaemonSet on your cluster in order to collect and store telemetry data. The default memory limit is 2Gi per PEM. The lowest recommended value is 1Gi per PEM. For more information on how to configure Pixie's memory usage, see the Tuning Memory Usage page. Pixie's Vizier module sends outgoing HTTPS/2 requests to Pixie's Cloud on port 443. Your cluster's telemetry data flows through Pixie's Cloud via a reverse proxy as encrypted traffic without any persistence. This allows users to access data without being in the same VPC/network as the cluster. Pixie offers end-to-end encryption for telemetry data in flight. Pixie interacts with the Linux kernel to install BPF programs to collect telemetry data. In order to install BPF programs, Pixie vizier-pem-* pods require privileged access." } ]
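A quick way to check the node-level requirements before installing is to list each node's operating system, kernel version, and architecture with kubectl; the command below uses only standard node status fields.
```
kubectl get nodes -o custom-columns=NAME:.metadata.name,OS:.status.nodeInfo.osImage,KERNEL:.status.nodeInfo.kernelVersion,ARCH:.status.nodeInfo.architecture
```
Nodes should report a v4.14+ kernel, a supported Linux distribution, and an amd64 architecture before you deploy Pixie.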
{ "category": "Observability and Analysis", "file_name": "docs.github.com.md", "project_name": "Opstrace", "subcategory": "Observability" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub." } ]
{ "category": "Observability and Analysis", "file_name": "docs.md", "project_name": "Sematext", "subcategory": "Observability" }
[ { "data": "Welcome to Sematext, a full-stack observability tool where you can combine metrics and logs, with custom alerts, in any way you want! Have everything in one place. If you're new here, read below for a high-level overview of Sematext. Hassle-free log monitoring & analysis Map and monitor your whole infrastructure in real-time Frontend performance and user experience monitoring Monitor APIs, websites and user journeys Get notified via Slack, PagerDuty, WebHooks, email, etc. Capture builds, deployments, restarts, failures & other events Add new reports or customize pre-built dashboards For better filtering and grouping in dynamic infrastructures App vs. Account access, team account, inviting others, setting roles Agent fleet, discovery & automatic setup of services and logs Stay informed about the most recent developments in our product and agent releases. Check the latest product updates Learn about the latest Sematext Agent features Sematext integrations let you collect metrics, logs and events across your whole stack from frontend to backend. Our solution goes beyond collecting metrics and detects anomalies, uncovers your slowest transactions, communication between servers and applications, etc. We expose an Elasticsearch API. Sematext works with all standard log shippers and agents you're already used to, such as syslog, Logstash, Fluentd, Filebeat, Vector, NXLog, log4j and many others, and integrates in minutes. With Experience you can monitor your frontend or website performance and receive alerts when end-user experience is affected by performance. Sematext Experience provides invaluable insights that keep your business in control of how happy your customers are when interacting with your website or webapp. With Synthetics you can monitor your website uptime, API performance and availability, user journeys in your webapp, and more from a number of different locations in the world. Sematext Cloud is a SaaS available in multiple locations, so you can choose where your data is stored. Sematext Enterprise is a non-SaaS version you can deploy on your own infrastructure. Data shipped to Sematext is grouped into Apps. Integrations are ways to monitor, collect logs, and other data from numerous different types of software and tools, as well as deliver data to external systems, such as alert notifications. Sematext provides over 100 built-in integrations used to collect metrics from servers, VMs, containers, logs, services, frontend, send alert notifications, etc. We also provide you with easy-to-install Agents that collect data about your software and send it to Sematext, as well as an open API you can use. If you have questions, we're here to help. Don't hesitate to contact us at support@sematext.com." } ]
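Because Sematext exposes an Elasticsearch-compatible API for logs, anything that can index a JSON document can ship a log event. The example below is only a sketch: the receiver hostname and the convention of using the Logs App token as the index name are assumptions to verify against your Logs App's integration instructions.
```
# Sketch only -- hostname and token-as-index convention are assumptions
curl -X POST "https://logsene-receiver.sematext.com/<LOGS_APP_TOKEN>/example/" \
  -H "Content-Type: application/json" \
  -d '{"message": "hello from curl", "severity": "info", "host": "web-01"}'
```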
{ "category": "Observability and Analysis", "file_name": "using-pixie.md", "project_name": "Pixie", "subcategory": "Observability" }
[ { "data": "The following tutorials demonstrate the different ways you can interact with Pixie's observability platform. All three interfaces execute PxL scripts. Scripts are written using the Pixie Language (PxL). PxL is a DSL that follows the API of the popular Python data processing library Pandas. All PxL is valid Python. PxL scripts can both query telemetry data collected by the Pixie platform and extend the platform's data collection. Pixie provides many open source scripts, which appear under the px/ namespace in Pixie's Live UI and CLI. These community scripts provide the developer community with a broad repository of use-case-specific scripts out of the box. Over time, we hope this grows into a community-driven knowledge base of tools to observe, debug, secure and manage applications. To use Pixie for a specific use case, check the use-case tutorials in the Pixie documentation." } ]
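As a rough illustration of the pandas-style API described above, the sketch below queries recent HTTP spans and renders them as a table. It is a minimal example, not one of the bundled px/ scripts, and the table and column names are assumptions that may differ across Pixie versions.
```
# Minimal PxL sketch -- PxL is valid Python.
import px

df = px.DataFrame(table='http_events', start_time='-5m')   # last 5 minutes of HTTP spans (assumed table name)
df = df[['time_', 'req_path', 'resp_status', 'latency']]    # column names may vary by Pixie version
px.display(df, 'recent_http_requests')
```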
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Sensu", "subcategory": "Observability" }
[ { "data": "Dynamic runtime assets are shareable, reusable packages that make it easier to deploy Sensu plugins. You can use dynamic runtime assets to provide the plugins, libraries, and runtimes you need to automate your monitoring workflows. Sensu supports dynamic runtime assets for checks, filters, mutators, and handlers. Use the Sensu Catalog to find, configure, and install many dynamic runtime assets directly from your browser. Follow the Catalog prompts to configure the Sensu resources you need and start processing your observability data with a few clicks. You can also discover, download, and share dynamic runtime assets using Bonsai, the Sensu asset hub. Read Use assets to install plugins to get started. NOTE: Dynamic runtime assets are not required to use Sensu Go. You can install Sensu plugins using the sensu-install tool or a configuration management solution. The Sensu backend executes handler, filter, and mutator dynamic runtime assets. The Sensu agent executes check dynamic runtime assets. At runtime, the backend or agent sequentially evaluates dynamic runtime assets that appear in the runtime_assets attribute of the handler, filter, mutator, or check being executed. This example shows a dynamic runtime asset resource definition that includes the minimum required attributes: ``` type: Asset api_version: core/v2 metadata: name: check_script spec: builds: sha512: 4f926bf4328fbad2b9cac873d117f771914f4b837c9c85584c38ccf55a3ef3c2e8d154812246e5dda4a87450576b2c58ad9ab40c9e2edc31b288d066b195b21b url: http://example.com/asset.tar.gz``` ``` { \"type\": \"Asset\", \"api_version\": \"core/v2\", \"metadata\": { \"name\": \"check_script\" }, \"spec\": { \"builds\": [ { \"url\": \"http://example.com/asset.tar.gz\", \"sha512\": \"4f926bf4328fbad2b9cac873d117f771914f4b837c9c85584c38ccf55a3ef3c2e8d154812246e5dda4a87450576b2c58ad9ab40c9e2edc31b288d066b195b21b\" } ] } }``` If you use a Sensu package, dynamic runtime assets are installed at /var/cache. If you use a Sensu Docker image, dynamic runtime assets are installed at /var/lib. A dynamic runtime asset build is the combination of an artifact URL, SHA512 checksum, and optional Sensu query expression filters. Each asset definition may describe one or more builds. NOTE: Dynamic runtime assets that provide url and sha512 attributes at the top level of the spec scope are single-build assets, and this form of asset defintion is deprecated. We recommend using multiple-build asset defintions, which specify one or more builds under the spec scope. 
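For assets published on Bonsai, you generally do not need to write the definition by hand: sensuctl can fetch and register it for you. The asset name and version below are illustrative; substitute whatever you find on Bonsai. The full definition that such a command registers looks like the multiple-build example that follows.
```
sensuctl asset add sensu/check-cpu-usage:0.2.2 --rename check-cpu-usage
```
After registration, reference the asset by name in the runtime_assets array of the resources that need it.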
This example shows the resource definition for the sensu/check-cpu-usage dynamic runtime asset, which has multiple builds: ``` type: Asset api_version: core/v2 metadata: name: check-cpu-usage labels: annotations: io.sensu.bonsai.url: https://bonsai.sensu.io/assets/sensu/check-cpu-usage io.sensu.bonsai.api_url: https://bonsai.sensu.io/api/v1/assets/sensu/check-cpu-usage io.sensu.bonsai.tier: Community io.sensu.bonsai.version: 0.2.2 io.sensu.bonsai.namespace: sensu io.sensu.bonsai.name: check-cpu-usage io.sensu.bonsai.tags: '' spec: builds: url: https://assets.bonsai.sensu.io/a7ced27e881989c44522112aa05dd3f25c8f1e49/check-cpu-usage0.2.2windows_amd64.tar.gz sha512: 900cfdf28d6088b929c4bf9a121b628971edee5fa5cbc91a6bc1df3bd9a7f8adb1fcfb7b1ad70589ed5b4f5ec87d9a9a3ba95bcf2acda56b0901406f14f69fe7 filters: entity.system.os == 'windows' entity.system.arch == 'amd64' url: https://assets.bonsai.sensu.io/a7ced27e881989c44522112aa05dd3f25c8f1e49/check-cpu-usage0.2.2darwin_amd64.tar.gz sha512: db81ee70426114e4cd4b3f180f2b0b1e15b4bffc09d7f2b41a571be2422f4399af3fbd2fa2918b8831909ab4bc2d3f58d0aa0d7b197d3a218b2391bb5c1f6913 filters: entity.system.os == 'darwin' entity.system.arch == 'amd64' url: https://assets.bonsai.sensu.io/a7ced27e881989c44522112aa05dd3f25c8f1e49/check-cpu-usage0.2.2linux_armv7.tar.gz sha512: 400aacce297176e69f3a88b0aab0ddfdbe9dd6a37a673cb1774c8d4750a91cf7713a881eef26ea21d200f74cb20818161c773490139e6a6acb92cbd06dee994c filters: entity.system.os == 'linux' entity.system.arch == 'armv7' url: https://assets.bonsai.sensu.io/a7ced27e881989c44522112aa05dd3f25c8f1e49/check-cpu-usage0.2.2linux_arm64.tar.gz sha512: bef7802b121ac2a2a5c5ad169d6003f57d8b4f5e83eae998a0e0dd1e7b89678d4a62e678d153edacdd65fd1d0123b5f51308622690455e77cec6deccfa183397 filters: entity.system.os == 'linux' entity.system.arch == 'arm64' url: https://assets.bonsai.sensu.io/a7ced27e881989c44522112aa05dd3f25c8f1e49/check-cpu-usage0.2.2linux_386.tar.gz sha512: a2dcb5324952567a61d76a2e331c1c16df69ef0e0b9899515dad8d1531b204076ad0c008f59fc2f4735a5a779afb0c1baa132268c41942b203444e377fe8c8e5 filters: entity.system.os == 'linux' entity.system.arch == '386' url: https://assets.bonsai.sensu.io/a7ced27e881989c44522112aa05dd3f25c8f1e49/check-cpu-usage0.2.2linux_amd64.tar.gz sha512: 24539739b5eb19bbab6eda151d0bcc63a0825afdfef3bc1ec3670c7b0a00fbbb2fd006d605a7a038b32269a22026d8947324f2bc0acdf35e8563cf4cb8660d7f filters: entity.system.os == 'linux' entity.system.arch == 'amd64'``` ``` { \"type\": \"Asset\", \"api_version\": \"core/v2\", \"metadata\": { \"name\": \"check-cpu-usage\", \"labels\": null, \"annotations\": { \"io.sensu.bonsai.url\": \"https://bonsai.sensu.io/assets/sensu/check-cpu-usage\", \"io.sensu.bonsai.api_url\": \"https://bonsai.sensu.io/api/v1/assets/sensu/check-cpu-usage\", \"io.sensu.bonsai.tier\": \"Community\", \"io.sensu.bonsai.version\": \"0.2.2\", \"io.sensu.bonsai.namespace\": \"sensu\", \"io.sensu.bonsai.name\": \"check-cpu-usage\", \"io.sensu.bonsai.tags\": \"\" } }, \"spec\": { \"builds\": [ { \"url\": \"https://assets.bonsai.sensu.io/a7ced27e881989c44522112aa05dd3f25c8f1e49/check-cpu-usage0.2.2windows_amd64.tar.gz\", \"sha512\": \"900cfdf28d6088b929c4bf9a121b628971edee5fa5cbc91a6bc1df3bd9a7f8adb1fcfb7b1ad70589ed5b4f5ec87d9a9a3ba95bcf2acda56b0901406f14f69fe7\", \"filters\": [ \"entity.system.os == 'windows'\", \"entity.system.arch == 'amd64'\" ] }, { \"url\": \"https://assets.bonsai.sensu.io/a7ced27e881989c44522112aa05dd3f25c8f1e49/check-cpu-usage0.2.2darwin_amd64.tar.gz\", \"sha512\": 
\"db81ee70426114e4cd4b3f180f2b0b1e15b4bffc09d7f2b41a571be2422f4399af3fbd2fa2918b8831909ab4bc2d3f58d0aa0d7b197d3a218b2391bb5c1f6913\", \"filters\": [ \"entity.system.os == 'darwin'\", \"entity.system.arch == 'amd64'\" ] }, { \"url\": \"https://assets.bonsai.sensu.io/a7ced27e881989c44522112aa05dd3f25c8f1e49/check-cpu-usage0.2.2linux_armv7.tar.gz\", \"sha512\": \"400aacce297176e69f3a88b0aab0ddfdbe9dd6a37a673cb1774c8d4750a91cf7713a881eef26ea21d200f74cb20818161c773490139e6a6acb92cbd06dee994c\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'armv7'\" ] }, { \"url\": \"https://assets.bonsai.sensu.io/a7ced27e881989c44522112aa05dd3f25c8f1e49/check-cpu-usage0.2.2linux_arm64.tar.gz\", \"sha512\": \"bef7802b121ac2a2a5c5ad169d6003f57d8b4f5e83eae998a0e0dd1e7b89678d4a62e678d153edacdd65fd1d0123b5f51308622690455e77cec6deccfa183397\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'arm64'\" ] }, { \"url\": \"https://assets.bonsai.sensu.io/a7ced27e881989c44522112aa05dd3f25c8f1e49/check-cpu-usage0.2.2linux_386.tar.gz\", \"sha512\": \"a2dcb5324952567a61d76a2e331c1c16df69ef0e0b9899515dad8d1531b204076ad0c008f59fc2f4735a5a779afb0c1baa132268c41942b203444e377fe8c8e5\", \"filters\": [ \"entity.system.os == 'linux'\"," }, { "data": "== '386'\" ] }, { \"url\": \"https://assets.bonsai.sensu.io/a7ced27e881989c44522112aa05dd3f25c8f1e49/check-cpu-usage0.2.2linux_amd64.tar.gz\", \"sha512\": \"24539739b5eb19bbab6eda151d0bcc63a0825afdfef3bc1ec3670c7b0a00fbbb2fd006d605a7a038b32269a22026d8947324f2bc0acdf35e8563cf4cb8660d7f\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'amd64'\" ] } ] } }``` This example shows the resource definition for a dynamic runtime asset with a single build: ``` type: Asset api_version: core/v2 metadata: name: checkcpulinux_amd64 labels: origin: bonsai annotations: project_url: https://bonsai.sensu.io/assets/asachs01/sensu-go-cpu-check version: 0.0.3 spec: url: https://assets.bonsai.sensu.io/981307deb10ebf1f1433a80da5504c3c53d5c44f/sensu-go-cpu-check0.0.3linux_amd64.tar.gz sha512: 487ab34b37da8ce76d2657b62d37b35fbbb240c3546dd463fa0c37dc58a72b786ef0ca396a0a12c8d006ac7fa21923e0e9ae63419a4d56aec41fccb574c1a5d3 filters: entity.system.os == 'linux' entity.system.arch == 'amd64' headers: Authorization: 'Bearer {{ .annotations.asset_token | default \"N/A\" }}' X-Forwarded-For: client1, proxy1, proxy2``` ``` { \"type\": \"Asset\", \"api_version\": \"core/v2\", \"metadata\": { \"name\": \"checkcpulinux_amd64\", \"labels\": { \"origin\": \"bonsai\" }, \"annotations\": { \"project_url\": \"https://bonsai.sensu.io/assets/asachs01/sensu-go-cpu-check\", \"version\": \"0.0.3\" } }, \"spec\": { \"url\": \"https://assets.bonsai.sensu.io/981307deb10ebf1f1433a80da5504c3c53d5c44f/sensu-go-cpu-check0.0.3linux_amd64.tar.gz\", \"sha512\": \"487ab34b37da8ce76d2657b62d37b35fbbb240c3546dd463fa0c37dc58a72b786ef0ca396a0a12c8d006ac7fa21923e0e9ae63419a4d56aec41fccb574c1a5d3\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'amd64'\" ], \"headers\": { \"Authorization\": \"Bearer {{ .annotations.asset_token | default \\\"N/A\\\" }}\", \"X-Forwarded-For\": \"client1, proxy1, proxy2\" } } }``` For each build provided in a dynamic runtime asset, Sensu will evaluate any defined filters to determine whether any build matches the agent or backend services environment. If all filters specified on a build evaluate to true, that build is considered a match. 
For dynamic runtime assets with multiple builds, only the first build that matches will be downloaded and installed. Sensu downloads the dynamic runtime asset build on the host system where the asset contents are needed to execute the requested command. For example, if a check definition references a dynamic runtime asset, the Sensu agent that executes the check will download the asset the first time it executes the check. The dynamic runtime asset build the agent downloads will depend on the filter rules associated with each build defined for the asset. Sensu backends follow a similar process when pipeline elements (filters, mutators, and handlers) request dynamic runtime asset installation as part of operation. NOTE: Dynamic runtime asset builds are not downloaded until they are needed for command execution. When Sensu finds a matching build, it downloads the build artifact from the specified URL. If the asset definition includes headers, they are passed along as part of the HTTP request. If the downloaded artifact's SHA512 checksum matches the checksum provided by the build, it is unpacked into the Sensu service's local cache directory. Set the backend or agent's local cache path with the cache-dir configuration option. Disable dynamic runtime assets for an agent with the agent disable-assets configuration option. NOTE: Dynamic runtime asset builds are unpacked into the cache directory that is configured with the cache-dir configuration option. Use the assets-rate-limit and assets-burst-limit configuration options for the agent and backend to configure a global rate limit for fetching dynamic runtime assets. The directory path of each dynamic runtime asset listed in a check, event filter, handler, or mutator resource's runtime_assets array is appended to the PATH before the resource's command is executed. Subsequent check, event filter, handler, or mutator executions look for the dynamic runtime asset in the local cache and ensure that the contents match the configured checksum. 
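As a sketch of how the options mentioned above fit together, an agent configuration might set the cache location and asset fetch limits like this. The option names come from the text above, but the values shown are illustrative rather than recommended defaults.
```
# /etc/sensu/agent.yml (sketch)
cache-dir: "/var/cache/sensu/sensu-agent"   # where asset builds are unpacked
disable-assets: false                       # set true to disable asset installation on this agent
assets-rate-limit: 1.39                     # illustrative: asset fetches per second
assets-burst-limit: 100                     # illustrative: burst allowance for asset fetches
```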
The following example demonstrates a use case with a Sensu check resource and an asset: ``` type: Asset api_version: core/v2 metadata: name: sensu-prometheus-collector spec: builds: url: https://assets.bonsai.sensu.io/ef812286f59de36a40e51178024b81c69666e1b7/sensu-prometheus-collector1.1.6linux_amd64.tar.gz sha512: a70056ca02662fbf2999460f6be93f174c7e09c5a8b12efc7cc42ce1ccb5570ee0f328a2dd8223f506df3b5972f7f521728f7bdd6abf9f6ca2234d690aeb3808 filters: entity.system.os == 'linux'" }, { "data": "== 'amd64' type: CheckConfig api_version: core/v2 metadata: name: prometheus_collector spec: command: \"sensu-prometheus-collector -prom-url http://localhost:9090 -prom-query up\" interval: 10 publish: true outputmetrichandlers: influxdb outputmetricformat: influxdb_line runtime_assets: sensu-prometheus-collector subscriptions: system``` ``` { \"type\": \"Asset\", \"api_version\": \"core/v2\", \"metadata\": { \"name\": \"sensu-email-handler\" }, \"spec\": { \"builds\": [ { \"url\": \"https://assets.bonsai.sensu.io/45eaac0851501a19475a94016a4f8f9688a280f6/sensu-email-handler0.2.0linux_amd64.tar.gz\", \"sha512\": \"d69df76612b74acd64aef8eed2ae10d985f6073f9b014c8115b7896ed86786128c20249fd370f30672bf9a11b041a99adb05e3a23342d3ad80d0c346ec23a946\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'amd64'\" ] } ] } } { \"type\": \"CheckConfig\", \"api_version\": \"core/v2\", \"metadata\": { \"name\": \"prometheus_collector\" }, \"spec\": { \"command\": \"sensu-prometheus-collector -prom-url http://localhost:9090 -prom-query up\", \"handlers\": [ \"influxdb\" ], \"interval\": 10, \"publish\": true, \"outputmetricformat\": \"influxdb_line\", \"runtime_assets\": [ \"sensu-prometheus-collector\" ], \"subscriptions\": [ \"system\" ] } }``` Sensu expects a dynamic runtime asset to be a tar archive (optionally gzipped) that contains one or more executables within a bin folder. Any scripts or executables should be within a bin/ folder in the archive. Read the Sensu Go Plugin template for an example dynamic runtime asset and Bonsai configuration. The following are injected into the execution context: NOTE: You cannot create a dynamic runtime asset by creating an archive of an existing project (as in previous versions of Sensu for plugins from the Sensu Plugins community). Follow the steps outlined in Contributing Assets for Existing Ruby Sensu Plugins, a Sensu Discourse guide. For further examples of Sensu users who have added the ability to use a community plugin as a dynamic runtime asset, read this Discourse post. | system | sensu-backend | sensu-agent | |:|:-|:| | Linux | /var/cache/sensu/sensu-backend | /var/cache/sensu/sensu-agent | | Windows | nan | C:\\ProgramData\\sensu\\cache\\sensu-agent | If the requested dynamic runtime asset is not in the local cache, it is downloaded from the asset URL. The Sensu backend acts as an index of dynamic runtime asset builds, and does not provide storage or hosting for the build artifacts. Sensu expects dynamic runtime assets to be retrieved over HTTP or HTTPS. ``` sensu-example-handler1.0.0linux_amd64 CHANGELOG.md LICENSE README.md bin my-check.sh lib include``` When you download and install a dynamic runtime asset, the asset files are saved to a local path on disk. Most of the time, you wont need to know this path except in cases where you need to provide the full path to dynamic runtime asset files as part of a command argument. 
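On a node where an asset has already been used, you can see this on-disk layout directly under the cache directory; the checksum-named directory in the sketch below is a placeholder for the SHA512 of the downloaded build.
```
ls /var/cache/sensu/sensu-agent/<sha512-checksum>/bin
```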
The dynamic runtime asset directory path includes the asset's checksum, which changes every time the underlying asset artifacts are updated. This would normally require you to manually update the commands for any of your checks, handlers, hooks, or mutators that consume the dynamic runtime asset. However, because the dynamic runtime asset directory path is exposed to asset consumers via environment variables and the assetPath custom function, you can avoid these manual updates. You can retrieve the dynamic runtime asset's path as an environment variable in the command context for checks, handlers, hooks, and mutators. Token substitution with the assetPath custom function is only available for check and hook commands. The Sensu Windows agent uses cmd.exe for the check execution environment. For all other operating systems, the Sensu agent uses the Bourne shell (sh). For each dynamic runtime asset, a corresponding environment variable will be available in the command context. Sensu generates the environment variable name by capitalizing the dynamic runtime asset's complete name, replacing any special characters with underscores, and appending the _PATH suffix. The value of the variable will be the path on disk where the dynamic runtime asset build has been unpacked. Each asset page in Bonsai lists the asset's complete name. This example shows where the complete name for the sensu/http-checks dynamic runtime asset is located in Bonsai. An asset's complete name includes both the part before the forward slash (sometimes called the Bonsai namespace) and the part after the forward slash. Consequently, the environment variable for the sensu/http-checks asset path is: ``` SENSU_HTTP_CHECKS_PATH``` The Linux environment interprets the content between the ${ and } characters as an environment variable name and will substitute the value of that environment variable. For example, to reference the path for the sensu/http-checks asset in your checks, handlers, hooks, and mutators: ``` ${SENSU_HTTP_CHECKS_PATH}``` The Windows console environment interprets the content between paired % characters as an environment variable name and will substitute the value of that environment variable. For example, to reference the path for the sensu/sensu-windows-powershell-checks asset in your checks, handlers, hooks, and mutators: ``` %SENSU_SENSU_WINDOWS_POWERSHELL_CHECKS_PATH%``` The assetPath token substitution function allows you to substitute a dynamic runtime asset's local path on disk so that you will not need to manually update your check or hook commands every time the asset is updated. NOTE: The assetPath function is only available where token substitution is available: the command attribute of a check or hook resource. To access a dynamic runtime asset path in a handler or mutator command, you must use the environment variable. To use the assetPath token substitution function in a Linux environment, place it immediately after the $ character. For example, to use the assetPath function to reference the path for the sensu/http-checks asset in your check or hook resources: ``` ${{assetPath \"sensu/http-checks\"}}``` To use the assetPath token substitution function in a Windows environment, place it between paired % characters. 
For example, to use the assetPath function to reference the path for the sensu/sensu-windows-powershell-checks asset in your check or hook resources: ``` %{{assetPath \"sensu/sensu-windows-powershell-checks\"}}%``` When running PowerShell plugins on Windows, the exit status codes that Sensu captures may not match the expected values. To correctly capture exit status codes from PowerShell plugins distributed as dynamic runtime assets, use the asset path to construct the command. The following example uses the assetPath function for this purpose: ``` type: CheckConfig api_version: core/v2 metadata: name: win-cpu-check spec: command: powershell.exe -ExecutionPolicy ByPass -f %{{assetPath \"sensu/sensu-windows-powershell-checks\"}}%\\bin\\check-windows-cpu-load.ps1 90 95 subscriptions: windows handlers: slack email runtime_assets: sensu/sensu-windows-powershell-checks interval: 10 publish: true``` ``` { \"type\": \"CheckConfig\", \"api_version\": \"core/v2\", \"metadata\": { \"name\": \"win-cpu-check\" }, \"spec\": { \"command\": \"powershell.exe -ExecutionPolicy ByPass -f %{{assetPath \\\"sensu/sensu-windows-powershell-checks\\\"}}%\\\\bin\\\\check-windows-cpu-load.ps1 90 95\", \"subscriptions\": [ \"windows\" ], \"handlers\": [ \"slack\", \"email\" ], \"runtime_assets\": [ \"sensu/sensu-windows-powershell-checks\" ], \"interval\": 10, \"publish\": true } }``` In this example, youll run a script that outputs Hello World: ``` hello-world.sh STRING=\"Hello World\" echo $STRING if [ $? -eq 0 ]; then exit 0 else exit 2 fi``` The first step is to ensure that your directory structure is in place. As noted in Example dynamic runtime asset structure, your script could live in three potential directories in the project: /bin, /lib, or /include. For this example, put your script in the /bin directory. Create the directory sensu-go-hello-world: ``` mkdir sensu-go-hello-world``` Navigate to the sensu-go-hello-world directory: ``` cd sensu-go-hello-world``` Create the directory /bin: ``` mkdir bin``` Copy the script into the /bin directory: ``` cp hello-world.sh bin/``` Confirm that the script is in the /bin directory: ``` tree``` The response should list the" }, { "data": "script in the /bin directory: ``` . bin hello-world.sh``` If you receive a command not found response, install tree and run the command again. Make sure that the script is marked as executable: ``` chmod +x bin/hello-world.sh ``` If you do not receive a response, the command was successful. Now that the script is in the directory, move on to the next step: packaging the sensu-go-hello-world directory as a dynamic runtime asset tarball. Dynamic runtime assets are archives, so packaging the asset requires creating a tar.gz archive of your project. Navigate to the directory you want to tar up. Create the tar.gz archive: ``` tar -C sensu-go-hello-world -cvzf sensu-go-hello-world-0.0.1.tar.gz .``` Generate a SHA512 sum for the tar.gz archive (this is required for the dynamic runtime asset to work): ``` sha512sum sensu-go-hello-world-0.0.1.tar.gz | tee sha512sum.txt``` From here, you can host your dynamic runtime asset wherever youd like. To make the asset available via Bonsai, youll need to host it on GitHub. Learn more in The Hello World of Sensu Assets at the Sensu Community Forum on Discourse. To host your dynamic runtime asset on a different platform like Gitlab or Bitbucket, upload your asset there. You can also use Artifactory or even Apache or NGINX to serve your asset. 
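Once the tarball is hosted, the URL and the checksum recorded in sha512sum.txt plug into an Asset definition like the sketch below; the URL and sha512 values are placeholders to replace with your own hosting URL and checksum.
```
type: Asset
api_version: core/v2
metadata:
  name: sensu-go-hello-world
spec:
  builds:
    - url: https://example.com/assets/sensu-go-hello-world-0.0.1.tar.gz   # placeholder hosting URL
      sha512: "<paste the value from sha512sum.txt>"
      filters:
        - entity.system.os == 'linux'
```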
All thats required for your dynamic runtime asset to work is the URL to the asset and the SHA512 sum for the asset to be downloaded. | type | Unnamed: 1 | |:|:--| | description | Top-level attribute that specifies the sensuctl create resource type. Dynamic runtime assets should always be type Asset. | | required | Required for asset definitions in wrapped-json or yaml format for use with sensuctl create. | | type | String | | example | type: Asset { \"type\": \"Asset\" } | ``` type: Asset``` ``` { \"type\": \"Asset\" }``` | api_version | Unnamed: 1 | |:--|:-| | description | Top-level attribute that specifies the Sensu API group and version. For dynamic runtime assets in this version of Sensu, the api_version should always be core/v2. | | required | Required for asset definitions in wrapped-json or yaml format for use with sensuctl create. | | type | String | | example | apiversion: core/v2 { \"apiversion\": \"core/v2\" } | ``` api_version: core/v2``` ``` { \"api_version\": \"core/v2\" }``` | metadata | Unnamed: 1 | |:|:| | description | Top-level collection of metadata about the dynamic runtime asset, including name, namespace, and created_by as well as custom labels and annotations. The metadata map is always at the top level of the asset definition. This means that in wrapped-json and yaml formats, the metadata scope occurs outside the spec scope. Read metadata attributes for details. | | required | Required for asset definitions in wrapped-json or yaml format for use with sensuctl create. | | type | Map of key-value pairs | | example | metadata: name: checkscript namespace: default createdby: admin labels: region: us-west-1 annotations: playbook: www.example.url { \"metadata\": { \"name\": \"checkscript\", \"namespace\": \"default\", \"createdby\": \"admin\", \"labels\": { \"region\": \"us-west-1\" }, \"annotations\": { \"playbook\": \"www.example.url\" } } } | ``` metadata: name: check_script namespace: default created_by: admin labels: region: us-west-1 annotations: playbook: www.example.url``` ``` { \"metadata\": { \"name\": \"check_script\", \"namespace\": \"default\", \"created_by\": \"admin\", \"labels\": { \"region\": \"us-west-1\" }, \"annotations\": { \"playbook\":" }, { "data": "} } }``` | spec | Unnamed: 1 | |:--|:-| | description | Top-level map that includes the dynamic runtime asset spec attributes. | | required | Required for asset definitions in wrapped-json or yaml format for use with sensuctl create. 
| | type | Map of key-value pairs | | example (multiple builds) | spec: builds: - url: http://example.com/asset-linux-amd64.tar.gz sha512: 487ab34b37da8ce76d2657b62d37b35fbbb240c3546dd463fa0c37dc58a72b786ef0ca396a0a12c8d006ac7fa21923e0e9ae63419a4d56aec41fccb574c1a5d3 filters: - entity.system.os == 'linux' - entity.system.arch == 'amd64' headers: Authorization: Bearer {{ .annotations.assettoken | default \"N/A\" }} X-Forwarded-For: client1, proxy1, proxy2 - url: http://example.com/asset-linux-armv7.tar.gz sha512: 70df8b7e9aa36cf942b972e1781af04815fa560441fcdea1d1538374066a4603fc5566737bfd6c7ffa18314edb858a9f93330a57d430deeb7fd6f75670a8c68b filters: - entity.system.os == 'linux' - entity.system.arch == 'arm' - entity.system.armversion == 7 headers: Authorization: Bearer {{ .annotations.assettoken | default \"N/A\" }} X-Forwarded-For: client1, proxy1, proxy2 { \"spec\": { \"builds\": [ { \"url\": \"http://example.com/asset-linux-amd64.tar.gz\", \"sha512\": \"487ab34b37da8ce76d2657b62d37b35fbbb240c3546dd463fa0c37dc58a72b786ef0ca396a0a12c8d006ac7fa21923e0e9ae63419a4d56aec41fccb574c1a5d3\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'amd64'\" ], \"headers\": { \"Authorization\": \"Bearer {{ .annotations.assettoken | default \\\"N/A\\\" }}\", \"X-Forwarded-For\": \"client1, proxy1, proxy2\" } }, { \"url\": \"http://example.com/asset-linux-armv7.tar.gz\", \"sha512\": \"70df8b7e9aa36cf942b972e1781af04815fa560441fcdea1d1538374066a4603fc5566737bfd6c7ffa18314edb858a9f93330a57d430deeb7fd6f75670a8c68b\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'arm'\", \"entity.system.armversion == 7\" ], \"headers\": { \"Authorization\": \"Bearer {{ .annotations.assettoken | default \\\"N/A\\\" }}\", \"X-Forwarded-For\": \"client1, proxy1, proxy2\" } } ] } } | | example (single build, deprecated) | spec: url: http://example.com/asset.tar.gz sha512: 4f926bf4328fbad2b9cac873d117f771914f4b837c9c85584c38ccf55a3ef3c2e8d154812246e5dda4a87450576b2c58ad9ab40c9e2edc31b288d066b195b21b filters: - entity.system.os == 'linux' - entity.system.arch == 'amd64' headers: Authorization: Bearer {{ .annotations.assettoken | default \"N/A\" }} X-Forwarded-For: client1, proxy1, proxy2 { \"spec\": { \"url\": \"http://example.com/asset.tar.gz\", \"sha512\": \"4f926bf4328fbad2b9cac873d117f771914f4b837c9c85584c38ccf55a3ef3c2e8d154812246e5dda4a87450576b2c58ad9ab40c9e2edc31b288d066b195b21b\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'amd64'\" ], \"headers\": { \"Authorization\": \"Bearer {{ .annotations.assettoken | default \\\"N/A\\\" }}\", \"X-Forwarded-For\": \"client1, proxy1, proxy2\" } } } | ``` spec: builds: url: http://example.com/asset-linux-amd64.tar.gz sha512: 487ab34b37da8ce76d2657b62d37b35fbbb240c3546dd463fa0c37dc58a72b786ef0ca396a0a12c8d006ac7fa21923e0e9ae63419a4d56aec41fccb574c1a5d3 filters: entity.system.os == 'linux' entity.system.arch == 'amd64' headers: Authorization: Bearer {{ .annotations.asset_token | default \"N/A\" }} X-Forwarded-For: client1, proxy1, proxy2 url: http://example.com/asset-linux-armv7.tar.gz sha512: 70df8b7e9aa36cf942b972e1781af04815fa560441fcdea1d1538374066a4603fc5566737bfd6c7ffa18314edb858a9f93330a57d430deeb7fd6f75670a8c68b filters: entity.system.os == 'linux' entity.system.arch == 'arm' entity.system.arm_version == 7 headers: Authorization: Bearer {{ .annotations.asset_token | default \"N/A\" }} X-Forwarded-For: client1, proxy1, proxy2``` ``` { \"spec\": { \"builds\": [ { \"url\": 
\"http://example.com/asset-linux-amd64.tar.gz\", \"sha512\": \"487ab34b37da8ce76d2657b62d37b35fbbb240c3546dd463fa0c37dc58a72b786ef0ca396a0a12c8d006ac7fa21923e0e9ae63419a4d56aec41fccb574c1a5d3\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'amd64'\" ], \"headers\": { \"Authorization\": \"Bearer {{ .annotations.asset_token | default \\\"N/A\\\" }}\", \"X-Forwarded-For\": \"client1, proxy1, proxy2\" } }, { \"url\": \"http://example.com/asset-linux-armv7.tar.gz\", \"sha512\": \"70df8b7e9aa36cf942b972e1781af04815fa560441fcdea1d1538374066a4603fc5566737bfd6c7ffa18314edb858a9f93330a57d430deeb7fd6f75670a8c68b\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'arm'\", \"entity.system.arm_version == 7\" ], \"headers\": { \"Authorization\": \"Bearer {{ .annotations.asset_token | default \\\"N/A\\\" }}\", \"X-Forwarded-For\": \"client1, proxy1, proxy2\" } } ] } }``` ``` spec: url: http://example.com/asset.tar.gz sha512: 4f926bf4328fbad2b9cac873d117f771914f4b837c9c85584c38ccf55a3ef3c2e8d154812246e5dda4a87450576b2c58ad9ab40c9e2edc31b288d066b195b21b filters: entity.system.os == 'linux' entity.system.arch == 'amd64' headers: Authorization: Bearer {{ .annotations.asset_token | default \"N/A\" }} X-Forwarded-For: client1, proxy1, proxy2``` ``` { \"spec\": { \"url\": \"http://example.com/asset.tar.gz\", \"sha512\": \"4f926bf4328fbad2b9cac873d117f771914f4b837c9c85584c38ccf55a3ef3c2e8d154812246e5dda4a87450576b2c58ad9ab40c9e2edc31b288d066b195b21b\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'amd64'\" ], \"headers\": { \"Authorization\": \"Bearer {{ .annotations.asset_token | default \\\"N/A\\\" }}\", \"X-Forwarded-For\": \"client1, proxy1, proxy2\" } } }``` | name | Unnamed: 1 | |:|:| | description | Unique name of the dynamic runtime asset, validated with Go regex \\A[\\w\\.\\-]+\\z. | | required | true | | type | String | | example | name: checkscript { \"name\": \"checkscript\" } | ``` name: check_script``` ``` { \"name\": \"check_script\" }``` | namespace | Unnamed: 1 | |:|:-| | description | Sensu RBAC namespace that the dynamic runtime asset belongs" }, { "data": "| | required | false | | type | String | | default | default | | example | namespace: production { \"namespace\": \"production\" } | ``` namespace: production``` ``` { \"namespace\": \"production\" }``` | created_by | Unnamed: 1 | |:-|:--| | description | Username of the Sensu user who created the dynamic runtime asset or last updated the asset. Sensu automatically populates the created_by field when the dynamic runtime asset is created or updated. | | required | false | | type | String | | example | createdby: admin { \"createdby\": \"admin\" } | ``` created_by: admin``` ``` { \"created_by\": \"admin\" }``` | labels | Unnamed: 1 | |:|:| | description | Custom attributes to include with observation event data that you can use for response and web UI view filtering.If you include labels in your event data, you can filter API responses, sensuctl responses, and web UI views based on them. In other words, labels allow you to create meaningful groupings for your data.Limit labels to metadata you need to use for response filtering. For complex, non-identifying metadata that you will not need to use in response filtering, use annotations rather than labels. | | required | false | | type | Map of key-value pairs. Keys can contain only letters, numbers, and underscores and must start with a letter. Values can be any valid UTF-8 string. 
| | default | nan | | example | labels: environment: development region: us-west-2 { \"labels\": { \"environment\": \"development\", \"region\": \"us-west-2\" } } | ``` labels: environment: development region: us-west-2``` ``` { \"labels\": { \"environment\": \"development\", \"region\": \"us-west-2\" } }``` | annotations | Unnamed: 1 | |:--|:-| | description | Non-identifying metadata to include with observation event data that you can access with event filters. You can use annotations to add data thats meaningful to people or external tools that interact with Sensu.In contrast to labels, you cannot use annotations in API response filtering, sensuctl response filtering, or web UI views. | | required | false | | type | Map of key-value pairs. Keys and values can be any valid UTF-8 string. | | default | nan | | example | annotations: managed-by: ops playbook: www.example.url { \"annotations\": { \"managed-by\": \"ops\", \"playbook\": \"www.example.url\" } } | ``` annotations: managed-by: ops playbook: www.example.url``` ``` { \"annotations\": { \"managed-by\": \"ops\", \"playbook\": \"www.example.url\" } }``` | builds | Unnamed: 1 | |:|:-| | description | List of dynamic runtime asset builds used to define multiple artifacts that provide the named asset. | | required | true, if url, sha512 and filters are not provided | | type | Array | | example | builds: - url: http://example.com/asset-linux-amd64.tar.gz sha512: 487ab34b37da8ce76d2657b62d37b35fbbb240c3546dd463fa0c37dc58a72b786ef0ca396a0a12c8d006ac7fa21923e0e9ae63419a4d56aec41fccb574c1a5d3 filters: - entity.system.os == 'linux' - entity.system.arch == 'amd64' - url: http://example.com/asset-linux-armv7.tar.gz sha512: 70df8b7e9aa36cf942b972e1781af04815fa560441fcdea1d1538374066a4603fc5566737bfd6c7ffa18314edb858a9f93330a57d430deeb7fd6f75670a8c68b filters: - entity.system.os == 'linux' - entity.system.arch == 'arm' - entity.system.armversion == 7 { \"builds\": [ { \"url\": \"http://example.com/asset-linux-amd64.tar.gz\", \"sha512\": \"487ab34b37da8ce76d2657b62d37b35fbbb240c3546dd463fa0c37dc58a72b786ef0ca396a0a12c8d006ac7fa21923e0e9ae63419a4d56aec41fccb574c1a5d3\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'amd64'\" ] }, { \"url\": \"http://example.com/asset-linux-armv7.tar.gz\", \"sha512\": \"70df8b7e9aa36cf942b972e1781af04815fa560441fcdea1d1538374066a4603fc5566737bfd6c7ffa18314edb858a9f93330a57d430deeb7fd6f75670a8c68b\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'arm'\", \"entity.system.armversion == 7\" ] } ] } | ``` builds: url: http://example.com/asset-linux-amd64.tar.gz sha512: 487ab34b37da8ce76d2657b62d37b35fbbb240c3546dd463fa0c37dc58a72b786ef0ca396a0a12c8d006ac7fa21923e0e9ae63419a4d56aec41fccb574c1a5d3 filters: entity.system.os == 'linux' entity.system.arch == 'amd64' url: http://example.com/asset-linux-armv7.tar.gz sha512: 70df8b7e9aa36cf942b972e1781af04815fa560441fcdea1d1538374066a4603fc5566737bfd6c7ffa18314edb858a9f93330a57d430deeb7fd6f75670a8c68b filters: entity.system.os == 'linux' entity.system.arch == 'arm' entity.system.arm_version == 7``` ``` { \"builds\": [ { \"url\": \"http://example.com/asset-linux-amd64.tar.gz\", \"sha512\": \"487ab34b37da8ce76d2657b62d37b35fbbb240c3546dd463fa0c37dc58a72b786ef0ca396a0a12c8d006ac7fa21923e0e9ae63419a4d56aec41fccb574c1a5d3\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'amd64'\" ] }, { \"url\": \"http://example.com/asset-linux-armv7.tar.gz\", \"sha512\": 
\"70df8b7e9aa36cf942b972e1781af04815fa560441fcdea1d1538374066a4603fc5566737bfd6c7ffa18314edb858a9f93330a57d430deeb7fd6f75670a8c68b\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'arm'\"," }, { "data": "== 7\" ] } ] }``` | url | Unnamed: 1 | |:|:| | description | URL location of the dynamic runtime asset. You can use token substitution in the URLs of your asset definitions so each backend or agent can download dynamic runtime assets from the appropriate URL without duplicating your assets (for example, if you want to host your assets at different datacenters). | | required | true, unless builds are provided | | type | String | | example | url: http://example.com/asset.tar.gz { \"url\": \"http://example.com/asset.tar.gz\" } | ``` url: http://example.com/asset.tar.gz``` ``` { \"url\": \"http://example.com/asset.tar.gz\" }``` | sha512 | Unnamed: 1 | |:|:-| | description | Checksum of the dynamic runtime asset. | | required | true, unless builds are provided | | type | String | | example | sha512: 4f926bf4328... { \"sha512\": \"4f926bf4328...\" } | ``` sha512: 4f926bf4328...``` ``` { \"sha512\": \"4f926bf4328...\" }``` | filters | Unnamed: 1 | |:|:| | description | Set of Sensu query expressions used to determine if the dynamic runtime asset should be installed. If multiple expressions are included, each expression must return true for Sensu to install the asset.Filters for check dynamic runtime assets should match agent entity platforms. Filters for handler and filter dynamic runtime assets should match your Sensu backend platform. You can create asset filter expressions using any supported entity.system attributes, including os, arch, platform, and platform_family. PRO TIP: Dynamic runtime asset filters let you reuse checks across platforms safely. Assign dynamic runtime assets for multiple platforms to a single check, and rely on asset filters to ensure that only the appropriate asset is installed on each agent. | | required | false | | type | Array | | example | filters: - entity.system.os=='linux' - entity.system.arch=='amd64' { \"filters\": [ \"entity.system.os=='linux'\", \"entity.system.arch=='amd64'\" ] } | PRO TIP: Dynamic runtime asset filters let you reuse checks across platforms safely. Assign dynamic runtime assets for multiple platforms to a single check, and rely on asset filters to ensure that only the appropriate asset is installed on each agent. ``` filters: entity.system.os=='linux' entity.system.arch=='amd64'``` ``` { \"filters\": [ \"entity.system.os=='linux'\", \"entity.system.arch=='amd64'\" ] }``` | headers | Unnamed: 1 | |:|:| | description | HTTP headers to apply to dynamic runtime asset retrieval requests. You can use headers to access secured dynamic runtime assets. For headers that require multiple values, separate the values with a comma. You can use token substitution in your dynamic runtime asset headers (for example, to include secure information for authentication). 
| | required | false | | type | Map of key-value string pairs | | example | headers: Authorization: Bearer {{ .annotations.assettoken | default \"N/A\" }} X-Forwarded-For: client1, proxy1, proxy2 { \"headers\": { \"Authorization\": \"Bearer {{ .annotations.assettoken | default \\\"N/A\\\" }}\", \"X-Forwarded-For\": \"client1, proxy1, proxy2\" } } | ``` headers: Authorization: Bearer {{ .annotations.asset_token | default \"N/A\" }} X-Forwarded-For: client1, proxy1, proxy2``` ``` { \"headers\": { \"Authorization\": \"Bearer {{ .annotations.asset_token | default \\\"N/A\\\" }}\", \"X-Forwarded-For\": \"client1, proxy1, proxy2\" } }``` Use the entity.system attributes in dynamic runtime asset filters to specify which systems and configurations an asset or asset builds can be used with. For example, the sensu/sensu-ruby-runtime dynamic runtime asset definition includes several builds, each with filters for several entity.system attributes: ``` type: Asset api_version: core/v2 metadata: name: sensu-ruby-runtime labels: annotations: io.sensu.bonsai.url: https://bonsai.sensu.io/assets/sensu/sensu-ruby-runtime io.sensu.bonsai.api_url: https://bonsai.sensu.io/api/v1/assets/sensu/sensu-ruby-runtime io.sensu.bonsai.tier: Community io.sensu.bonsai.version: 0.0.10 io.sensu.bonsai.namespace: sensu io.sensu.bonsai.name: sensu-ruby-runtime" }, { "data": "'' spec: builds: url: https://assets.bonsai.sensu.io/5123017d3dadf2067fa90fc28275b92e9b586885/sensu-ruby-runtime0.0.10ruby-2.4.4centos6linux_amd64.tar.gz sha512: cbee19124b7007342ce37ff9dfd4a1dde03beb1e87e61ca2aef606a7ad3c9bd0bba4e53873c07afa5ac46b0861967a9224511b4504dadb1a5e8fb687e9495304 filters: entity.system.os == 'linux' entity.system.arch == 'amd64' entity.system.platform_family == 'rhel' parseInt(entity.system.platform_version.split('.')[0]) == 6 url: https://assets.bonsai.sensu.io/5123017d3dadf2067fa90fc28275b92e9b586885/sensu-ruby-runtime0.0.10ruby-2.4.4debianlinux_amd64.tar.gz sha512: a28952fd93fc63db1f8988c7bc40b0ad815eb9f35ef7317d6caf5d77ecfbfd824a9db54184400aa0c81c29b34cb48c7e8c6e3f17891aaf84cafa3c134266a61a filters: entity.system.os == 'linux' entity.system.arch == 'amd64' entity.system.platform_family == 'debian' url: https://assets.bonsai.sensu.io/5123017d3dadf2067fa90fc28275b92e9b586885/sensu-ruby-runtime0.0.10ruby-2.4.4alpinelinux_amd64.tar.gz sha512: 8d768d1fba545898a8d09dca603457eb0018ec6829bc5f609a1ea51a2be0c4b2d13e1aa46139ecbb04873449e4c76f463f0bdfbaf2107caf37ab1c8db87d5250 filters: entity.system.os == 'linux' entity.system.arch == 'amd64' entity.system.platform == 'alpine' entity.system.platform_version.split('.')[0] == '3'``` ``` { \"type\": \"Asset\", \"api_version\": \"core/v2\", \"metadata\": { \"name\": \"sensu-ruby-runtime\", \"labels\": null, \"annotations\": { \"io.sensu.bonsai.url\": \"https://bonsai.sensu.io/assets/sensu/sensu-ruby-runtime\", \"io.sensu.bonsai.api_url\": \"https://bonsai.sensu.io/api/v1/assets/sensu/sensu-ruby-runtime\", \"io.sensu.bonsai.tier\": \"Community\", \"io.sensu.bonsai.version\": \"0.0.10\", \"io.sensu.bonsai.namespace\": \"sensu\", \"io.sensu.bonsai.name\": \"sensu-ruby-runtime\", \"io.sensu.bonsai.tags\": \"\" } }, \"spec\": { \"builds\": [ { \"url\": \"https://assets.bonsai.sensu.io/5123017d3dadf2067fa90fc28275b92e9b586885/sensu-ruby-runtime0.0.10ruby-2.4.4centos6linux_amd64.tar.gz\", \"sha512\": \"cbee19124b7007342ce37ff9dfd4a1dde03beb1e87e61ca2aef606a7ad3c9bd0bba4e53873c07afa5ac46b0861967a9224511b4504dadb1a5e8fb687e9495304\", \"filters\": [ \"entity.system.os == 'linux'\", 
\"entity.system.arch == 'amd64'\", \"entity.system.platform_family == 'rhel'\", \"parseInt(entity.system.platform_version.split('.')[0]) == 6\" ] }, { \"url\": \"https://assets.bonsai.sensu.io/5123017d3dadf2067fa90fc28275b92e9b586885/sensu-ruby-runtime0.0.10ruby-2.4.4debianlinux_amd64.tar.gz\", \"sha512\": \"a28952fd93fc63db1f8988c7bc40b0ad815eb9f35ef7317d6caf5d77ecfbfd824a9db54184400aa0c81c29b34cb48c7e8c6e3f17891aaf84cafa3c134266a61a\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'amd64'\", \"entity.system.platform_family == 'debian'\" ] }, { \"url\": \"https://assets.bonsai.sensu.io/5123017d3dadf2067fa90fc28275b92e9b586885/sensu-ruby-runtime0.0.10ruby-2.4.4alpinelinux_amd64.tar.gz\", \"sha512\": \"8d768d1fba545898a8d09dca603457eb0018ec6829bc5f609a1ea51a2be0c4b2d13e1aa46139ecbb04873449e4c76f463f0bdfbaf2107caf37ab1c8db87d5250\", \"filters\": [ \"entity.system.os == 'linux'\", \"entity.system.arch == 'amd64'\", \"entity.system.platform == 'alpine'\", \"entity.system.platform_version.split('.')[0] == '3'\" ] } ] } }``` In this example, if you install the dynamic runtime asset on a system running Linux AMD64 Alpine version 3.xx, Sensu will ignore the first two builds and install the third. NOTE: Sensu downloads and installs the first build whose filter expressions all evaluate as true. If your system happens to match all of the filters for more than one build of a dynamic runtime asset, Sensu will only install the first build. All of the dynamic runtime asset filter expressions must evaluate as true for Sensu to download and install the asset and run the check, handler, or filter that references the asset. To continue this example, if you try to install the dynamic runtime asset on a system running Linux AMD64 Alpine version 2.xx, the entity.system.platform_version argument will evaluate as false. In this case, the asset will not be downloaded and the check, handler, or filter that references the asset will fail to run. Add dynamic runtime asset filters to specify that an asset is compiled for any of the entity.system attributes, including operating system, platform, platform version, and architecture. Then, you can rely on dynamic runtime asset filters to ensure that you install only the appropriate asset for each of your agents. Share your open-source dynamic runtime assets on Bonsai and connect with the Sensu community. Bonsai supports dynamic runtime assets hosted on GitHub and released using GitHub releases. For more information about creating Sensu plugins, read the plugins reference. Bonsai requires a bonsai.yml configuration file in the root directory of your repository that includes the project description, platforms, asset filenames, and SHA-512 checksums. For a Bonsai-compatible dynamic runtime asset template using Go and GoReleaser, review the Sensu Go plugin skeleton. To share your dynamic runtime asset on Bonsai, log in to Bonsai with your GitHub account and authorize Sensu. After you are logged in, you can register your dynamic runtime asset on Bonsai by adding the GitHub repository, a description, and tags. Make sure to provide a helpful README for your dynamic runtime asset with configuration examples. 
``` description: \"#{repo}\" builds: platform: \"linux\" arch: \"amd64\" assetfilename: \"#{repo}#{version}linuxamd64.tar.gz\" shafilename: \"#{repo}#{version}_sha512-checksums.txt\" filter: \"entity.system.os == 'linux'\" \"entity.system.arch == 'amd64'\" platform: \"Windows\" arch: \"amd64\" assetfilename: \"#{repo}#{version}windowsamd64.tar.gz\" shafilename: \"#{repo}#{version}_sha512-checksums.txt\" filter: \"entity.system.os == 'windows'\" \"entity.system.arch == 'amd64'\"``` | description | Unnamed: 1 | |:--|:--| | description | Project" }, { "data": "| | required | true | | type | String | | example | description: \"#{repo}\" | ``` description: \"#{repo}\"``` | builds | Unnamed: 1 | |:|:| | description | Array of dynamic runtime asset details per platform. | | required | true | | type | Array | | example | builds: - platform: \"linux\" arch: \"amd64\" assetfilename: \"#{repo}#{version}linuxamd64.tar.gz\" shafilename: \"#{repo}#{version}_sha512-checksums.txt\" filter: - \"entity.system.os == 'linux'\" - \"entity.system.arch == 'amd64'\" | ``` builds: platform: \"linux\" arch: \"amd64\" assetfilename: \"#{repo}#{version}linuxamd64.tar.gz\" shafilename: \"#{repo}#{version}_sha512-checksums.txt\" filter: \"entity.system.os == 'linux'\" \"entity.system.arch == 'amd64'\"``` | platform | Unnamed: 1 | |:|:-| | description | Platform supported by the dynamic runtime asset. | | required | true | | type | String | | example | - platform: \"linux\" | ``` platform: \"linux\"``` | arch | Unnamed: 1 | |:|:--| | description | Architecture supported by the dynamic runtime asset. | | required | true | | type | String | | example | arch: \"amd64\" | ``` arch: \"amd64\"``` | asset_filename | Unnamed: 1 | |:--|:| | description | File name of the archive that contains the dynamic runtime asset. | | required | true | | type | String | | example | assetfilename: \"#{repo}#{version}linuxamd64.tar.gz\" | ``` assetfilename: \"#{repo}#{version}linuxamd64.tar.gz\"``` | sha_filename | Unnamed: 1 | |:|:--| | description | SHA-512 checksum for the dynamic runtime asset archive. | | required | true | | type | String | | example | shafilename: \"#{repo}#{version}_sha512-checksums.txt\" | ``` shafilename: \"#{repo}#{version}_sha512-checksums.txt\"``` | filter | Unnamed: 1 | |:|:--| | description | Filter expressions that describe the operating system and architecture supported by the asset. | | required | false | | type | Array | | example | filter: - \"entity.system.os == 'linux'\" - \"entity.system.arch == 'amd64'\" | ``` filter: \"entity.system.os == 'linux'\" \"entity.system.arch == 'amd64'\"``` Delete dynamic runtime assets with a DELETE request to the /assets API endpoint or with the sensuctl asset delete command. Removing a dynamic runtime asset from Sensu does not remove references to the deleted asset in any other resource (including checks, filters, mutators, handlers, and hooks). You must also update resources and remove any reference to the deleted dynamic runtime asset. Failure to do so will result in errors like sh: asset.sh: command not found. Errors as a result of failing to remove the dynamic runtime asset from checks and hooks will surface in the event data. Errors as a result of failing to remove the dynamic runtime asset reference on a mutator, handler, or filter will only surface in the backend logs. Deleting a dynamic runtime asset does not delete the archive or downloaded files on disk. You must remove the archive and downloaded files from the asset cache manually. 
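To make that clean-up step concrete, below is a minimal sketch of a check definition that names a dynamic runtime asset in its runtime_assets list (the attribute Sensu Go checks use to declare asset dependencies). The check name, command, and asset name are hypothetical; the point is that this reference would need to be removed or updated if the asset itself were deleted, otherwise the check would fail with a command-not-found error as described above.

```
type: CheckConfig
api_version: core/v2
metadata:
  name: check_disk            # hypothetical check name
  namespace: default
spec:
  command: check-disk-usage.rb -w 80 -c 90   # hypothetical command shipped by a plugin asset
  interval: 60
  subscriptions:
    - system
  runtime_assets:
    - sensu-ruby-runtime      # remove or update this reference if the asset is deleted
```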
[ { "data": "Prometheus is configured via command-line flags and a configuration file. While the command-line flags configure immutable system parameters (such as storage locations, amount of data to keep on disk and in memory, etc.), the configuration file defines everything related to scraping jobs and their instances, as well as which rule files to load. To view all available command-line flags, run ./prometheus -h. Prometheus can reload its configuration at runtime. If the new configuration is not well-formed, the changes will not be applied. A configuration reload is triggered by sending a SIGHUP to the Prometheus process or sending a HTTP POST request to the /-/reload endpoint (when the --web.enable-lifecycle flag is enabled). This will also reload any configured rule files. To specify which configuration file to load, use the --config.file flag. The file is written in YAML format, defined by the scheme described below. Brackets indicate that a parameter is optional. For non-list parameters the value is set to the specified default. Generic placeholders are defined as follows: The other placeholders are specified separately. A valid example file can be found here. The global configuration specifies parameters that are valid in all other configuration contexts. They also serve as defaults for other configuration sections. ``` global: [ scrape_interval: <duration> | default = 1m ] [ scrape_timeout: <duration> | default = 10s ] [ scrape_protocols: [<string>, ...] | default = [ OpenMetricsText1.0.0, OpenMetricsText0.0.1, PrometheusText0.0.4 ] ] [ evaluation_interval: <duration> | default = 1m ] external_labels: [ <labelname>: <labelvalue> ... ] [ querylogfile: <string> ] [ bodysizelimit: <size> | default = 0 ] [ sample_limit: <int> | default = 0 ] [ label_limit: <int> | default = 0 ] [ labelnamelength_limit: <int> | default = 0 ] [ labelvaluelength_limit: <int> | default = 0 ] [ target_limit: <int> | default = 0 ] [ keepdroppedtargets: <int> | default = 0 ] rule_files: [ - <filepath_glob> ... ] scrapeconfigfiles: [ - <filepath_glob> ... ] scrape_configs: [ - <scrape_config> ... ] alerting: alertrelabelconfigs: [ - <relabel_config> ... ] alertmanagers: [ - <alertmanager_config> ... ] remote_write: [ - <remote_write> ... ] remote_read: [ - <remote_read> ... ] storage: [ tsdb: <tsdb> ] [ exemplars: <exemplars> ] tracing: [ <tracing_config> ] ``` A scrape_config section specifies a set of targets and parameters describing how to scrape them. In the general case, one scrape configuration specifies a single job. In advanced configurations, this may change. Targets may be statically configured via the static_configs parameter or dynamically discovered using one of the supported service-discovery mechanisms. Additionally, relabel_configs allow advanced modifications to any target and its labels before scraping. ``` jobname: <jobname> [ scrapeinterval: <duration> | default = <globalconfig.scrape_interval> ] [ scrapetimeout: <duration> | default = <globalconfig.scrape_timeout> ] [ scrapeprotocols: [<string>, ...] 
| default = <globalconfig.scrape_protocols> ] [ scrapeclassichistograms: <boolean> | default = false ] [ metrics_path: <path> | default = /metrics ] [ honor_labels: <boolean> | default = false ] [ honor_timestamps: <boolean> | default = true ] [ tracktimestampsstaleness: <boolean> | default = false ] [ scheme: <scheme> | default = http ] params: [ <string>: [<string>," }, { "data": "] [ enable_compression: <boolean> | default = true ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] azuresdconfigs: [ - <azuresdconfig> ... ] consulsdconfigs: [ - <consulsdconfig> ... ] digitaloceansdconfigs: [ - <digitaloceansdconfig> ... ] dockersdconfigs: [ - <dockersdconfig> ... ] dockerswarmsdconfigs: [ - <dockerswarmsdconfig> ... ] dnssdconfigs: [ - <dnssdconfig> ... ] ec2sdconfigs: [ - <ec2sdconfig> ... ] eurekasdconfigs: [ - <eurekasdconfig> ... ] filesdconfigs: [ - <filesdconfig> ... ] gcesdconfigs: [ - <gcesdconfig> ... ] hetznersdconfigs: [ - <hetznersdconfig> ... ] httpsdconfigs: [ - <httpsdconfig> ... ] ionossdconfigs: [ - <ionossdconfig> ... ] kubernetessdconfigs: [ - <kubernetessdconfig> ... ] kumasdconfigs: [ - <kumasdconfig> ... ] lightsailsdconfigs: [ - <lightsailsdconfig> ... ] linodesdconfigs: [ - <linodesdconfig> ... ] marathonsdconfigs: [ - <marathonsdconfig> ... ] nervesdconfigs: [ - <nervesdconfig> ... ] nomadsdconfigs: [ - <nomadsdconfig> ... ] openstacksdconfigs: [ - <openstacksdconfig> ... ] ovhcloudsdconfigs: [ - <ovhcloudsdconfig> ... ] puppetdbsdconfigs: [ - <puppetdbsdconfig> ... ] scalewaysdconfigs: [ - <scalewaysdconfig> ... ] serversetsdconfigs: [ - <serversetsdconfig> ... ] tritonsdconfigs: [ - <tritonsdconfig> ... ] uyunisdconfigs: [ - <uyunisdconfig> ... ] static_configs: [ - <static_config> ... ] relabel_configs: [ - <relabel_config> ... ] metricrelabelconfigs: [ - <relabel_config> ... ] [ bodysizelimit: <size> | default = 0 ] [ sample_limit: <int> | default = 0 ] [ label_limit: <int> | default = 0 ] [ labelnamelength_limit: <int> | default = 0 ] [ labelvaluelength_limit: <int> | default = 0 ] [ target_limit: <int> | default = 0 ] [ keepdroppedtargets: <int> | default = 0 ] [ nativehistogrambucket_limit: <int> | default = 0 ] [ nativehistogramminbucketfactor: <float> | default = 0 ] ``` Where <job_name> must be unique across all scrape configurations. A tls_config allows configuring TLS connections. ``` [ ca: <string> ] [ ca_file: <filename> ] [ cert: <string> ] [ cert_file: <filename> ] [ key: <secret> ] [ key_file: <filename> ] [ server_name: <string> ] [ insecureskipverify: <boolean> ] [ min_version: <string> ] [ max_version: <string> ] ``` OAuth 2.0 authentication using the client credentials grant type. Prometheus fetches an access token from the specified endpoint with the given client access and secret keys. ``` client_id: <string> [ client_secret: <secret> ] [ clientsecretfile: <filename> ] scopes: [ - <string> ... ] token_url: <string> endpoint_params: [ <string>: <string> ... 
] tls_config: [ <tls_config> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] ``` Azure SD configurations allow retrieving scrape targets from Azure VMs. The following meta labels are available on targets during relabeling: See below for the configuration options for Azure discovery: ``` [ environment: <string> | default = AzurePublicCloud ] [ authentication_method: <string> | default = OAuth] subscription_id: <string> [ tenant_id: <string> ] [ client_id: <string> ] [ client_secret: <secret> ] [ resource_group: <string> ] [ refresh_interval: <duration> | default = 300s ] [ port: <int> | default = 80 ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] ``` Consul SD configurations allow retrieving scrape targets from Consul's Catalog" }, { "data": "The following meta labels are available on targets during relabeling: ``` [ server: <host> | default = \"localhost:8500\" ] [ path_prefix: <string> ] [ token: <secret> ] [ datacenter: <string> ] [ namespace: <string> ] [ partition: <string> ] [ scheme: <string> | default = \"http\" ] [ username: <string> ] [ password: <secret> ] services: [ - <string> ] tags: [ - <string> ] [ node_meta: [ <string>: <string> ... ] ] [ tag_separator: <string> | default = , ] [ allow_stale: <boolean> | default = true ] [ refresh_interval: <duration> | default = 30s ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] ``` Note that the IP number and port used to scrape the targets is assembled as <meta_consul_address>:<metaconsulservice_port>. However, in some Consul setups, the relevant address is in metaconsulservice_address. In those cases, you can use the relabel feature to replace the special address label. The relabeling phase is the preferred and more powerful way to filter services or nodes for a service based on arbitrary labels. For users with thousands of services it can be more efficient to use the Consul API directly which has basic support for filtering nodes (currently by node metadata and a single tag). DigitalOcean SD configurations allow retrieving scrape targets from DigitalOcean's Droplets API. This service discovery uses the public IPv4 address by default, by that can be changed with relabeling, as demonstrated in the Prometheus digitalocean-sd configuration file. 
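As a minimal sketch of the DigitalOcean mechanism just described (the meta labels and the full option list follow below), a scrape job might look like the following. The job name, API token, and port are placeholders, and the port assumes an exporter is already listening on each droplet.

```
scrape_configs:
  - job_name: digitalocean-droplets        # hypothetical job name
    digitalocean_sd_configs:
      - authorization:
          credentials: "<your-digitalocean-api-token>"   # placeholder token
        port: 9100                         # assumes an exporter on the droplets
        refresh_interval: 60s
```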
The following meta labels are available on targets during relabeling: ``` basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] [ port: <int> | default = 80 ] [ refresh_interval: <duration> | default = 60s ] ``` Docker SD configurations allow retrieving scrape targets from Docker Engine hosts. This SD discovers \"containers\" and will create a target for each network IP and port the container is configured to expose. Available meta labels: See below for the configuration options for Docker discovery: ``` host: <string> [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] tls_config: [ <tls_config> ] [ port: <int> | default = 80 ] [ hostnetworkinghost: <string> | default = \"localhost\" ] [ filters: [ - name: <string> values: <string>, [...] ] [ refresh_interval: <duration> | default = 60s ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] ``` The relabeling phase is the preferred and more powerful way to filter" }, { "data": "For users with thousands of containers it can be more efficient to use the Docker API directly which has basic support for filtering containers (using filters). See this example Prometheus configuration file for a detailed example of configuring Prometheus for Docker Engine. Docker Swarm SD configurations allow retrieving scrape targets from Docker Swarm engine. One of the following roles can be configured to discover targets: The services role discovers all Swarm services and exposes their ports as targets. For each published port of a service, a single target is generated. If a service has no published ports, a target per service is created using the port parameter defined in the SD configuration. Available meta labels: The tasks role discovers all Swarm tasks and exposes their ports as targets. For each published port of a task, a single target is generated. If a task has no published ports, a target per task is created using the port parameter defined in the SD configuration. Available meta labels: The metadockerswarmnetwork_* meta labels are not populated for ports which are published with mode=host. The nodes role is used to discover Swarm nodes. Available meta labels: See below for the configuration options for Docker Swarm discovery: ``` host: <string> [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] tls_config: [ <tls_config> ] role: <string> [ port: <int> | default = 80 ] [ filters: [ - name: <string> values: <string>, [...] 
] [ refresh_interval: <duration> | default = 60s ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] ``` The relabeling phase is the preferred and more powerful way to filter tasks, services or nodes. For users with thousands of tasks it can be more efficient to use the Swarm API directly which has basic support for filtering nodes (using filters). See this example Prometheus configuration file for a detailed example of configuring Prometheus for Docker Swarm. A DNS-based service discovery configuration allows specifying a set of DNS domain names which are periodically queried to discover a list of targets. The DNS servers to be contacted are read from /etc/resolv.conf. This service discovery method only supports basic DNS A, AAAA, MX, NS and SRV record queries, but not the advanced DNS-SD approach specified in RFC6763. The following meta labels are available on targets during relabeling: ``` names: [ - <string> ] [ type: <string> | default = 'SRV' ] [ port: <int>] [ refresh_interval: <duration> | default = 30s ] ``` EC2 SD configurations allow retrieving scrape targets from AWS EC2 instances. The private IP address is used by default, but may be changed to the public IP address with relabeling. The IAM credentials used must have the ec2:DescribeInstances permission to discover scrape targets, and may optionally have the ec2:DescribeAvailabilityZones permission if you want the availability zone ID available as a label (see" }, { "data": "The following meta labels are available on targets during relabeling: See below for the configuration options for EC2 discovery: ``` [ region: <string> ] [ endpoint: <string> ] [ access_key: <string> ] [ secret_key: <secret> ] [ profile: <string> ] [ role_arn: <string> ] [ refresh_interval: <duration> | default = 60s ] [ port: <int> | default = 80 ] filters: [ - name: <string> values: <string>, [...] ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] ``` The relabeling phase is the preferred and more powerful way to filter targets based on arbitrary labels. For users with thousands of instances it can be more efficient to use the EC2 API directly which has support for filtering instances. OpenStack SD configurations allow retrieving scrape targets from OpenStack Nova instances. One of the following <openstack_role> types can be configured to discover targets: The hypervisor role discovers one target per Nova hypervisor node. The target address defaults to the host_ip attribute of the hypervisor. The following meta labels are available on targets during relabeling: The instance role discovers one target per network interface of Nova instance. The target address defaults to the private IP address of the network interface. 
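To make the instance role concrete before the meta labels and full option list below, here is a sketch of a scrape job using it. The region, Keystone endpoint, credentials, project, and port are placeholders.

```
scrape_configs:
  - job_name: openstack-instances          # hypothetical job name
    openstack_sd_configs:
      - role: instance
        region: RegionOne                  # placeholder region
        identity_endpoint: https://keystone.example.com:5000/v3   # placeholder Keystone URL
        username: prometheus               # placeholder credentials
        password: "<password>"
        domain_name: Default
        project_name: monitoring
        port: 9100
```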
The following meta labels are available on targets during relabeling: See below for the configuration options for OpenStack discovery: ``` role: <openstack_role> region: <string> [ identity_endpoint: <string> ] [ username: <string> ] [ userid: <string> ] [ password: <secret> ] [ domain_name: <string> ] [ domain_id: <string> ] [ project_name: <string> ] [ project_id: <string> ] [ applicationcredentialname: <string> ] [ applicationcredentialid: <string> ] [ applicationcredentialsecret: <secret> ] [ all_tenants: <boolean> | default: false ] [ refresh_interval: <duration> | default = 60s ] [ port: <int> | default = 80 ] [ availability: <string> | default = \"public\" ] tls_config: [ <tls_config> ] ``` OVHcloud SD configurations allow retrieving scrape targets from OVHcloud's dedicated servers and VPS using their API. Prometheus will periodically check the REST endpoint and create a target for every discovered server. The role will try to use the public IPv4 address as default address, if there's none it will try to use the IPv6 one. This may be changed with relabeling. For OVHcloud's public cloud instances you can use the openstacksdconfig. See below for the configuration options for OVHcloud discovery: ``` application_key: <string> application_secret: <secret> consumer_key: <secret> service: <string> [ endpoint: <string> | default = \"ovh-eu\" ] [ refresh_interval: <duration> | default = 60s ] ``` PuppetDB SD configurations allow retrieving scrape targets from PuppetDB resources. This SD discovers resources and will create a target for each resource returned by the API. The resource address is the certname of the resource and can be changed during relabeling. The following meta labels are available on targets during relabeling: See below for the configuration options for PuppetDB discovery: ``` url: <string> query: <string> [ include_parameters: <boolean> | default = false ] [ refresh_interval: <duration> | default = 60s ] [ port: <int> | default = 80 ] tls_config: [ <tls_config> ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>," }, { "data": "] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] ``` See this example Prometheus configuration file for a detailed example of configuring Prometheus with PuppetDB. File-based service discovery provides a more generic way to configure static targets and serves as an interface to plug in custom service discovery mechanisms. It reads a set of files containing a list of zero or more <static_config>s. Changes to all defined files are detected via disk watches and applied immediately. Files may be provided in YAML or JSON format. Only changes resulting in well-formed target groups are applied. Files must contain a list of static configs, using these formats: JSON ``` [ { \"targets\": [ \"<host>\", ... ], \"labels\": { \"<labelname>\": \"<labelvalue>\", ... } }, ... ] ``` YAML ``` targets: [ - '<host>' ] labels: [ <labelname>: <labelvalue> ... ] ``` As a fallback, the file contents are also re-read periodically at the specified refresh interval. Each target has a meta label meta_filepath during the relabeling phase. Its value is set to the filepath from which the target was extracted. 
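As a concrete illustration of the file format above, a hypothetical target file such as targets/web.yml could contain the following; the hostnames and labels are placeholders.

```
- targets:
    - "web-1.example.com:9100"
    - "web-2.example.com:9100"
  labels:
    env: production
    team: web
```

A scrape job would then point the files option of its file-based discovery configuration at this path, or at a glob pattern covering it.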
There is a list of integrations with this discovery mechanism. ``` files: [ - <filename_pattern> ... ] [ refresh_interval: <duration> | default = 5m ] ``` Where <filename_pattern> may be a path ending in .json, .yml or .yaml. The last path segment may contain a single that matches any character sequence, e.g. my/path/tg_.json. GCE SD configurations allow retrieving scrape targets from GCP GCE instances. The private IP address is used by default, but may be changed to the public IP address with relabeling. The following meta labels are available on targets during relabeling: See below for the configuration options for GCE discovery: ``` project: <string> zone: <string> [ filter: <string> ] [ refresh_interval: <duration> | default = 60s ] [ port: <int> | default = 80 ] [ tag_separator: <string> | default = , ] ``` Credentials are discovered by the Google Cloud SDK default client by looking in the following places, preferring the first location found: If Prometheus is running within GCE, the service account associated with the instance it is running on should have at least read-only permissions to the compute resources. If running outside of GCE make sure to create an appropriate service account and place the credential file in one of the expected locations. Hetzner SD configurations allow retrieving scrape targets from Hetzner Cloud API and Robot API. This service discovery uses the public IPv4 address by default, but that can be changed with relabeling, as demonstrated in the Prometheus hetzner-sd configuration file. The following meta labels are available on all targets during relabeling: The labels below are only available for targets with role set to hcloud: The labels below are only available for targets with role set to robot: ``` role: <string> basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] [ port: <int> | default = 80 ] [ refresh_interval: <duration> | default = 60s ] ``` HTTP-based service discovery provides a more generic way to configure static targets and serves as an interface to plug in custom service discovery" }, { "data": "It fetches targets from an HTTP endpoint containing a list of zero or more <static_config>s. The target must reply with an HTTP 200 response. The HTTP header Content-Type must be application/json, and the body must be valid JSON. Example response body: ``` [ { \"targets\": [ \"<host>\", ... ], \"labels\": { \"<labelname>\": \"<labelvalue>\", ... } }, ... ] ``` The endpoint is queried periodically at the specified refresh interval. The prometheussdhttpfailurestotal counter metric tracks the number of refresh failures. Each target has a meta label meta_url during the relabeling phase. Its value is set to the URL from which the target was extracted. 
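Putting the HTTP discovery description above into a scrape job gives something like the sketch below; the endpoint URL is a placeholder for a service that returns the JSON format shown earlier, and the full option list follows.

```
scrape_configs:
  - job_name: http-discovered              # hypothetical job name
    http_sd_configs:
      - url: https://sd.example.com/targets    # placeholder endpoint returning the JSON shown above
        refresh_interval: 60s
```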
``` url: <string> [ refresh_interval: <duration> | default = 60s ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] ``` IONOS SD configurations allows retrieving scrape targets from IONOS Cloud API. This service discovery uses the first NICs IP address by default, but that can be changed with relabeling. The following meta labels are available on all targets during relabeling: ``` datacenter_id: <string> basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] [ port: <int> | default = 80 ] [ refresh_interval: <duration> | default = 60s ] ``` Kubernetes SD configurations allow retrieving scrape targets from Kubernetes' REST API and always staying synchronized with the cluster state. One of the following role types can be configured to discover targets: The node role discovers one target per cluster node with the address defaulting to the Kubelet's HTTP port. The target address defaults to the first existing address of the Kubernetes node object in the address type order of NodeInternalIP, NodeExternalIP, NodeLegacyHostIP, and NodeHostName. Available meta labels: In addition, the instance label for the node will be set to the node name as retrieved from the API server. The service role discovers a target for each service port for each service. This is generally useful for blackbox monitoring of a service. The address will be set to the Kubernetes DNS name of the service and respective service port. Available meta labels: The pod role discovers all pods and exposes their containers as targets. For each declared port of a container, a single target is generated. If a container has no specified ports, a port-free target per container is created for manually adding a port via relabeling. Available meta labels: The endpoints role discovers targets from listed endpoints of a service. For each endpoint address one target is discovered per port. If the endpoint is backed by a pod, all additional container ports of the pod, not bound to an endpoint port, are discovered as targets as well. Available meta labels: The endpointslice role discovers targets from existing endpointslices. For each endpoint address referenced in the endpointslice object one target is" }, { "data": "If the endpoint is backed by a pod, all additional container ports of the pod, not bound to an endpoint port, are discovered as targets as well. Available meta labels: The ingress role discovers a target for each path of each ingress. This is generally useful for blackbox monitoring of an ingress. The address will be set to the host specified in the ingress spec. 
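As a sketch tying the role descriptions above together, the job below discovers kubelets via the node role when Prometheus itself runs inside the cluster. The job name is hypothetical; the certificate and token paths are the conventional in-cluster service account locations, and omitting api_server makes Prometheus fall back to the in-cluster configuration.

```
scrape_configs:
  - job_name: kubernetes-nodes             # hypothetical job name
    kubernetes_sd_configs:
      - role: node                         # api_server omitted: in-cluster config is used
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    authorization:
      credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
```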
Available meta labels: See below for the configuration options for Kubernetes discovery: ``` [ api_server: <host> ] role: <string> [ kubeconfig_file: <filename> ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] namespaces: own_namespace: <boolean> names: [ - <string> ] [ selectors: [ - role: <string> [ label: <string> ] [ field: <string> ] ]] attach_metadata: [ node: <boolean> | default = false ] ``` See this example Prometheus configuration file for a detailed example of configuring Prometheus for Kubernetes. You may wish to check out the 3rd party Prometheus Operator, which automates the Prometheus setup on top of Kubernetes. Kuma SD configurations allow retrieving scrape target from the Kuma control plane. This SD discovers \"monitoring assignments\" based on Kuma Dataplane Proxies, via the MADS v1 (Monitoring Assignment Discovery Service) xDS API, and will create a target for each proxy inside a Prometheus-enabled mesh. The following meta labels are available for each target: See below for the configuration options for Kuma MonitoringAssignment discovery: ``` server: <string> [ client_id: <string> ] [ refresh_interval: <duration> | default = 30s ] [ fetch_timeout: <duration> | default = 2m ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] tls_config: [ <tls_config> ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] ``` The relabeling phase is the preferred and more powerful way to filter proxies and user-defined tags. Lightsail SD configurations allow retrieving scrape targets from AWS Lightsail instances. The private IP address is used by default, but may be changed to the public IP address with relabeling. The following meta labels are available on targets during relabeling: See below for the configuration options for Lightsail discovery: ``` [ region: <string> ] [ endpoint: <string> ] [ access_key: <string> ] [ secret_key: <secret> ] [ profile: <string> ] [ role_arn: <string> ] [ refresh_interval: <duration> | default = 60s ] [ port: <int> | default = 80 ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>," }, { "data": "] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] ``` Linode SD configurations allow retrieving scrape targets from Linode's Linode APIv4. 
This service discovery uses the public IPv4 address by default, by that can be changed with relabeling, as demonstrated in the Prometheus linode-sd configuration file. The following meta labels are available on targets during relabeling: ``` basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ region: <string> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] [ port: <int> | default = 80 ] [ tag_separator: <string> | default = , ] [ refresh_interval: <duration> | default = 60s ] ``` Marathon SD configurations allow retrieving scrape targets using the Marathon REST API. Prometheus will periodically check the REST endpoint for currently running tasks and create a target group for every app that has at least one healthy task. The following meta labels are available on targets during relabeling: See below for the configuration options for Marathon discovery: ``` servers: <string> [ refresh_interval: <duration> | default = 30s ] [ auth_token: <secret> ] [ authtokenfile: <filename> ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] ``` By default every app listed in Marathon will be scraped by Prometheus. If not all of your services provide Prometheus metrics, you can use a Marathon label and Prometheus relabeling to control which instances will actually be scraped. See the Prometheus marathon-sd configuration file for a practical example on how to set up your Marathon app and your Prometheus configuration. By default, all apps will show up as a single job in Prometheus (the one specified in the configuration file), which can also be changed using relabeling. Nerve SD configurations allow retrieving scrape targets from AirBnB's Nerve which are stored in Zookeeper. The following meta labels are available on targets during relabeling: ``` servers: <host> paths: <string> [ timeout: <duration> | default = 10s ] ``` Nomad SD configurations allow retrieving scrape targets from Nomad's Service API. 
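Before the meta labels and option list that follow, here is a minimal sketch of a Nomad-based scrape job; the job name and server address are placeholders.

```
scrape_configs:
  - job_name: nomad-services               # hypothetical job name
    nomad_sd_configs:
      - server: http://nomad.example.com:4646   # placeholder Nomad server address
        namespace: default
        refresh_interval: 60s
```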
The following meta labels are available on targets during relabeling: ``` [ allow_stale: <boolean> | default = true ] [ namespace: <string> | default = default ] [ refresh_interval: <duration> | default = 60s ] [ region: <string> | default = global ] [ server: <host> ] [ tag_separator: <string> | default = ,] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>," }, { "data": "] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] ``` Serverset SD configurations allow retrieving scrape targets from Serversets which are stored in Zookeeper. Serversets are commonly used by Finagle and Aurora. The following meta labels are available on targets during relabeling: ``` servers: <host> paths: <string> [ timeout: <duration> | default = 10s ] ``` Serverset data must be in the JSON format, the Thrift format is not currently supported. Triton SD configurations allow retrieving scrape targets from Container Monitor discovery endpoints. One of the following <triton_role> types can be configured to discover targets: The container role discovers one target per \"virtual machine\" owned by the account. These are SmartOS zones or lx/KVM/bhyve branded zones. The following meta labels are available on targets during relabeling: The cn role discovers one target for per compute node (also known as \"server\" or \"global zone\") making up the Triton infrastructure. The account must be a Triton operator and is currently required to own at least one container. The following meta labels are available on targets during relabeling: See below for the configuration options for Triton discovery: ``` account: <string> [ role : <string> | default = \"container\" ] dns_suffix: <string> endpoint: <string> groups: [ - <string> ... ] [ port: <int> | default = 9163 ] [ refresh_interval: <duration> | default = 60s ] [ version: <int> | default = 1 ] tls_config: [ <tls_config> ] ``` Eureka SD configurations allow retrieving scrape targets using the Eureka REST API. Prometheus will periodically check the REST endpoint and create a target for every app instance. The following meta labels are available on targets during relabeling: See below for the configuration options for Eureka discovery: ``` server: <string> basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] tls_config: [ <tls_config> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxyfromenvironment: <boolean> | default: false ] [ proxyconnectheader: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] [ refresh_interval: <duration> | default = 30s ] ``` See the Prometheus eureka-sd configuration file for a practical example on how to set up your Eureka app and your Prometheus configuration. Scaleway SD configurations allow retrieving scrape targets from Scaleway instances and baremetal services. The following meta labels are available on targets during relabeling: This role uses the private IPv4 address by default. 
Scaleway SD configurations allow retrieving scrape targets from Scaleway instances and baremetal services. The following meta labels are available on targets during relabeling: This role uses the private IPv4 address by default. This can be changed with relabeling, as demonstrated in the Prometheus scaleway-sd configuration file. This role uses the public IPv4 address by default. This can be changed with relabeling, as demonstrated in the Prometheus scaleway-sd configuration file. See below for the configuration options for Scaleway discovery: ``` access_key: <string> [ secret_key: <secret> ] [ secret_key_file: <filename> ] project_id: <string> role: <string> [ port: <int> | default = 80 ] [ api_url: <string> | default = \"https://api.scaleway.com\" ] [ zone: <string> | default = fr-par-1 ] [ name_filter: <string> ] tags_filter: [ - <string> ] [ refresh_interval: <duration> | default = 60s ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxy_from_environment: <boolean> | default: false ] [ proxy_connect_header: [ <string>: [<secret>, ...] ] ] tls_config: [ <tls_config> ] ``` Uyuni SD configurations allow retrieving scrape targets from managed systems via the Uyuni API. The following meta labels are available on targets during relabeling: See below for the configuration options for Uyuni discovery: ``` server: <string> username: <string> password: <secret> [ entitlement: <string> | default = monitoring_entitled ] [ separator: <string> | default = , ] [ refresh_interval: <duration> | default = 60s ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxy_from_environment: <boolean> | default: false ] [ proxy_connect_header: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] ``` See the Prometheus uyuni-sd configuration file for a practical example on how to set up your Uyuni Prometheus configuration. Vultr SD configurations allow retrieving scrape targets from Vultr. This service discovery uses the main IPv4 address by default, which can be changed with relabeling, as demonstrated in the Prometheus vultr-sd configuration file. The following meta labels are available on targets during relabeling: ``` basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxy_from_environment: <boolean> | default: false ] [ proxy_connect_header: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] tls_config: [ <tls_config> ] [ port: <int> | default = 80 ] [ refresh_interval: <duration> | default = 60s ] ``` A static_config allows specifying a list of targets and a common label set for them. It is the canonical way to specify static targets in a scrape configuration. ``` targets: [ - '<host>' ] labels: [ <labelname>: <labelvalue> ... ] ```
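A brief, self-contained illustration of the static_config form above, embedded in a scrape job; it is only a sketch, and the hostnames, port, and env label are placeholders invented for this example:
```
scrape_configs:
  - job_name: "node"                         # example job name (assumption)
    static_configs:
      - targets:
          - "node-1.example.com:9100"        # hypothetical exporter endpoints
          - "node-2.example.com:9100"
        labels:
          env: "staging"                     # common label applied to every target above
```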
Relabeling is a powerful tool to dynamically rewrite the label set of a target before it gets scraped. Multiple relabeling steps can be configured per scrape configuration. They are applied to the label set of each target in order of their appearance in the configuration file. Initially, aside from the configured per-target labels, a target's job label is set to the job_name value of the respective scrape configuration. The __address__ label is set to the <host>:<port> address of the target. After relabeling, the instance label is set to the value of __address__ by default if it was not set during relabeling. The __scheme__ and __metrics_path__ labels are set to the scheme and metrics path of the target respectively. The __param_<name> label is set to the value of the first passed URL parameter called <name>. The __scrape_interval__ and __scrape_timeout__ labels are set to the target's interval and timeout. Additional labels prefixed with __meta_ may be available during the relabeling phase. They are set by the service discovery mechanism that provided the target and vary between mechanisms. Labels starting with __ will be removed from the label set after target relabeling is completed. If a relabeling step needs to store a label value only temporarily (as the input to a subsequent relabeling step), use the __tmp label name prefix. This prefix is guaranteed to never be used by Prometheus itself. ``` [ source_labels: '[' <labelname> [, ...] ']' ] [ separator: <string> | default = ; ] [ target_label: <labelname> ] [ regex: <regex> | default = (.*) ] [ modulus: <int> ] [ replacement: <string> | default = $1 ] [ action: <relabel_action> | default = replace ] ``` <regex> is any valid RE2 regular expression. It is required for the replace, keep, drop, labelmap, labeldrop and labelkeep actions. The regex is anchored on both ends. To un-anchor the regex, use .*<regex>.*. <relabel_action> determines the relabeling action to take: Care must be taken with labeldrop and labelkeep to ensure that metrics are still uniquely labeled once the labels are removed. Metric relabeling is applied to samples as the last step before ingestion. It has the same configuration format and actions as target relabeling. Metric relabeling does not apply to automatically generated timeseries such as up. One use for this is to exclude time series that are too expensive to ingest. Alert relabeling is applied to alerts before they are sent to the Alertmanager. It has the same configuration format and actions as target relabeling. Alert relabeling is applied after external labels. One use for this is ensuring an HA pair of Prometheus servers with different external labels send identical alerts.
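To make the relabeling rules described above more concrete, here is a short, illustrative relabel_configs fragment; it is not from the Prometheus documentation, and the __meta_example_* labels stand in for whatever labels your service discovery mechanism actually provides:
```
relabel_configs:
  # Drop any target whose (hypothetical) discovery label marks it as excluded.
  - source_labels: [__meta_example_exclude]       # placeholder meta label
    regex: "true"
    action: drop
  # Copy a discovery-provided label into a permanent "team" label.
  - source_labels: [__meta_example_team]          # placeholder meta label
    target_label: team
    action: replace
  # Rewrite the scrape address to a fixed port while keeping the host part.
  - source_labels: [__address__]
    regex: '([^:]+)(?::\d+)?'
    replacement: '${1}:9100'
    target_label: __address__
```
Because labels beginning with __ are dropped after target relabeling, anything you want to keep (such as team above) must be copied to a label without that prefix.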
An alertmanager_config section specifies Alertmanager instances the Prometheus server sends alerts to. It also provides parameters to configure how to communicate with these Alertmanagers. Alertmanagers may be statically configured via the static_configs parameter or dynamically discovered using one of the supported service-discovery mechanisms. Additionally, relabel_configs allow selecting Alertmanagers from discovered entities and provide advanced modifications to the used API path, which is exposed through the __alerts_path__ label. ``` [ timeout: <duration> | default = 10s ] [ api_version: <string> | default = v2 ] [ path_prefix: <path> | default = / ] [ scheme: <scheme> | default = http ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] sigv4: [ region: <string> ] [ access_key: <string> ] [ secret_key: <secret> ] [ profile: <string> ] [ role_arn: <string> ] oauth2: [ <oauth2> ] tls_config: [ <tls_config> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxy_from_environment: <boolean> | default: false ] [ proxy_connect_header: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] azure_sd_configs: [ - <azure_sd_config> ... ] consul_sd_configs: [ - <consul_sd_config> ... ] dns_sd_configs: [ - <dns_sd_config> ... ] ec2_sd_configs: [ - <ec2_sd_config> ... ] eureka_sd_configs: [ - <eureka_sd_config> ... ] file_sd_configs: [ - <file_sd_config> ... ] digitalocean_sd_configs: [ - <digitalocean_sd_config> ... ] docker_sd_configs: [ - <docker_sd_config> ... ] dockerswarm_sd_configs: [ - <dockerswarm_sd_config> ... ] gce_sd_configs: [ - <gce_sd_config> ... ] hetzner_sd_configs: [ - <hetzner_sd_config> ... ] http_sd_configs: [ - <http_sd_config> ... ] ionos_sd_configs: [ - <ionos_sd_config> ... ] kubernetes_sd_configs: [ - <kubernetes_sd_config> ... ] lightsail_sd_configs: [ - <lightsail_sd_config> ... ] linode_sd_configs: [ - <linode_sd_config> ... ] marathon_sd_configs: [ - <marathon_sd_config> ... ] nerve_sd_configs: [ - <nerve_sd_config> ... ] nomad_sd_configs: [ - <nomad_sd_config> ... ] openstack_sd_configs: [ - <openstack_sd_config> ... ] ovhcloud_sd_configs: [ - <ovhcloud_sd_config> ... ] puppetdb_sd_configs: [ - <puppetdb_sd_config> ... ] scaleway_sd_configs: [ - <scaleway_sd_config> ... ] serverset_sd_configs: [ - <serverset_sd_config> ... ] triton_sd_configs: [ - <triton_sd_config> ... ] uyuni_sd_configs: [ - <uyuni_sd_config> ... ] vultr_sd_configs: [ - <vultr_sd_config> ... ] static_configs: [ - <static_config> ... ] relabel_configs: [ - <relabel_config> ... ] alert_relabel_configs: [ - <relabel_config> ... ] ``` write_relabel_configs is relabeling applied to samples before sending them to the remote endpoint. Write relabeling is applied after external labels. This could be used to limit which samples are sent. There is a small demo of how to use this functionality. ``` url: <string> [ remote_timeout: <duration> | default = 30s ] headers: [ <string>: <string> ... ] write_relabel_configs: [ - <relabel_config> ... ] [ name: <string> ] [ send_exemplars: <boolean> | default = false ] [ send_native_histograms: <boolean> | default = false ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] sigv4: [ region: <string> ] [ access_key: <string> ] [ secret_key: <secret> ] [ profile: <string> ] [ role_arn: <string> ] oauth2: [ <oauth2> ] azuread: [ cloud: <string> | default = AzurePublic ] [ managed_identity: [ client_id: <string> ] ] [ oauth: [ client_id: <string> ] [ client_secret: <string> ] [ tenant_id: <string> ] ] [ sdk: [ tenant_id: <string> ] ] tls_config: [ <tls_config> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxy_from_environment: <boolean> | default: false ] [ proxy_connect_header: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] queue_config: [ capacity: <int> | default = 10000 ] [ max_shards: <int> | default = 50 ] [ min_shards: <int> | default = 1 ] [ max_samples_per_send: <int> | default = 2000] [ batch_send_deadline: <duration> | default = 5s ] [ min_backoff: <duration> | default = 30ms ] [ max_backoff: <duration> | default = 5s ] [ retry_on_http_429: <boolean> | default = false ] [ sample_age_limit: <duration> | default = 0s ] metadata_config: [ send: <boolean> | default = true ] [ send_interval: <duration> | default = 1m ] [ max_samples_per_send: <int> | default = 500] ``` There is a list of integrations with this feature.
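Tying the remote write options above together, here is a minimal, illustrative remote_write block; it is only a sketch: the endpoint URL, credentials path, and dropped metric prefix are assumptions for this example, and the queue_config values are just one reasonable starting point rather than recommended settings:
```
remote_write:
  - url: "https://metrics.example.com/api/v1/write"        # hypothetical remote endpoint
    basic_auth:
      username: "prometheus"
      password_file: "/etc/prometheus/remote_write.pass"   # hypothetical secret file
    write_relabel_configs:
      # Drop high-cardinality debug metrics before they leave Prometheus.
      - source_labels: [__name__]
        regex: "debug_.*"
        action: drop
    queue_config:
      max_samples_per_send: 1000
      batch_send_deadline: 5s
```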
A remote_read section configures the Prometheus server to read back sample data from a remote endpoint: ``` url: <string> [ name: <string> ] required_matchers: [ <labelname>: <labelvalue> ... ] [ remote_timeout: <duration> | default = 1m ] headers: [ <string>: <string> ... ] [ read_recent: <boolean> | default = false ] basic_auth: [ username: <string> ] [ password: <secret> ] [ password_file: <string> ] authorization: [ type: <string> | default: Bearer ] [ credentials: <secret> ] [ credentials_file: <filename> ] oauth2: [ <oauth2> ] tls_config: [ <tls_config> ] [ proxy_url: <string> ] [ no_proxy: <string> ] [ proxy_from_environment: <boolean> | default: false ] [ proxy_connect_header: [ <string>: [<secret>, ...] ] ] [ follow_redirects: <boolean> | default = true ] [ enable_http2: <boolean> | default: true ] [ filter_external_labels: <boolean> | default = true ] ``` There is a list of integrations with this feature. tsdb lets you configure the runtime-reloadable configuration settings of the TSDB. ``` [ out_of_order_time_window: <duration> | default = 0s ] ``` Note that exemplar storage is still considered experimental and must be enabled via --enable-feature=exemplar-storage. ``` [ max_exemplars: <int> | default = 100000 ] ``` tracing_config configures exporting traces from Prometheus to a tracing backend via the OTLP protocol. Tracing is currently an experimental feature and could change in the future. ``` [ client_type: <string> | default = grpc ] [ endpoint: <string> ] [ sampling_fraction: <float> | default = 0 ] [ insecure: <boolean> | default = false ] headers: [ <string>: <string> ... ] [ compression: <string> ] [ timeout: <duration> | default = 10s ] tls_config: [ <tls_config> ] ``` This documentation is open-source. Please help improve it by filing issues or pull requests. Prometheus Authors 2014-2024 | Documentation Distributed under CC-BY-4.0 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Observability and Analysis", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Sidekick", "subcategory": "Observability" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Sentry", "subcategory": "Observability" }
[ { "data": "With performance monitoring, Sentry tracks application performance, measuring metrics like throughput and latency, and displaying the impact of errors across multiple services. Sentry captures distributed traces consisting of transactions and spans to measure individual services and operations within those services. Learn more about traces in the full Tracing documentation. The Performance page is the main view in sentry.io where you can search or browse for transaction data. A transaction represents a single instance of an activity you want to measure or track, such as a page load, page navigation, or an asynchronous task. The page displays graphs that visualize transactions or trends, as well as a table where you can view relevant transactions and drill down to get more information about them. Using the information on this page, you can trace issues back through services (for instance, frontend to backend) to identify poorly performing code. You'll be able to determine whether your application performance is getting better or worse, see if your last release is running more slowly than previous ones, and identify specific services that are slow. Once you've found the cause of the problem, you'll be able to address the specific code thats degrading performance. The Performance page provides you with several filter and display options so that you can focus on the performance data that's most important to you. You can use the project, environment, and date filters to customize the information displayed on the page, including what's shown in the widgets and transactions table. You can also search to find and filter for the specific transactions you want to investigate. The trends view allows you to see transactions that have had significant performance changes over time. This view is ideal for insights about transactions with large counts. When you find a transaction of interest, you can investigate further by going to its Transaction Summary page. Every transaction has a summary view that gives you a better understanding of its overall health. With this view, you'll find graphs, instances of these events, stats, facet maps, related errors, and" }, { "data": "The summary page for Frontend transactions has a \"Web Vitals\" tab, where you can see a detailed view of the Web Vitals associated with the transaction. You can also access a Transaction Summary page from the transactions table on the Performance page. There are several types of metrics that you can visualize in the graphs, such as Apdex, Transactions Per Minute, P50 Duration, and User Misery to get a full understanding of how your software is performing. If your application is configured for Performance Monitoring, Sentry will detect common performance problems, and group them into issues just like it does with errors. Performance issues help to surface performance problems in your application and provide a workflow for resolving them. Learn more about performance issues. Get started with Sentry's Performance Monitoring, which allows you to see macro-level metrics to micro-level spans, cross-reference transactions with related issues, and customize queries. Manage the information on the Performance page using search or page-level display filters to quickly identify performance issues. Learn more about the Trends view, which allows you to see significant changes in performance over time. Learn more about the transaction summary, which helps you evaluate the transaction's overall health. 
This view includes graphs, instances of these events, stats, facet maps, and related errors. Learn more about cache monitoring with Sentry and how it can help improve your application's performance. Monitor the performance of your database queries and drill into samples to investigate further. Learn how to monitor your queues with Sentry for improved application performance and health. Learn more about browser resource performance monitoring, which allows you to debug the performance of loading JavaScript and CSS on your frontend. Track the performance of your application's HTTP requests and drill into the domains serving those requests. Understand your application's performance score and how each web vital contributes to it. Drill in to explore opportunities to improve your app's overall performance. Learn more about Mobile Vitals in Sentry, and how they give you a better understanding of your mobile application's health. Learn more about Sentry's Performance metrics such as Apdex, failure rate, throughput, and latency, and the user experience insights about your application that they provide. Learn more about how enabling Performance Monitoring in Sentry impacts the performance of your application. Learn how Sentry determines retention priorities and how to update them." } ]
{ "category": "Observability and Analysis", "file_name": "flags.md", "project_name": "Sentry", "subcategory": "Observability" }
[ { "data": "Sentry is a developer-first error tracking and performance monitoring platform. If you use it, we probably support it. Sentry is an application monitoring platformbuilt by devs, for devs. Real talk in real time. Get in it. See how we can help. 2024 Sentry is a registered trademark of Functional Software, Inc." } ]
{ "category": "Observability and Analysis", "file_name": "github-privacy-statement.md", "project_name": "Sidekick", "subcategory": "Observability" }
[ { "data": "Effective date: February 1, 2024 Welcome to the GitHub Privacy Statement. This is where we describe how we handle your Personal Data, which is information that is directly linked or can be linked to you. It applies to the Personal Data that GitHub, Inc. or GitHub B.V., processes as the Data Controller when you interact with websites, applications, and services that display this Statement (collectively, Services). This Statement does not apply to services or products that do not display this Statement, such as Previews, where relevant. When a school or employer supplies your GitHub account, they assume the role of Data Controller for most Personal Data used in our Services. This enables them to: Should you access a GitHub Service through an account provided by an organization, such as your employer or school, the organization becomes the Data Controller, and this Privacy Statement's direct applicability to you changes. Even so, GitHub remains dedicated to preserving your privacy rights. In such circumstances, GitHub functions as a Data Processor, adhering to the Data Controller's instructions regarding your Personal Data's processing. A Data Protection Agreement governs the relationship between GitHub and the Data Controller. For further details regarding their privacy practices, please refer to the privacy statement of the organization providing your account. In cases where your organization grants access to GitHub products, GitHub acts as the Data Controller solely for specific processing activities. These activities are clearly defined in a contractual agreement with your organization, known as a Data Protection Agreement. You can review our standard Data Protection Agreement at GitHub Data Protection Agreement. For those limited purposes, this Statement governs the handling of your Personal Data. For all other aspects of GitHub product usage, your organization's policies apply. When you use third-party extensions, integrations, or follow references and links within our Services, the privacy policies of these third parties apply to any Personal Data you provide or consent to share with them. Their privacy statements will govern how this data is processed. Personal Data is collected from you directly, automatically from your device, and also from third parties. The Personal Data GitHub processes when you use the Services depends on variables like how you interact with our Services (such as through web interfaces, desktop or mobile applications), the features you use (such as pull requests, Codespaces, or GitHub Copilot) and your method of accessing the Services (your preferred IDE). Below, we detail the information we collect through each of these channels: The Personal Data we process depends on your interaction and access methods with our Services, including the interfaces (web, desktop, mobile apps), features used (pull requests, Codespaces, GitHub Copilot), and your preferred access tools (like your IDE). This section details all the potential ways GitHub may process your Personal Data: When carrying out these activities, GitHub practices data minimization and uses the minimum amount of Personal Information required. We may share Personal Data with the following recipients: If your GitHub account has private repositories, you control the access to that information. 
GitHub personnel does not access private repository information without your consent except as provided in this Privacy Statement and for: GitHub will provide you with notice regarding private repository access unless doing so is prohibited by law or if GitHub acted in response to a security threat or other risk to security. GitHub processes Personal Data in compliance with the GDPR, ensuring a lawful basis for each processing" }, { "data": "The basis varies depending on the data type and the context, including how you access the services. Our processing activities typically fall under these lawful bases: Depending on your residence location, you may have specific legal rights regarding your Personal Data: To exercise these rights, please send an email to privacy[at]github[dot]com and follow the instructions provided. To verify your identity for security, we may request extra information before addressing your data-related request. Please contact our Data Protection Officer at dpo[at]github[dot]com for any feedback or concerns. Depending on your region, you have the right to complain to your local Data Protection Authority. European users can find authority contacts on the European Data Protection Board website, and UK users on the Information Commissioners Office website. We aim to promptly respond to requests in compliance with legal requirements. Please note that we may retain certain data as necessary for legal obligations or for establishing, exercising, or defending legal claims. GitHub stores and processes Personal Data in a variety of locations, including your local region, the United States, and other countries where GitHub, its affiliates, subsidiaries, or subprocessors have operations. We transfer Personal Data from the European Union, the United Kingdom, and Switzerland to countries that the European Commission has not recognized as having an adequate level of data protection. When we engage in such transfers, we generally rely on the standard contractual clauses published by the European Commission under Commission Implementing Decision 2021/914, to help protect your rights and enable these protections to travel with your data. To learn more about the European Commissions decisions on the adequacy of the protection of personal data in the countries where GitHub processes personal data, see this article on the European Commission website. GitHub also complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. GitHub has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union in reliance on the EU-U.S. DPF and from the United Kingdom (and Gibraltar) in reliance on the UK Extension to the EU-U.S. DPF. GitHub has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy statement and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) program, and to view our certification, please visit https://www.dataprivacyframework.gov/. 
GitHub has the responsibility for the processing of Personal Data it receives under the Data Privacy Framework (DPF) Principles and subsequently transfers to a third party acting as an agent on GitHubs behalf. GitHub shall remain liable under the DPF Principles if its agent processes such Personal Data in a manner inconsistent with the DPF Principles, unless the organization proves that it is not responsible for the event giving rise to the damage. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, GitHub commits to resolve DPF Principles-related complaints about our collection and use of your personal" }, { "data": "EU, UK, and Swiss individuals with inquiries or complaints regarding our handling of personal data received in reliance on the EU-U.S. DPF, the UK Extension, and the Swiss-U.S. DPF should first contact GitHub at: dpo[at]github[dot]com. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://go.adr.org/dpf_irm.html for more information or to file a complaint. The services of the International Centre for Dispute Resolution are provided at no cost to you. An individual has the possibility, under certain conditions, to invoke binding arbitration for complaints regarding DPF compliance not resolved by any of the other DPF mechanisms. For additional information visit https://www.dataprivacyframework.gov/s/article/ANNEX-I-introduction-dpf?tabset-35584=2. GitHub is subject to the investigatory and enforcement powers of the Federal Trade Commission (FTC). Under Section 5 of the Federal Trade Commission Act (15 U.S.C. 45), an organization's failure to abide by commitments to implement the DPF Principles may be challenged as deceptive by the FTC. The FTC has the power to prohibit such misrepresentations through administrative orders or by seeking court orders. GitHub uses appropriate administrative, technical, and physical security controls to protect your Personal Data. Well retain your Personal Data as long as your account is active and as needed to fulfill contractual obligations, comply with legal requirements, resolve disputes, and enforce agreements. The retention duration depends on the purpose of data collection and any legal obligations. GitHub uses administrative, technical, and physical security controls where appropriate to protect your Personal Data. Contact us via our contact form or by emailing our Data Protection Officer at dpo[at]github[dot]com. Our addresses are: GitHub B.V. Prins Bernhardplein 200, Amsterdam 1097JB The Netherlands GitHub, Inc. 88 Colin P. Kelly Jr. St. San Francisco, CA 94107 United States Our Services are not intended for individuals under the age of 13. We do not intentionally gather Personal Data from such individuals. If you become aware that a minor has provided us with Personal Data, please notify us. GitHub may periodically revise this Privacy Statement. If there are material changes to the statement, we will provide at least 30 days prior notice by updating our website or sending an email to your primary email address associated with your GitHub account. Below are translations of this document into other languages. In the event of any conflict, uncertainty, or apparent inconsistency between any of those versions and the English version, this English version is the controlling version. 
Cliquez ici pour obtenir la version franaise: Dclaration de confidentialit de GitHub (PDF). For translations of this statement into other languages, please visit https://docs.github.com/ and select a language from the drop-down menu under English. GitHub uses cookies to provide, secure and improve our Service or to develop new features and functionality of our Service. For example, we use them to (i) keep you logged in, (ii) remember your preferences, (iii) identify your device for security and fraud purposes, including as needed to maintain the integrity of our Service, (iv) compile statistical reports, and (v) provide information and insight for future development of GitHub. We provide more information about cookies on GitHub that describes the cookies we set, the needs we have for those cookies, and the expiration of such cookies. For Enterprise Marketing Pages, we may also use non-essential cookies to (i) gather information about enterprise users interests and online activities to personalize their experiences, including by making the ads, content, recommendations, and marketing seen or received more relevant and (ii) serve and measure the effectiveness of targeted advertising and other marketing" }, { "data": "If you disable the non-essential cookies on the Enterprise Marketing Pages, the ads, content, and marketing you see may be less relevant. Our emails to users may contain a pixel tag, which is a small, clear image that can tell us whether or not you have opened an email and what your IP address is. We use this pixel tag to make our email communications more effective and to make sure we are not sending you unwanted email. The length of time a cookie will stay on your browser or device depends on whether it is a persistent or session cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay until they expire or are deleted. The expiration time or retention period applicable to persistent cookies depends on the purpose of the cookie collection and tool used. You may be able to delete cookie data. For more information, see \"GitHub General Privacy Statement.\" We use cookies and similar technologies, such as web beacons, local storage, and mobile analytics, to operate and provide our Services. When visiting Enterprise Marketing Pages, like resources.github.com, these and additional cookies, like advertising IDs, may be used for sales and marketing purposes. Cookies are small text files stored by your browser on your device. A cookie can later be read when your browser connects to a web server in the same domain that placed the cookie. The text in a cookie contains a string of numbers and letters that may uniquely identify your device and can contain other information as well. This allows the web server to recognize your browser over time, each time it connects to that web server. Web beacons are electronic images (also called single-pixel or clear GIFs) that are contained within a website or email. When your browser opens a webpage or email that contains a web beacon, it automatically connects to the web server that hosts the image (typically operated by a third party). This allows that web server to log information about your device and to set and read its own cookies. In the same way, third-party content on our websites (such as embedded videos, plug-ins, or ads) results in your browser connecting to the third-party web server that hosts that content. 
Mobile identifiers for analytics can be accessed and used by apps on mobile devices in much the same way that websites access and use cookies. When visiting Enterprise Marketing pages, like resources.github.com, on a mobile device these may allow us and our third-party analytics and advertising partners to collect data for sales and marketing purposes. We may also use so-called flash cookies (also known as Local Shared Objects or LSOs) to collect and store information about your use of our Services. Flash cookies are commonly used for advertisements and videos. The GitHub Services use cookies and similar technologies for a variety of purposes, including to store your preferences and settings, enable you to sign-in, analyze how our Services perform, track your interaction with the Services, develop inferences, combat fraud, and fulfill other legitimate purposes. Some of these cookies and technologies may be provided by third parties, including service providers and advertising" }, { "data": "For example, our analytics and advertising partners may use these technologies in our Services to collect personal information (such as the pages you visit, the links you click on, and similar usage information, identifiers, and device information) related to your online activities over time and across Services for various purposes, including targeted advertising. GitHub will place non-essential cookies on pages where we market products and services to enterprise customers, for example, on resources.github.com. We and/or our partners also share the information we collect or infer with third parties for these purposes. The table below provides additional information about how we use different types of cookies: | Purpose | Description | |:--|:--| | Required Cookies | GitHub uses required cookies to perform essential website functions and to provide the services. For example, cookies are used to log you in, save your language preferences, provide a shopping cart experience, improve performance, route traffic between web servers, detect the size of your screen, determine page load times, improve user experience, and for audience measurement. These cookies are necessary for our websites to work. | | Analytics | We allow third parties to use analytics cookies to understand how you use our websites so we can make them better. For example, cookies are used to gather information about the pages you visit and how many clicks you need to accomplish a task. We also use some analytics cookies to provide personalized advertising. | | Social Media | GitHub and third parties use social media cookies to show you ads and content based on your social media profiles and activity on GitHubs websites. This ensures that the ads and content you see on our websites and on social media will better reflect your interests. This also enables third parties to develop and improve their products, which they may use on websites that are not owned or operated by GitHub. | | Advertising | In addition, GitHub and third parties use advertising cookies to show you new ads based on ads you've already seen. Cookies also track which ads you click or purchases you make after clicking an ad. This is done both for payment purposes and to show you ads that are more relevant to you. For example, cookies are used to detect when you click an ad and to show you ads based on your social media interests and website browsing history. 
| You have several options to disable non-essential cookies: Specifically on GitHub Enterprise Marketing Pages Any GitHub page that serves non-essential cookies will have a link in the pages footer to cookie settings. You can express your preferences at any time by clicking on that linking and updating your settings. Some users will also be able to manage non-essential cookies via a cookie consent banner, including the options to accept, manage, and reject all non-essential cookies. Generally for all websites You can control the cookies you encounter on the web using a variety of widely-available tools. For example: These choices are specific to the browser you are using. If you access our Services from other devices or browsers, take these actions from those systems to ensure your choices apply to the data collected when you use those systems. This section provides extra information specifically for residents of certain US states that have distinct data privacy laws and regulations. These laws may grant specific rights to residents of these states when the laws come into effect. This section uses the term personal information as an equivalent to the term Personal Data. These rights are common to the US State privacy laws: We may collect various categories of personal information about our website visitors and users of \"Services\" which includes GitHub applications, software, products, or" }, { "data": "That information includes identifiers/contact information, demographic information, payment information, commercial information, internet or electronic network activity information, geolocation data, audio, electronic, visual, or similar information, and inferences drawn from such information. We collect this information for various purposes. This includes identifying accessibility gaps and offering targeted support, fostering diversity and representation, providing services, troubleshooting, conducting business operations such as billing and security, improving products and supporting research, communicating important information, ensuring personalized experiences, and promoting safety and security. To make an access, deletion, correction, or opt-out request, please send an email to privacy[at]github[dot]com and follow the instructions provided. We may need to verify your identity before processing your request. If you choose to use an authorized agent to submit a request on your behalf, please ensure they have your signed permission or power of attorney as required. To opt out of the sharing of your personal information, you can click on the \"Do Not Share My Personal Information\" link on the footer of our Websites or use the Global Privacy Control (\"GPC\") if available. Authorized agents can also submit opt-out requests on your behalf. We also make the following disclosures for purposes of compliance with California privacy law: Under California Civil Code section 1798.83, also known as the Shine the Light law, California residents who have provided personal information to a business with which the individual has established a business relationship for personal, family, or household purposes (California Customers) may request information about whether the business has disclosed personal information to any third parties for the third parties direct marketing purposes. Please be aware that we do not disclose personal information to any third parties for their direct marketing purposes as defined by this law. 
California Customers may request further information about our compliance with this law by emailing (privacy[at]github[dot]com). Please note that businesses are required to respond to one request per California Customer each year and may not be required to respond to requests made by means other than through the designated email address. California residents under the age of 18 who are registered users of online sites, services, or applications have a right under California Business and Professions Code Section 22581 to remove, or request and obtain removal of, content or information they have publicly posted. To remove content or information you have publicly posted, please submit a Private Information Removal request. Alternatively, to request that we remove such content or information, please send a detailed description of the specific content or information you wish to have removed to GitHub support. Please be aware that your request does not guarantee complete or comprehensive removal of content or information posted online and that the law may not permit or require removal in certain circumstances. If you have any questions about our privacy practices with respect to California residents, please send an email to privacy[at]github[dot]com. We value the trust you place in us and are committed to handling your personal information with care and respect. If you have any questions or concerns about our privacy practices, please email our Data Protection Officer at dpo[at]github[dot]com. If you live in Colorado, Connecticut, or Virginia you have some additional rights: We do not sell your covered information, as defined under Chapter 603A of the Nevada Revised Statutes. If you still have questions about your covered information or anything else in our Privacy Statement, please send an email to privacy[at]github[dot]com. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Observability and Analysis", "file_name": "api.md", "project_name": "Sumo Logic", "subcategory": "Observability" }
[ { "data": "Use the Sumo Logic Application Programming Interfaces (APIs) to interact with our platform and access resources and data programmatically from third-party scripts and apps. To connect with other Sumo Logic users, post feedback, or ask a question, visit the Sumo Logic API and Apps Forum and Sumo Dojo. API authentication and the Sumo Logic endpoints to use for your API client. Manage Collectors, Collector versions, and Sources. Copyright 2024 by Sumo Logic, Inc." } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Sumo Logic", "subcategory": "Observability" }
[ { "data": "Sumo Logic provides open-source solutions and resources for customers via GitHub. Submit issues or questions about Sumo Logic open-source solutions through GitHub. These solutions are not supported by Sumo Logic Support. Sumo Logic Developers on GitHub is a central location that lists all of the open-source repositories that Sumo Logic is aware of. Repos are divided into two categories: For complete details, visit http://sumologic.github.io. Browse the official Sumo Logic GitHub repository for CLI clients, Collectors, log appenders, and other tools that will enable you to send your data to Sumo Logic. The following open-source solutions are collected in Sumo Logics GitHub repository at https://github.com/SumoLogic. For complete documentation of each solution, see the readme file. | Solution | Description | |:-|:| | Sumo Logic Distribution for OpenTelemetry Collector | Sumo Logic Distribution for OpenTelemetry Collector is a Sumo Logic-supported distribution of the OpenTelemetry Collector. It is a single agent to send logs, metrics and traces to Sumo Logic. | | Kubernetes Collection | Sumo Logic Helm Chart lets you collect Kubernetes logs, metrics, traces and events; enrich them with deployment, pod, and service level metadata; and send them to Sumo Logic. | | Solution | Description | |:--|:--| | AWS Lambda | Sumo Logic Lambda Functions are designed to collect and process data from a variety of sources and pass it onto the Sumo Logic platform. Here, the data can be stored, aggregated, searched, and visualized for a variety of insightful use cases. For complete details, see Collect CloudWatch Logs Using a Lambda Function. | | Azure | This library provides a collection of Azure functions to collect and send data to Sumo Logic. | | Docker | This repository offers several variants of Docker images to run the Sumo Logic Collector. When images are run, the Collector automatically registers with the Sumo Logic service and create sources based on a sumo-sources.json file. The Collector is configured ephemeral. | | FluentD | This plugin sends logs or metrics to Sumo Logic via an HTTP endpoint. | | JavaScript Logging SDK | The JavaScript Logging SDK library enables you to send custom log messages to an HTTP Source without installing a Collector on your server. | | Jenkins | A Sumo Logic Jenkins plugin. | | Kinesis | The Kinesis-Sumologic Connector is a Java connector that acts as a pipeline between an Amazon Kinesis stream and a Sumo Logic Collector. Data is fetched from the Kinesis Stream, transformed into a POJO, and then sent to the Sumologic Collection as JSON. For complete details, see Sumo Logic App for Amazon VPC Flow Logs using Kinesis. | | Logback appender | This solution is a Logback appender that sends straight to Sumo Logic. | | Logstash | This solution is a Logstash Sumo Logic output plugin. | | Log4J appender | This solution is a Log4J appender that sends straight to Sumo Logic. | | Log4j2 appender | This solution is a Log4J 2 appender that sends straight to Sumo Logic. | | Maven | This solution is a Maven plugin to report build statistics to Sumo Logic. | | NET appenders | Several appenders for .NET developers to use that send logs straight to Sumo" }, { "data": "| | okta-events | This solution is a Python script to collect event logs from Okta. | | Scala | This solution provides a Scala logging library wrapping SLF4J and Log4j 2 in a convenient and performant fashion. 
| | Solution | Description | |:--|:-| | CollectD | This plugin sends metrics to Sumo Logic via an HTTP endpoint. | | Prometheus | The Prometheus Scraper provides a configurable mechanism to send Prometheus formatted metrics to Sumo Logic. | | StatsD | See Collect StatsD Metrics for information. | | Solution | Description | |:--|:-| | Autotel | This project adds the OpenTelemetry instrumentation for Go applications by automatically modifying their source code in similar way as compiler. It can instrument any golang project. It depends only on standard libraries and is platform agnostic. | | Solution | Description | |:-|:| | dmail | A simple way to capture a screenshot of a Sumo Logic Dashboard, which is then embedded into an email. | | livetail-cli | The Live Tail Command Line Interface (CLI) is a standalone application that allows you to start and use a Live Tail session from the command line, similar to tail -f The output is directed to stdout - so you can pipe the output to commands (grep, awk, etc.). For complete details, see Live Tail CLI. | | sumo-report-generator | This tool allows a user to execute multiple searches, and compile the data in a single report. Currently, the only format is Excel. Each tab in Excel would correspond to a search executed in Sumo Logic. NOTE: You must have access to the Sumo Search API in order to use this tool. | | sumobot | This solution is a Sumo Logic Slack bot. | | Terraform | Terraform provider for Sumo Logic. | | Tailing Sidecar | Tailing Sidecar is a streaming sidecar container, the cluster-level logging agent for Kubernetes. | | Solution | Description | |:-|:--| | collector-management-client | This solution is aPython script for quickly managing a subset of Installed Collectors. | | sumo-collector-puppet-module | This solution is a Puppet module for installing the Sumo Logic Collector. This downloads the Collector from the Internet, so Internet access is required on your machines. | | sumo-java-client | This library provides a Java client to execute searches on the data collected by Sumo Logic. | | sumo-powershell-sdk | This is a community-supported Windows PowerShell Module to work with the Sumo Logic REST API. It is free and open source, subject to the terms of the Apache 2.0 license. | | sumologic-collector-chef-cookbook | This solution is a Chef Cookbook for installing and configuring the Sumo Logic Collector. The Cookbook installs the Collector or updates an existing one if it was set to use Local Configuration File Management. | | sumologic-python-sdk | This solution is a Community-supported Python interface to the Sumo Logic REST API. | | Sumotoolbox | This is a GUI utility for accessing the various Sumo Logic APIs (currently the search, content, and collector APIs.) The idea is to make it easier to perform common API tasks such as copying sources and generating CSV files from searches. | Copyright 2024 by Sumo Logic, Inc." } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "StackState", "subcategory": "Observability" }
[ { "data": "These documentation pages cover all functionality in the StackState for Kubernetes troubleshooting product. Visit the Kubernetes quick start guide page. Search for it! Use the search bar on the top right. If you believe any documentation is missing, please let us know on the StackState support site. Any questions? We love to help! Find our support team on the StackState support site. Last updated 12 months ago Legal notices" } ]
{ "category": "Observability and Analysis", "file_name": "how-to-contribute-to-docs.md#adopters-logos.md", "project_name": "Thanos", "subcategory": "Observability" }
[ { "data": "./docs directory is used as markdown source files using blackfriday to render Thanos website resources. However the aim for those is to also have those *.md files renderable and usable (including links) via GitHub. To make that happen we use following rules and helpers that are listed here. Front Matter is essential on top of every markdown file if you want to link this file into any menu/submenu option. We use YAML formatting. This will render in GitHub as markdown just fine: ``` title: <title> type: ... weight: <weight> menu: <where to link files in> # This also is referred in permalinks. ``` Aim is to match linking behavior in website being THE SAME as Github. This means: Then everywhere use native markdown absolute path to the project repository if you want to link to exact commit e.g: ``` ``` Small post processing script adjusts link for Hugo rendering. NOTE: Spaces matters so: [xxx]( link and [xxx] (link will not work. Why? New menus .Site.Menus are added as soon as some file has Front Matter with certain menu. Keep menu the same as sub-directory the file is in. This will help to manage all docs. Show new menu section in main page by changing website/layouts/_default/baseof.html file. Wed love to showcase your companys logo on our main page and README! Requirements for the company: If all those are met, add yourself in website/data/adopters.yml like so: ``` name: My Awesome Company url: https://wwww.company.com logo: company.png ``` Copy your companys logo in website/static/logos, make sure it follows these rules: and create PR against Thanos main branch. We want all docs to not have any white noise. To achieve it, we provide cleanup-white-noise.sh under scripts to check. You can call it before a pull request, also PR test would call it too. On every PR we build the website and on success display the link to the preview under the checks at the bottom of the github PR. To test the changes to the docs locally just start serving the website by running the following command and you will be able to access the website on localhost:1313 by default: ``` make web-serve ``` We use Netlify for hosting. We are using Open Source license (PRO). Thanks Netlify for this! On every commit to main netlify runs CI that invokes make web (defined in netlify.toml) NOTE: Check for status badge in README for build status on the page. If main build for netlify succeed, the new content is published automatically. Found a typo, inconsistency or missing information in our docs? Help us to improve Thanos documentation by proposing a fix on GitHub here Thanos is an OSS licensed project as Apache License 2.0 Founded by Improbable" } ]
{ "category": "Observability and Analysis", "file_name": "docs.md", "project_name": "SkyWalking", "subcategory": "Observability" }
[ { "data": "This directory includes the official SkyWalking repositories and some ecosystem projects developed by the community. Looking for downloadable releases? You can find them in the Downloads page. A demo music application to showcase features of Apache SkyWalking in action. This is the repository including all source codes of https://skywalking.apache.org SkyWalking primary repository and docs. The native UI for SkyWalking v9 SkyWalking Grafana Plugins provide extensions to visualize topology on Grafana. The Java Agent for Apache SkyWalking, which provides the native tracing/metrics/logging/event/profiling abilities for Java projects. The Python Agent for Apache SkyWalking, which provides the native tracing/metrics/logging/profiling abilities for Python projects. The NodeJS Agent for Apache SkyWalking, which provides the native tracing abilities for NodeJS projects. The Go Agent for Apache SkyWalking, which provides the native tracing/metrics/logging abilities for Golang projects. The Rust Agent for Apache SkyWalking, which provides the native tracing/metrics/logging abilities for Rust projects. The PHP Agent for Apache SkyWalking, which provides the native tracing abilities for PHP projects. Apache SkyWalking Client-side JavaScript exception and tracing library. SkyWalking Nginx Agent provides the native tracing capability for Nginx powered by Nginx LUA module. SkyWalking Kong Agent provides the native tracing capability. A lightweight collector/sidecar that could be deployed close to the target (monitored) system, to collect metrics, traces, and logs. Watch, filter, and send Kubernetes events into Apache SkyWalking. Metrics collector and profiler powered by eBPF to diagnose CPU and network performance. SkyWalking CLI is a command interaction tool for SkyWalking users or OPS teams. SkyWalking Kubernetes Helm repository provides ways to install and configure SkyWalking in a Kubernetes cluster. The scripts are written in Helm 3. A bridge project between Apache SkyWalking and Kubernetes. Apache SkyWalking data collect protocol. Query Protocol defines the communication protocol in query stage. SkyWalking native UI and CLI use this protocol to fetch data from the backend consistently. Apache SkyWalking APIs in Golang An observability database aims to ingest, analyze and store Metrics, Tracing and Logging data. The client implementation for SkyWalking BanyanDB in Java BanyanDB Helm Chart repository provides ways to install and configure BanyanDB in a Kubernetes cluster. SkyWalking Agent Test Tool is a tremendously useful test tools suite in a wide variety of languages of Agent. Includes mock collector and validator. A full-featured license tool to check and fix license headers and resolve dependencies' licenses. An End-to-End Testing framework that aims to help developers to set up, debug, and verify E2E tests with ease. Docker files for Apache SkyWalking(version <= 8.7.0) javaagent, OAP, and UI. Apache SkyWalking UI for SkyWalking v6/v7/v8 The web UI for skywalking APM v5 Observability Analysis Language(OAL) Tool is a code generation tool for SkyWalking. From Nov. 6th 2018, merged into main codebase. SkyAPM-dotnet provides the native support agent in C# and .NETStandard platform, with the helps from Apache SkyWalking committer team. Distributed tracing and monitor SDK in CPP for Apache SkyWalking APM. JetBrains-powered plugin. Continuous Feedback for Developers / Feedback-Driven Development Tool. Java agent plugin extensions for Apache SkyWalking. 
A tool that helps locate the witness class for an Apache SkyWalking plugin. The Chinese (CN) translation of the Apache SkyWalking documentation. This is NOT official documentation and has been out-dated for years. A 3rd-party transporter implementation for Apache SkyWalking. No one is maintaining this. Replaced by the https://skywalking.apache.org/docs/#GoAgent auto-instrument agent. Replaced by the https://skywalking.apache.org/docs/#GoAgent auto-instrument agent. Replaced by https://skywalking.apache.org/docs/#PHPAgent Replaced by https://skywalking.apache.org/docs/#NodeJSAgent The SkyWalking team announced that BanyanDB has replaced Elasticsearch and is up and running on the SkyWalking official demo site." } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Tracetest", "subcategory": "Observability" }
[ { "data": "Tracetest is a trace-based testing tool for integration and end-to-end testing using OpenTelemetry traces. Verify end-to-end transactions and side-effects across microservices & event-driven apps by using trace data as test specs. Set up Tracetest and start trace-based testing your distributed system. Check out the Tracetest GitHub repo! Please consider giving us a star! Configure app access & connect tracing backend or OTLP ingestion! Read about the concepts of trace-based testing to learn more! You can: Visually - Build tests in the Web UI Programmatically - Build tests in YAML ``` type: Test spec: id: Yg9sN-94g name: Pokeshop - Import description: Import a Pokemon trigger: type: http httpRequest: url: http://demo-api:8081/pokemon/import method: POST headers: - key: Content-Type value: application/json body: '{\"id\":52}' specs: - name: 'All Database Spans: Processing time is less than 100ms' selector: span[tracetest.span.type=\"database\"] assertions: - attr:tracetest.span.duration < 100ms ``` Tracetest is a cloud-native application, designed to run in the cloud. Get started in three ways. The easiest and most reliable way to test microservices and distributed apps with OpenTelemetry distributed traces is signing up for free at app.tracetest.io. After creating an account, getting started is as easy as installing the Tracetest Agent. Get the same experience as with the Cloud-based Managed Tracetest but self-hosted in your own infrastructure. Book a call to get into early access. Deploy a hobby self-hosted instance of Tracetest Core as a Docker container. It's not suitable for production, but a great way to start using Tracetest Core in local environments. Understand how Tracetest works. Our users are typically developers or QA engineers building distributed systems with microservices using back-end languages like Go, Rust, Node.js and Python. Tracetest can be compared with Cypress or Selenium; however Tracetest is fundamentally different. Cypress and Selenium are constrained by using the browser for testing. Tracetest bypasses this entirely by using your existing OpenTelemetry instrumentation and trace data to run tests and assertions against traces in every step of a request transaction." } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Trickster", "subcategory": "Observability" }
[ { "data": "Explore how to use Trickster to accelerate your projects. If youre new to Trickster, check out the Overview and the Gettting Started. Get a high-level introduction to Trickster. Try Trickster with Docker Compose and minimal setup. The following articles cover getting up and running with Trickster. The following articles cover queries with Trickster. What can your user do with your project? The following articles exlpain how Trickster leverages caching. The following articles exlpain how Trickster works with origins. The following articles provide information on application load balancers and health checks. The following articles cover tracing and metrics in Trickster. The following articles cover customizing and extending Trickster. Here you can find information on the latest Trickster updates and our plans going forward. Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve. 2021 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Observability and Analysis", "file_name": "deployment.md", "project_name": "Vector", "subcategory": "Observability" }
[ { "data": "Meta Security Releases Versioning Vector is an end-to-end data pipeline designed to collect, process, and route data. This means that Vector serves all roles in building your pipeline. You can deploy it as an agent, sidecar, or aggregator. You combine these roles to form topologies. In this section, well cover each role in detail and help you understand when to use each. The Aggregator role is designed for central processing, collecting data from multiple upstream sources and performing cross-host aggregation and analysis. For Vector, this role should be reserved for exactly that: cross-host aggregation and analysis. Vector is unique in the fact that it can serve both as an Agent and an Aggregator. This makes it possible to distribute processing along the edge (recommended). We highly recommend pushing processing to the edge when possible since it is more efficient and easier to manage. You can install the Vector as an Aggregator on Kubernetes using Helm. For more information about getting started with the Aggregator role, see the Helm install docs. On this page Meta Security Releases Versioning Sign up to receive emails on the latest Vector content and new releases Thank you for joining our Updates Newsletter 2024 Datadog, Inc. All rights reserved." } ]
{ "category": "Observability and Analysis", "file_name": ".md", "project_name": "Vector", "subcategory": "Observability" }
[ { "data": "Meta Security Releases Versioning Split a stream of events into multiple sub-streams based on user-supplied conditions ``` { \"transforms\": { \"mytransformid\": { \"type\": \"route\", \"inputs\": [ \"my-source-or-transform-id\" ] } } }``` ``` [transforms.mytransformid] type = \"route\" inputs = [ \"my-source-or-transform-id\" ] ``` ``` transforms: mytransformid: type: route inputs: my-source-or-transform-id ``` ``` { \"transforms\": { \"mytransformid\": { \"type\": \"route\", \"inputs\": [ \"my-source-or-transform-id\" ], \"reroute_unmatched\": true } } }``` ``` [transforms.mytransformid] type = \"route\" inputs = [ \"my-source-or-transform-id\" ] reroute_unmatched = true ``` ``` transforms: mytransformid: type: route inputs: my-source-or-transform-id reroute_unmatched: true ``` A list of upstream source or transform IDs. Wildcards (*) are supported. See configuration for more info. ``` [ \"my-source-or-transform-id\", \"prefix-*\" ]``` Reroutes unmatched events to a named output instead of silently discarding them. Normally, if an event doesnt match any defined route, it is sent to the <transformname>.unmatched output for further processing. In some cases, you may want to simply discard unmatched events and not process them any further. In these cases, rerouteunmatched can be set to false to disable the <transformname>._unmatched output and instead silently discard any unmatched events. A table of route identifiers to logical conditions representing the filter of the route. Each route can then be referenced as an input by other components with the name <transformname>.<routeid>. If an event doesnt match any route, and if reroute_unmatched is set to true (the default), it is sent to the <transformname>.unmatched output. Otherwise, the unmatched event is instead silently discarded. Both unmatched, as well as default, are reserved output names and thus cannot be used as a route name. | Syntax | Description | Example | |:|:--|:| | vrl | A Vector Remap Language (VRL) Boolean expression. | .status_code != 200 && !includes([\"info\", \"debug\"], .severity) | | datadog_search | A Datadog Search query string. | *stack | | is_log | Whether the incoming event is a log. | nan | | is_metric | Whether the incoming event is a metric. | nan | | is_trace | Whether the incoming event is a trace. | nan | If you opt for the vrl syntax for this condition, you can set the condition as a string via the condition parameter, without needing to specify both a source and a" }, { "data": "The table below shows some examples: | Config format | Example | |:-|:| | YAML | condition: .status == 200 | | TOML | condition = \".status == 200\" | | JSON | \"condition\": \".status == 200\" | ``` *: type: \"vrl\" source: \".status == 500\"``` ``` = { type = \"vrl\", source = \".status == 500\" }``` ``` \"*\": { \"type\": \"vrl\", \"source\": \".status == 500\" }``` ``` *: type: \"datadog_search\" source: \"*stack\"``` ``` = { type = \"datadog_search\", source = \"stack\" }``` ``` \"*\": { \"type\": \"datadog_search\", \"source\": \"*stack\" }``` ``` *: \".status == 500\"``` ``` = \".status == 500\"``` ``` \"*\": \".status == 500\"``` A histogram of the number of events passed in each internal batch in Vectors internal topology. Note that this is separate than sink-level batching. It is mostly useful for low level debugging performance issues in Vector due to small internal batches. 
``` { \"log\": { \"level\": \"info\" } }``` ``` transforms: mytransformid: type: route inputs: my-source-or-transform-id route: debug: .level == \"debug\" info: .level == \"info\" warn: .level == \"warn\" error: .level == \"error\" ``` ``` [transforms.mytransformid] type = \"route\" inputs = [ \"my-source-or-transform-id\" ] [transforms.mytransformid.route] debug = '.level == \"debug\"' info = '.level == \"info\"' warn = '.level == \"warn\"' error = '.level == \"error\"' ``` ``` { \"transforms\": { \"mytransformid\": { \"type\": \"route\", \"inputs\": [ \"my-source-or-transform-id\" ], \"route\": { \"debug\": \".level == \\\"debug\\\"\", \"info\": \".level == \\\"info\\\"\", \"warn\": \".level == \\\"warn\\\"\", \"error\": \".level == \\\"error\\\"\" } } } }``` ``` { \"level\": \"info\" }``` ``` { \"metric\": { \"counter\": { \"value\": 10000 }, \"kind\": \"absolute\", \"name\": \"memoryavailablebytes\", \"namespace\": \"host\", \"tags\": {} } }``` ``` transforms: mytransformid: type: route inputs: my-source-or-transform-id route: app: .namespace == \"app\" host: .namespace == \"host\" ``` ``` [transforms.mytransformid] type = \"route\" inputs = [ \"my-source-or-transform-id\" ] [transforms.mytransformid.route] app = '.namespace == \"app\"' host = '.namespace == \"host\"' ``` ``` { \"transforms\": { \"mytransformid\": { \"type\": \"route\", \"inputs\": [ \"my-source-or-transform-id\" ], \"route\": { \"app\": \".namespace == \\\"app\\\"\", \"host\": \".namespace == \\\"host\\\"\" } } } }``` ``` { \"counter\": { \"value\": 10000 }, \"kind\": \"absolute\", \"name\": \"memoryavailablebytes\", \"namespace\": \"host\", \"tags\": {} }``` On this page Sign up to receive emails on the latest Vector content and new releases Thank you for joining our Updates Newsletter 2024 Datadog, Inc. All rights reserved." } ]
{ "category": "Observability and Analysis", "file_name": "going-to-prod.md", "project_name": "Vector", "subcategory": "Observability" }
[ { "data": "Meta Security Releases Versioning Store observability events in the AWS S3 object storage system ``` { \"sinks\": { \"mysinkid\": { \"type\": \"aws_s3\", \"inputs\": [ \"my-source-or-transform-id\" ], \"bucket\": \"my-bucket\" } } }``` ``` [sinks.mysinkid] type = \"aws_s3\" inputs = [ \"my-source-or-transform-id\" ] bucket = \"my-bucket\" ``` ``` sinks: mysinkid: type: aws_s3 inputs: my-source-or-transform-id bucket: my-bucket ``` ``` { \"sinks\": { \"mysinkid\": { \"type\": \"aws_s3\", \"inputs\": [ \"my-source-or-transform-id\" ], \"acl\": \"authenticated-read\", \"bucket\": \"my-bucket\", \"compression\": \"gzip\", \"content_encoding\": \"gzip\", \"content_type\": \"application/gzip\", \"endpoint\": \"http://127.0.0.0:5000/path/to/service\", \"filenameappenduuid\": true, \"filename_extension\": \"json\", \"filenametimeformat\": \"%s\", \"grantfullcontrol\": \"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be\", \"grant_read\": \"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be\", \"grantreadacp\": \"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be\", \"grantwriteacp\": \"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be\", \"key_prefix\": \"date=%F\", \"region\": \"us-east-1\", \"serversideencryption\": \"AES256\", \"ssekmskeyid\": \"abcd1234\", \"storage_class\": \"STANDARD\", \"tags\": { \"Classification\": \"confidential\", \"PHI\": \"True\", \"Project\": \"Blue\" }, \"timezone\": \"local\" } } }``` ``` [sinks.mysinkid] type = \"aws_s3\" inputs = [ \"my-source-or-transform-id\" ] acl = \"authenticated-read\" bucket = \"my-bucket\" compression = \"gzip\" content_encoding = \"gzip\" content_type = \"application/gzip\" endpoint = \"http://127.0.0.0:5000/path/to/service\" filenameappenduuid = true filename_extension = \"json\" filenametimeformat = \"%s\" grantfullcontrol = \"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be\" grant_read = \"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be\" grantreadacp = \"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be\" grantwriteacp = \"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be\" key_prefix = \"date=%F\" region = \"us-east-1\" serversideencryption = \"AES256\" ssekmskeyid = \"abcd1234\" storage_class = \"STANDARD\" timezone = \"local\" [sinks.mysinkid.tags] Classification = \"confidential\" PHI = \"True\" Project = \"Blue\" ``` ``` sinks: mysinkid: type: aws_s3 inputs: my-source-or-transform-id acl: authenticated-read bucket: my-bucket compression: gzip content_encoding: gzip content_type: application/gzip endpoint: http://127.0.0.0:5000/path/to/service filenameappenduuid: true filename_extension: json filenametimeformat: \"%s\" grantfullcontrol: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be grant_read: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be grantreadacp: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be grantwriteacp: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be key_prefix: date=%F region: us-east-1 serversideencryption: AES256 ssekmskeyid: abcd1234 storage_class: STANDARD tags: Classification: confidential PHI: \"True\" Project: Blue timezone: local ``` Controls how acknowledgements are handled for this sink. See End-to-end Acknowledgements for more information on how event acknowledgement is handled. Whether or not end-to-end acknowledgements are enabled. 
When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source. Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration. Canned ACL to apply to the created objects. For more information, see Canned ACL. | Option | Description | |:--|:-| | authenticated-read | Bucket/object can be read by authenticated users.The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AuthenticatedUsers grantee group is granted the READ permission. | | aws-exec-read | Bucket/object are private, and readable by EC2.The bucket/object owner is granted the FULL_CONTROL permission, and the AWS EC2 service is granted the READ permission for the purpose of reading Amazon Machine Image (AMI) bundles from the given bucket. | | bucket-owner-full-control | Object is semi-private.Both the object owner and bucket owner are granted the FULL_CONTROL permission.Only relevant when specified for an object: this canned ACL is otherwise ignored when specified for a bucket. | | bucket-owner-read | Object is private, except to the bucket owner.The object owner is granted the FULL_CONTROL permission, and the bucket owner is granted the READ permission.Only relevant when specified for an object: this canned ACL is otherwise ignored when specified for a bucket. | | log-delivery-write | Bucket can have logs written.The LogDelivery grantee group is granted WRITE and READ_ACP permissions.Only relevant when specified for a bucket: this canned ACL is otherwise ignored when specified for an object.For more information about logs, see Amazon S3 Server Access Logging. | | private | Bucket/object are" }, { "data": "bucket/object owner is granted the FULL_CONTROL permission, and no one else has access.This is the default. | | public-read | Bucket/object can be read publicly.The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AllUsers grantee group is granted the READ permission. | | public-read-write | Bucket/object can be read and written publicly.The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AllUsers grantee group is granted the READ and WRITE permissions.This is generally not recommended. | Bucket/object can be read by authenticated users. The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AuthenticatedUsers grantee group is granted the READ permission. Bucket/object are private, and readable by EC2. The bucket/object owner is granted the FULL_CONTROL permission, and the AWS EC2 service is granted the READ permission for the purpose of reading Amazon Machine Image (AMI) bundles from the given bucket. Object is semi-private. Both the object owner and bucket owner are granted the FULL_CONTROL permission. Only relevant when specified for an object: this canned ACL is otherwise ignored when specified for a bucket. Object is private, except to the bucket owner. The object owner is granted the FULL_CONTROL permission, and the bucket owner is granted the READ permission. Only relevant when specified for an object: this canned ACL is otherwise ignored when specified for a bucket. Bucket can have logs written. The LogDelivery grantee group is granted WRITE and READ_ACP permissions. Only relevant when specified for a bucket: this canned ACL is otherwise ignored when specified for an object. 
For more information about logs, see Amazon S3 Server Access Logging. Bucket/object are private. The bucket/object owner is granted the FULL_CONTROL permission, and no one else has access. This is the default. Bucket/object can be read publicly. The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AllUsers grantee group is granted the READ permission. Bucket/object can be read and written publicly. The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AllUsers grantee group is granted the READ and WRITE permissions. This is generally not recommended. ``` \"AKIAIOSFODNN7EXAMPLE\"``` ``` \"arn:aws:iam::123456789098:role/my_role\"``` ``` \"/my/aws/credentials\"``` ``` \"randomEXAMPLEidString\"``` Timeout for successfully loading any credentials, in seconds. Relevant when the default credentials chain or assume_role is used. ``` 30``` The credentials profile to use. Used to select AWS credentials from a provided credentials file. ``` \"develop\"``` The AWS region to send STS requests to. If not set, this defaults to the configured region for the service itself. ``` \"us-west-2\"``` ``` \"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\"``` The maximum size of a batch that is processed by a sink. This is based on the uncompressed size of the batched events, before they are serialized/compressed. The S3 bucket name. This must not include a leading s3:// or a trailing /. ``` \"my-bucket\"``` Configures the buffering behavior for this sink. More information about the individual buffer types, and buffer behavior, can be found in the Buffering Model section. The maximum size of the buffer on disk. Must be at least ~256 megabytes (268435488 bytes). | Option | Description | |:|:--| | disk | Events are buffered on disk.This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes.Data is synchronized to disk every 500ms. | | memory | Events are buffered in memory.This is more performant, but less" }, { "data": "Data will be lost if Vector is restarted forcefully or crashes. | Events are buffered on disk. This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes. Data is synchronized to disk every 500ms. Events are buffered in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully or crashes. | Option | Description | |:|:-| | block | Wait for free space in the buffer.This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. This means that while no data is lost, data will pile up at the edge. | | drop_newest | Drops the event instead of waiting for free space in buffer.The event will be intentionally dropped. This mode is typically used when performance is the highest priority, and it is preferable to temporarily lose events rather than cause a slowdown in the acceptance/consumption of events. | Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. This means that while no data is lost, data will pile up at the edge. Drops the event instead of waiting for free space in buffer. The event will be intentionally dropped. 
This mode is typically used when performance is the highest priority, and it is preferable to temporarily lose events rather than cause a slowdown in the acceptance/consumption of events. Compression configuration. All compression algorithms use the default compression level unless otherwise specified. Some cloud storage API clients and browsers handle decompression transparently, so depending on how they are accessed, files may not always appear to be compressed. | Option | Description | |:|:--| | gzip | Gzip compression. | | none | No compression. | | snappy | Snappy compression. | | zlib | Zlib compression. | | zstd | Zstandard compression. | Overrides what content encoding has been applied to the object. Directly comparable to the Content-Encoding HTTP header. If not specified, the compression scheme used dictates this value. ``` \"gzip\"``` Overrides the MIME type of the object. Directly comparable to the Content-Type HTTP header. If not specified, the compression scheme used dictates this value. When compression is set to none, the value text/x-log is used. ``` \"application/gzip\"``` ``` \"{ \\\"type\\\": \\\"record\\\", \\\"name\\\": \\\"log\\\", \\\"fields\\\": [{ \\\"name\\\": \\\"message\\\", \\\"type\\\": \\\"string\\\" }] }\"``` | Option | Description | |:|:| | avro | Encodes an event as an Apache Avro message. | | csv | Encodes an event as a CSV message.This codec must be configured with fields to encode. | | gelf | Encodes an event as a GELF message.This codec is experimental for the following reason:The GELF specification is more strict than the actual Graylog receiver. Vectors encoder currently adheres more strictly to the GELF spec, with the exception that some characters such as @ are allowed in field names.Other GELF codecs such as Lokis, use a Go SDK that is maintained by Graylog, and is much more relaxed than the GELF spec.Going forward, Vector will use that Go SDK as the reference implementation, which means the codec may continue to relax the enforcement of specification. | | json | Encodes an event as JSON. | | logfmt | Encodes an event as a logfmt" }, { "data": "| | native | Encodes an event in the native Protocol Buffers format.This codec is experimental. | | native_json | Encodes an event in the native JSON format.This codec is experimental. | | protobuf | Encodes an event as a Protobuf message. | | raw_message | No encoding.This encoding uses the message field of a log event.Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. | | text | Plain text encoding.This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. | Encodes an event as a CSV message. This codec must be configured with fields to encode. Encodes an event as a GELF message. This codec is experimental for the following reason: The GELF specification is more strict than the actual Graylog receiver. Vectors encoder currently adheres more strictly to the GELF spec, with the exception that some characters such as @ are allowed in field names. 
Other GELF codecs such as Lokis, use a Go SDK that is maintained by Graylog, and is much more relaxed than the GELF spec. Going forward, Vector will use that Go SDK as the reference implementation, which means the codec may continue to relax the enforcement of specification. Encodes an event in the native Protocol Buffers format. This codec is experimental. Encodes an event in the native JSON format. This codec is experimental. No encoding. This encoding uses the message field of a log event. Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. Plain text encoding. This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format. Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. ``` \"avro\"``` ``` \"csv\"``` ``` \"gelf\"``` ``` \"json\"``` ``` \"logfmt\"``` ``` \"native\"``` ``` \"native_json\"``` ``` \"protobuf\"``` ``` \"raw_message\"``` ``` \"text\"``` Enable double quote escapes. This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled. The escape character to use when writing CSV. In some variants of CSV, quotes are escaped using a special escape character like \\ (instead of escaping quotes by doubling them). To use this, double_quotes needs to be disabled as well otherwise it is ignored. Configures the fields that will be encoded, as well as the order in which they appear in the output. If a field is not present in the event, the output will be an empty" }, { "data": "Values of type Array, Object, and Regex are not supported and the output will be an empty string. | Option | Description | |:|:--| | always | Always puts quotes around every field. | | necessary | Puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter, or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field). | | never | Never writes quotes, even if it produces invalid CSV data. | | non_numeric | Puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes are used even if they arent strictly necessary. | Controls how metric tag values are encoded. When set to single, only the last non-bare value of tags are displayed with the metric. When set to full, all metric tags are exposed as separate assignments. | Option | Description | |:|:--| | full | All tags are exposed as arrays of either string or null values. | | single | Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored. | The path to the protobuf descriptor set file. This file is the output of protoc -o <path> ... ``` \"/etc/vector/protobufdescriptorset.desc\"``` ``` \"package.Message\"``` | Option | Description | |:--|:| | rfc3339 | Represent the timestamp as a RFC 3339 timestamp. | | unix | Represent the timestamp as a Unix timestamp. | | unix_float | Represent the timestamp as a Unix timestamp in floating point. 
| | unix_ms | Represent the timestamp as a Unix timestamp in milliseconds. | | unix_ns | Represent the timestamp as a Unix timestamp in nanoseconds. | | unix_us | Represent the timestamp as a Unix timestamp in microseconds | ``` \"http://127.0.0.0:5000/path/to/service\"``` Whether or not to append a UUID v4 token to the end of the object key. The UUID is appended to the timestamp portion of the object key, such that if the object key generated is date=2022-07-18/1658176486, setting this field to true results in an object key that looks like date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547. This ensures there are no name collisions, and can be useful in high-volume workloads where object keys must be unique. The filename extension to use in the object key. This overrides setting the extension based on the configured compression. ``` \"json\"``` The timestamp format for the time component of the object key. By default, object keys are appended with a timestamp that reflects when the objects are sent to S3, such that the resulting object key is functionally equivalent to joining the key prefix with the formatted timestamp, such as date=2022-07-18/1658176486. This would represent a key_prefix set to date=%F/ and the timestamp of Mon Jul 18 2022 20:34:44 GMT+0000, with the filenametimeformat being set to %s, which renders timestamps in seconds since the Unix epoch. Supports the common strftime specifiers found in most languages. When set to an empty string, no timestamp is appended to the key prefix. | Option | Description | |:--|:--| | bytes | Event data is not delimited at all. | | character_delimited | Event data is delimited by a single ASCII (7-bit) character. | | length_delimited | Event data is prefixed with its length in bytes.The prefix is a 32-bit unsigned integer, little" }, { "data": "| | newline_delimited | Event data is delimited by a newline (LF) character. | Event data is prefixed with its length in bytes. The prefix is a 32-bit unsigned integer, little endian. ``` \"bytes\"``` ``` \"character_delimited\"``` ``` \"length_delimited\"``` ``` \"newline_delimited\"``` Grants READ, READACP, and WRITEACP permissions on the created objects to the named grantee. This allows the grantee to read the created objects and their metadata, as well as read and modify the ACL on the created objects. ``` \"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be\"``` ``` \"person@email.com\"``` ``` \"http://acs.amazonaws.com/groups/global/AllUsers\"``` Grants READ permissions on the created objects to the named grantee. This allows the grantee to read the created objects and their metadata. ``` \"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be\"``` ``` \"person@email.com\"``` ``` \"http://acs.amazonaws.com/groups/global/AllUsers\"``` Grants READ_ACP permissions on the created objects to the named grantee. This allows the grantee to read the ACL on the created objects. ``` \"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be\"``` ``` \"person@email.com\"``` ``` \"http://acs.amazonaws.com/groups/global/AllUsers\"``` Grants WRITE_ACP permissions on the created objects to the named grantee. This allows the grantee to modify the ACL on the created objects. ``` \"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be\"``` ``` \"person@email.com\"``` ``` \"http://acs.amazonaws.com/groups/global/AllUsers\"``` A list of upstream source or transform IDs. Wildcards (*) are supported. See configuration for more info. 
``` [ \"my-source-or-transform-id\", \"prefix-*\" ]``` A prefix to apply to all object keys. Prefixes are useful for partitioning objects, such as by creating an object key that stores objects under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path. A trailing / is not automatically added. ``` \"date=%F/hour=%H\"``` ``` \"year=%Y/month=%m/day=%d\"``` ``` \"applicationid={{ applicationid }}/date=%F\"``` Proxy configuration. Configure to proxy traffic through an HTTP(S) proxy when making external requests. Similar to common proxy configuration convention, you can set different proxies to use based on the type of traffic being proxied, as well as set specific hosts that should not be proxied. Proxy endpoint to use when proxying HTTP traffic. Must be a valid URI string. ``` \"http://foo.bar:3128\"``` Proxy endpoint to use when proxying HTTPS traffic. Must be a valid URI string. ``` \"http://foo.bar:3128\"``` A list of hosts to avoid proxying. Multiple patterns are allowed: | Pattern | Example match | |:--|:| | Domain names | example.com matches requests to example.com | | Wildcard domains | .example.com matches requests to example.com and its subdomains | | IP addresses | 127.0.0.1 matches requests to 127.0.0.1 | | CIDR blocks | 192.168.0.0/16 matches requests to any IP addresses in this range | | Splat | * matches all hosts | ``` \"us-east-1\"``` Middleware settings for outbound requests. Various settings can be configured, such as concurrency and rate limits, timeouts, retry behavior, etc. Note that the retry backoff policy follows the Fibonacci sequence. Configuration of adaptive concurrency parameters. These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution. The fraction of the current value to set the new concurrency limit when decreasing the limit. Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases. Note that the new limit is rounded down after applying this ratio. The weighting of new measurements compared to older measurements. Valid values are greater than 0 and less than 1. ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current" }, { "data": "Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability. The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency). It is recommended to set this value to your services average limit if youre seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptiveconcurrencylimit metric. The maximum concurrency limit. The adaptive request concurrency limit will not go above this bound. This is put in place as a safeguard. Scale of RTT deviations which are not considered anomalous. Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0. When calculating the past RTT average, we also compute a secondary deviation value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. 
Larger values cause the algorithm to ignore larger increases in the RTT. Configuration for outbound request concurrency. This can be set either to one of the below enum values or to a positive integer, which denotes a fixed concurrency limit. | Option | Description | |:|:--| | adaptive | Concurrency will be managed by Vectors Adaptive Request Concurrency feature. | | none | A fixed concurrency of 1.Only one request can be outstanding at any given time. | A fixed concurrency of 1. Only one request can be outstanding at any given time. The amount of time to wait before attempting the first retry for a failed request. After the first retry has failed, the fibonacci sequence is used to select future backoffs. | Option | Description | |:|:-| | Full | Full jitter.The random delay is anywhere from 0 up to the maximum current delay calculated by the backoff strategy.Incorporating full jitter into your backoff strategy can greatly reduce the likelihood of creating accidental denial of service (DoS) conditions against your own systems when many clients are recovering from a failure state. | | None | No jitter. | Full jitter. The random delay is anywhere from 0 up to the maximum current delay calculated by the backoff strategy. Incorporating full jitter into your backoff strategy can greatly reduce the likelihood of creating accidental denial of service (DoS) conditions against your own systems when many clients are recovering from a failure state. The time a request can take before being aborted. Datadog highly recommends that you do not lower this value below the services internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream. AWS S3 Server-Side Encryption algorithms. The Server-side Encryption algorithm used when storing these objects. | Option | Description | |:|:-| | AES256 | Each object is encrypted with AES-256 using a unique key.This corresponds to the SSE-S3 option. | | aws:kms | Each object is encrypted with AES-256 using keys managed by AWS" }, { "data": "on whether or not a KMS key ID is specified, this corresponds either to the SSE-KMS option (keys generated/managed by KMS) or the SSE-C option (keys generated by the customer, managed by KMS). | Each object is encrypted with AES-256 using a unique key. This corresponds to the SSE-S3 option. Each object is encrypted with AES-256 using keys managed by AWS KMS. Depending on whether or not a KMS key ID is specified, this corresponds either to the SSE-KMS option (keys generated/managed by KMS) or the SSE-C option (keys generated by the customer, managed by KMS). Specifies the ID of the AWS Key Management Service (AWS KMS) symmetrical customer managed customer master key (CMK) that is used for the created objects. Only applies when serversideencryption is configured to use KMS. If not specified, Amazon S3 uses the AWS managed CMK in AWS to protect the data. ``` \"abcd1234\"``` The storage class for the created objects. See the S3 Storage Classes for more details. | Option | Description | |:--|:--| | DEEP_ARCHIVE | Glacier Deep Archive. | | EXPRESS_ONEZONE | High Performance (single Availability zone). | | GLACIER | Glacier Flexible Retrieval. | | INTELLIGENT_TIERING | Intelligent Tiering. | | ONEZONE_IA | Infrequently Accessed (single Availability zone). | | REDUCED_REDUNDANCY | Reduced Redundancy. | | STANDARD | Standard Redundancy. | | STANDARD_IA | Infrequently Accessed. | Timezone to use for any date specifiers in template strings. 
This can refer to any valid timezone as defined in the TZ database, or local which refers to the system local timezone. It will default to the globally configured timezone. ``` \"local\"``` ``` \"America/New_York\"``` ``` \"EST5EDT\"``` Sets the list of supported ALPN protocols. Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order that they are defined. Absolute path to an additional CA certificate file. The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format. ``` \"/path/to/certificate_authority.crt\"``` Absolute path to a certificate file used to identify this server. The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format. If this is set, and is not a PKCS#12 archive, key_file must also be set. ``` \"/path/to/host_certificate.crt\"``` Absolute path to a private key file used to identify this server. The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format. ``` \"/path/to/host_certificate.key\"``` Passphrase used to unlock the encrypted key file. This has no effect unless key_file is set. ``` \"${KEYPASSENV_VAR}\"``` ``` \"PassWord1\"``` Enables certificate verification. For components that create a server, this requires that the client connections have a valid client certificate. For components that initiate requests, this validates that the upstream has a valid certificate. If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate. Do NOT set this to false unless you understand the risks of not verifying the validity of certificates. Enables hostname" }, { "data": "If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension. Only relevant for outgoing connections. Do NOT set this to false unless you understand the risks of not verifying the remote hostname. A histogram of the number of events passed in each internal batch in Vectors internal topology. Note that this is separate than sink-level batching. It is mostly useful for low level debugging performance issues in Vector due to small internal batches. | Policy | Required for | Required when | |:--|:|-:| | s3:ListBucket | healthcheck | nan | | s3:PutObject | operation | nan | Vector checks for AWS credentials in the following order: If no credentials are found, Vectors health check fails and an error is logged. This component buffers & batches data as shown in the diagram above. Youll notice that Vector treats these concepts differently, instead of treating them as global concepts, Vector treats them as sink specific concepts. This isolates sinks, ensuring services disruptions are contained and delivery guarantees are honored. Batches are flushed when 1 of 2 conditions are met: Buffers are controlled via the buffer.* options. 
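Pulling the batching and buffering options above together, here is a sketch (not from the reference itself; the component ID, bucket name, sizes, and timeouts are illustrative assumptions, not recommendations) of an aws_s3 sink that flushes batches on size or time and falls back to a disk buffer under backpressure:

```yaml
sinks:
  s3_archive:                    # hypothetical component ID
    type: aws_s3
    inputs: ["my-source-or-transform-id"]
    bucket: my-bucket
    key_prefix: "date=%F/"
    compression: gzip
    encoding:
      codec: json
    framing:
      method: newline_delimited
    batch:
      max_bytes: 10000000        # flush when ~10 MB of uncompressed events accumulate...
      timeout_secs: 300          # ...or after 5 minutes, whichever comes first
    buffer:
      type: disk
      max_size: 268435488        # the documented minimum disk buffer size (~256 MB)
      when_full: block           # apply backpressure rather than drop events
```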
If youd like to exit immediately upon a health check failure, you can pass the --require-healthy flag: ``` vector --config /etc/vector/vector.yaml --require-healthy ``` Vector uses two different naming schemes for S3 objects. If you set the compression parameter to true (this is the default), Vector uses this scheme: ``` <key_prefix><timestamp>-<uuidv4>.log.gz ``` If compression isnt enabled, Vector uses this scheme (only the file extension is different): ``` <key_prefix><timestamp>-<uuidv4>.log ``` Some sample S3 object names (with and without compression, respectively): ``` date=2019-06-18/1560886634-fddd7a0e-fad9-4f7e-9bce-00ae5debc563.log.gz date=2019-06-18/1560886634-fddd7a0e-fad9-4f7e-9bce-00ae5debc563.log ``` Vector appends a UUIDV4 token to ensure there are no naming conflicts in the unlikely event that two Vector instances are writing data at the same time. You can control the resulting name via the key_prefix, filenametimeformat, and filenameappenduuid options. For example, to store objects at the root S3 folder, without a timestamp or UUID use these configuration options: ``` keyprefix = \"{{ myfile_name }}\" filenametimeformat = \"\" filenameappenduuid = false ``` Vector currently only supports AWS S3 object tags and does not support object metadata. If you require metadata support see issue #1694. We believe tags are more flexible since they are separate from the actual S3 object. You can freely modify tags without modifying the object. Conversely, object metadata requires a full rewrite of the object to make changes. Adaptive Request Concurrency is a feature of Vector that does away with static concurrency limits and automatically optimizes HTTP concurrency based on downstream service responses. The underlying mechanism is a feedback loop inspired by TCP congestion control algorithms. Checkout the announcement blog post, We highly recommend enabling this feature as it improves performance and reliability of Vector and the systems it communicates with. As such, we have made it the default, and no further configuration is required. If Adaptive Request Concurrency is not for you, you can manually set static concurrency limits by specifying an integer for request.concurrency: ``` sinks: my-sink: request: concurrency: 10``` In addition to limiting request concurrency, you can also limit the overall request throughput via the request.ratelimitdurationsecs and request.ratelimit_num options. ``` sinks: my-sink: request: ratelimitduration_secs: 1 ratelimitnum: 10 ``` These will apply to both adaptive and fixed request.concurrency values. On this page Sign up to receive emails on the latest Vector content and new releases Thank you for joining our Updates Newsletter 2024 Datadog, Inc. All rights reserved." } ]
{ "category": "Observability and Analysis", "file_name": "metric.md", "project_name": "Vector", "subcategory": "Observability" }
[ { "data": "Meta Security Releases Versioning This section covers deploying Vector. Because Vector is an end-to-end platform, you can deploy it under various roles. Vector is efficient enough to deploy as an agent and powerful enough to deploy as an aggregator. By combining these roles you can build robust and flexible topologies that fit into any infrastructure. Start by becoming familiar with the deployment roles and then take a closer look at example topologies. On this page Meta Security Releases Versioning Sign up to receive emails on the latest Vector content and new releases Thank you for joining our Updates Newsletter 2024 Datadog, Inc. All rights reserved." } ]
{ "category": "Observability and Analysis", "file_name": "raspbian.md", "project_name": "Vector", "subcategory": "Observability" }
[ { "data": "Meta Security Releases Versioning A domain-specific language for modifying your observability data Vector Remap Language (VRL) is an expression-oriented language designed for transforming observability data (logs and metrics) in a safe and performant manner. It features a simple syntax and a rich set of built-in functions tailored specifically to observability use cases. You can use VRL in Vector via the remap transform. For a more in-depth picture, see the announcement blog post. VRL programs act on a single observability event and can be used to: Those programs are specified as part of your Vector configuration. Heres an example remap transform that contains a VRL program in the source field: ``` [transforms.modify] type = \"remap\" inputs = [\"logs\"] source = ''' del(.user_info) .timestamp = now() ''' ``` This program changes the contents of each event that passes through this transform, deleting the user_info field and adding a timestamp to the event. Lets have a look at a more complex example. Imagine that youre working with HTTP log events that look like this: ``` \"{\\\"status\\\":200,\\\"timestamp\\\":\\\"2021-03-01T19:19:24.646170Z\\\",\\\"message\\\":\\\"SUCCESS\\\",\\\"username\\\":\\\"ub40fan4life\\\"}\" ``` You want to apply these changes to each event: This VRL program would accomplish all of that: ``` . = parse_json!(string!(.message)) .timestamp = tounixtimestamp(to_timestamp!(.timestamp)) del(.username) .message = downcase(string!(.message)) ``` Finally, the resulting event: ``` { \"message\": \"success\", \"status\": 200, \"timestamp\": 1614626364 } ``` The JSON parsing program in the example above modifies the contents of each event. But you can also use VRL to specify conditions, which convert events into a single Boolean expression. Heres an example filter transform that filters out all messages for which the severity field equals \"info\": ``` [transforms.filteroutinfo] type = \"filter\" inputs = [\"logs\"] condition = '.severity != \"info\"' ``` Conditions can also be more multifaceted. This condition would filter out all events for which the severity field is \"info\", the status_code field is greater than or equal to 400, and the host field isnt set: ``` condition = '.severity != \"info\" && .status_code < 400 && exists(.host) ``` All language constructs are contained in the following reference pages. Use these references as you write your VRL programs: VRL is designed to minimize the learning curve. These resources can help you get acquainted with Vector and VRL: There is an online VRL playground, where you can experiment with VRL. Some functions are currently unsupported on the playground. Functions that are currently not supported can be found with this issue filter VRL is built by the Vector team and its development is guided by two core goals, safety and performance, without compromising on flexibility. This makes VRL ideal for critical, performance-sensitive infrastructure, like observability" }, { "data": "To illustrate how we achieve these, below is a VRL feature matrix across these principles: | Feature | Safety | Performance | |:--|:|:--| | Compilation | | | | Ergonomic safety | | | | Fail safety | | nan | | Memory safety | | nan | | Vector and Rust native | | | | Statelessness | | | VRL has some core concepts that you should be aware of as you dive in. VRL offers two functions that you can use to assert that VRL values conform to your expectations: assert and assert_eq. 
assert aborts the VRL program and logs an error if the provided Boolean expression evaluates to false, while assert_eq fails logs an error if the provided values arent equal. Both functions also enable you to provide custom log messages to be emitted upon failure. When running Vector, assertions can be useful in situations where you need to be notified when any observability event fails a condition. When writing unit tests, assertions can provide granular insight into which test conditions have failed and why. VRL programs operate on observability events. This VRL program, for example, adds a field to a log event: ``` .new_field = \"new value\" ``` The event at hand, represented by ., is the entire context of the VRL program. The event can be set to a value other than an object, for example . = 5. If it is set to an array, each element of that array is emitted as its own event from the remap transform. For any elements that arent an object, or if the top-level . is set to a scalar value, that value is set as the message key on the emitted object. This expression, for example&mldr; ``` . = [\"hello\", 1, true, { \"foo\": \"bar\" }] ``` &mldr;results in these four events being emitted: ``` { \"message\": \"hello\" } { \"message\": 1 } { \"message\": true } { \"foo\": \"bar\" } ``` Path expressions enable you to access values inside the event: ``` .kubernetes.pod_id ``` VRL functions can be marked as deprecated. When a function is deprecated, a warning will be shown at runtime. Suggestions on how to update the VRL program can usually be found in the actual warning and the function documentation. Some VRL functions are fallible, meaning that they can error. Any potential errors thrown by fallible functions must be handled, a requirement enforced at compile time. This feature of VRL programs, which we call fail safety, is a defining characteristic of VRL and a primary source of its safety guarantees. VRL programs are compiled to and run as native Rust code. This has several important implications: VRL strives to provide high-quality, helpful error messages, streamlining the development and iteration workflow around VRL" }, { "data": "This VRL program, for example&mldr; ``` parse_json!(1) ``` &mldr;would result in this error: ``` error[E110]: invalid argument type :2:13 2 parse_json!(1) ^ this expression resolves to the exact type integer but the parameter \"value\" expects the exact type string = try: ensuring an appropriate type at runtime = = 1 = string!(1) = parse_json!(1) = = try: coercing to an appropriate type and specifying a default value as a fallback in case coercion fails = = 1 = to_string(1) ?? \"default\" = parse_json!(1) = = see documentation about error handling at https://errors.vrl.dev/#handling = learn more about error code 110 at https://errors.vrl.dev/110 = see language documentation at https://vrl.dev = try your code in the VRL REPL, learn more at https://vrl.dev/examples ``` VRLs type-safety is progressive, meaning it will implement type-safety for any value for which it knows the type. Because observability data can be quite unpredictable, its not always known which type a field might be, hence the progressive nature of VRLs type-safety. As VRL scripts are evaluated, type information is built up and used at compile-time to enforce type-safety. 
Lets look at an example: ``` .foo # any .foo = downcase!(.foo) # string .foo = upcase(.foo) # string ``` Breaking down the above: To avoid error handling for argument errors, you can specify the types of your fields at the top of your VRL script: ``` .foo = string!(.foo) # string .foo = downcase(.foo) # string ``` This is generally good practice, and it provides the ability to opt-into type safety as you see fit, VRL scripts are written once and evaluated many times, therefore the tradeoff for type safety will ensure reliable production execution. ``` { \"message\": \"\\u003c102\\u003e1 2020-12-22T15:22:31.111Z vector-user.biz su 2666 ID389 - Something went wrong\" }``` ``` . |= parse_syslog!(.message)``` ``` { \"log\": { \"appname\": \"su\", \"facility\": \"ntp\", \"hostname\": \"vector-user.biz\", \"message\": \"Something went wrong\", \"msgid\": \"ID389\", \"procid\": 2666, \"severity\": \"info\", \"timestamp\": \"2020-12-22T15:22:31.111Z\", \"version\": 1 } }``` ``` { \"message\": \"@timestamp=\\\"Sun Jan 10 16:47:39 EST 2021\\\" level=info msg=\\\"Stopping all fetchers\\\" tag#production=stopping_fetchers id=ConsumerFetcherManager-1382721708341 module=kafka.consumer.ConsumerFetcherManager\" }``` ``` . = parsekeyvalue!(.message)``` ``` { \"log\": { \"@timestamp\": \"Sun Jan 10 16:47:39 EST 2021\", \"id\": \"ConsumerFetcherManager-1382721708341\", \"level\": \"info\", \"module\": \"kafka.consumer.ConsumerFetcherManager\", \"msg\": \"Stopping all fetchers\", \"tag#production\": \"stopping_fetchers\" } }``` ``` { \"message\": \"2021/01/20 06:39:15 +0000 [error] 17755#17755: *3569904 open() \\\"/usr/share/nginx/html/test.php\\\" failed (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: localhost, request: \\\"GET /test.php HTTP/1.1\\\", host: \\\"yyy.yyy.yyy.yyy\\\"\" }``` ``` . |= parse_regex!(.message, r'^(?P<timestamp>\\d+/\\d+/\\d+ \\d+:\\d+:\\d+ \\+\\d+) \\[(?P<severity>\\w+)\\] (?P<pid>\\d+)#(?P<tid>\\d+):(?: \\(?P<connid>\\d+))? (?P<message>.)$') .timestamp = parse_timestamp(.timestamp, \"%Y/%m/%d %H:%M:%S %z\") ?? now() .pid = to_int!(.pid) .tid = to_int!(.tid) message_parts = split(.message, \", \", limit: 2) structured = parsekeyvalue(messageparts[1], keyvaluedelimiter: \":\", fielddelimiter: \",\") ?? {} .message = message_parts[0] . = merge(., structured)``` ``` { \"log\": { \"client\": \"xxx.xxx.xxx.xxx\", \"connid\": \"3569904\", \"host\": \"yyy.yyy.yyy.yyy\", \"message\": \"open() \\\"/usr/share/nginx/html/test.php\\\" failed (2: No such file or directory)\", \"pid\": 17755, \"request\": \"GET /test.php" }, { "data": "\"server\": \"localhost\", \"severity\": \"error\", \"tid\": 17755, \"timestamp\": \"2021-01-20T06:39:15Z\" } }``` ``` { \"message\": \"\\u003c102\\u003e1 2020-12-22T15:22:31.111Z vector-user.biz su 2666 ID389 - Something went wrong\" }``` ``` structured = parse_syslog(.message) ?? parsecommonlog(.message) ?? parse_regex!(.message, r'^(?P<timestamp>\\d+/\\d+/\\d+ \\d+:\\d+:\\d+) \\[(?P<severity>\\w+)\\] (?P<pid>\\d+)#(?P<tid>\\d+):(?: \\(?P<connid>\\d+))? (?P<message>.)$') . 
= merge(., structured)``` ``` { \"log\": { \"appname\": \"su\", \"facility\": \"ntp\", \"hostname\": \"vector-user.biz\", \"message\": \"Something went wrong\", \"msgid\": \"ID389\", \"procid\": 2666, \"severity\": \"info\", \"timestamp\": \"2020-12-22T15:22:31.111Z\", \"version\": 1 } }``` ``` { \"counter\": { \"value\": 102 }, \"kind\": \"incremental\", \"name\": \"userlogintotal\", \"tags\": { \"email\": \"vic@vector.dev\", \"host\": \"my.host.com\", \"instance_id\": \"abcd1234\" } }``` ``` .tags.environment = getenvvar!(\"ENV\") # add .tags.hostname = del(.tags.host) # rename del(.tags.email)``` ``` { \"metric\": { \"counter\": { \"value\": 102 }, \"kind\": \"incremental\", \"name\": \"userlogintotal\", \"tags\": { \"environment\": \"production\", \"hostname\": \"my.host.com\", \"instance_id\": \"abcd1234\" } } }``` ``` { \"message\": \"[{\\\"message\\\": \\\"firstlog\\\"}, {\\\"message\\\": \\\"secondlog\\\"}]\" }``` ``` . = parse_json!(.message) # sets `.` to an array of objects``` ``` [ { \"log\": { \"message\": \"first_log\" } }, { \"log\": { \"message\": \"second_log\" } } ]``` ``` { \"message\": \"[5, true, \\\"hello\\\"]\" }``` ``` . = parse_json!(.message) # sets `.` to an array``` ``` [ { \"log\": { \"message\": 5 } }, { \"log\": { \"message\": true } }, { \"log\": { \"message\": \"hello\" } } ]``` ``` { \"notastring\": 1 }``` ``` upcase(42)``` ``` error[E110]: invalid argument type :1:8 1 upcase(42) ^^ this expression resolves to the exact type integer but the parameter \"value\" expects the exact type string = try: ensuring an appropriate type at runtime = = 42 = string!(42) = upcase(42) = = try: coercing to an appropriate type and specifying a default value as a fallback in case coercion fails = = 42 = to_string(42) ?? \"default\" = upcase(42) = = see documentation about error handling at https://errors.vrl.dev/#handling = learn more about error code 110 at https://errors.vrl.dev/110 = see language documentation at https://vrl.dev = try your code in the VRL REPL, learn more at https://vrl.dev/examples ``` ``` { \"message\": \"key1=value1 key2=value2\" }``` ``` structured = parsekeyvalue(.message)``` ``` error[E103]: unhandled fallible assignment :1:14 1 structured = parsekeyvalue(.message) ^^^^^^^^^^^^^^^^^^^^^^^^^ this expression is fallible because at least one argument's type cannot be verified to be valid update the expression to be infallible by adding a `!`: `parsekeyvalue!(.message)` `.message` argument type is `any` and this function expected a parameter `value` of type `string` or change this to an infallible assignment: structured, err = parsekeyvalue(.message) = see documentation about error handling at https://errors.vrl.dev/#handling = see functions characteristics documentation at https://vrl.dev/expressions/#function-call-characteristics = learn more about error code 103 at https://errors.vrl.dev/103 = see language documentation at https://vrl.dev = try your code in the VRL REPL, learn more at https://vrl.dev/examples ``` On this page Meta Security Releases Versioning Sign up to receive emails on the latest Vector content and new releases Thank you for joining our Updates Newsletter 2024 Datadog, Inc. All rights reserved." } ]
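The assert and assert_eq functions mentioned in this entry are described without a configuration example. As a sketch (the transform ID, field names, and message text are assumptions, not from the docs), they are typically embedded in a remap transform like any other VRL:

```yaml
transforms:
  guard_status:                  # hypothetical component ID
    type: remap
    inputs: ["logs"]
    source: |
      # Abort this event with a logged error if it has no status field,
      # otherwise coerce the field to an integer.
      assert!(exists(.status), message: "event is missing .status")
      .status = to_int!(.status)
```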
{ "category": "Observability and Analysis", "file_name": "setup.md", "project_name": "Vector", "subcategory": "Observability" }
[ { "data": "Meta Security Releases Versioning Publish metric events to AWS Cloudwatch Metrics ``` { \"sinks\": { \"mysinkid\": { \"type\": \"awscloudwatchmetrics\", \"inputs\": [ \"my-source-or-transform-id\" ], \"default_namespace\": \"service\" } } }``` ``` [sinks.mysinkid] type = \"awscloudwatchmetrics\" inputs = [ \"my-source-or-transform-id\" ] default_namespace = \"service\" ``` ``` sinks: mysinkid: type: awscloudwatchmetrics inputs: my-source-or-transform-id default_namespace: service ``` ``` { \"sinks\": { \"mysinkid\": { \"type\": \"awscloudwatchmetrics\", \"inputs\": [ \"my-source-or-transform-id\" ], \"compression\": \"none\", \"default_namespace\": \"service\", \"endpoint\": \"http://127.0.0.0:5000/path/to/service\", \"region\": \"us-east-1\" } } }``` ``` [sinks.mysinkid] type = \"awscloudwatchmetrics\" inputs = [ \"my-source-or-transform-id\" ] compression = \"none\" default_namespace = \"service\" endpoint = \"http://127.0.0.0:5000/path/to/service\" region = \"us-east-1\" ``` ``` sinks: mysinkid: type: awscloudwatchmetrics inputs: my-source-or-transform-id compression: none default_namespace: service endpoint: http://127.0.0.0:5000/path/to/service region: us-east-1 ``` Controls how acknowledgements are handled for this sink. See End-to-end Acknowledgements for more information on how event acknowledgement is handled. Whether or not end-to-end acknowledgements are enabled. When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source. Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration. ``` \"AKIAIOSFODNN7EXAMPLE\"``` ``` \"arn:aws:iam::123456789098:role/my_role\"``` ``` \"/my/aws/credentials\"``` ``` \"randomEXAMPLEidString\"``` Timeout for successfully loading any credentials, in seconds. Relevant when the default credentials chain or assume_role is used. ``` 30``` The credentials profile to use. Used to select AWS credentials from a provided credentials file. ``` \"develop\"``` The AWS region to send STS requests to. If not set, this defaults to the configured region for the service itself. ``` \"us-west-2\"``` ``` \"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\"``` The maximum size of a batch that is processed by a sink. This is based on the uncompressed size of the batched events, before they are serialized/compressed. Configures the buffering behavior for this sink. More information about the individual buffer types, and buffer behavior, can be found in the Buffering Model section. The maximum size of the buffer on disk. Must be at least ~256 megabytes (268435488 bytes). | Option | Description | |:|:--| | disk | Events are buffered on disk.This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes.Data is synchronized to disk every 500ms. | | memory | Events are buffered in memory.This is more performant, but less durable. Data will be lost if Vector is restarted forcefully or crashes. | Events are buffered on disk. This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes. Data is synchronized to disk every 500ms. Events are buffered in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully or crashes. 
| Option | Description | |:|:-| | block | Wait for free space in the buffer.This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. This means that while no data is lost, data will pile up at the" }, { "data": "| | drop_newest | Drops the event instead of waiting for free space in buffer.The event will be intentionally dropped. This mode is typically used when performance is the highest priority, and it is preferable to temporarily lose events rather than cause a slowdown in the acceptance/consumption of events. | Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. This means that while no data is lost, data will pile up at the edge. Drops the event instead of waiting for free space in buffer. The event will be intentionally dropped. This mode is typically used when performance is the highest priority, and it is preferable to temporarily lose events rather than cause a slowdown in the acceptance/consumption of events. Compression configuration. All compression algorithms use the default compression level unless otherwise specified. | Option | Description | |:|:--| | gzip | Gzip compression. | | none | No compression. | | snappy | Snappy compression. | | zlib | Zlib compression. | | zstd | Zstandard compression. | The default namespace to use for metrics that do not have one. Metrics with the same name can only be differentiated by their namespace, and not all metrics have their own namespace. ``` \"service\"``` ``` \"http://127.0.0.0:5000/path/to/service\"``` A list of upstream source or transform IDs. Wildcards (*) are supported. See configuration for more info. ``` [ \"my-source-or-transform-id\", \"prefix-*\" ]``` Proxy configuration. Configure to proxy traffic through an HTTP(S) proxy when making external requests. Similar to common proxy configuration convention, you can set different proxies to use based on the type of traffic being proxied, as well as set specific hosts that should not be proxied. Proxy endpoint to use when proxying HTTP traffic. Must be a valid URI string. ``` \"http://foo.bar:3128\"``` Proxy endpoint to use when proxying HTTPS traffic. Must be a valid URI string. ``` \"http://foo.bar:3128\"``` A list of hosts to avoid proxying. Multiple patterns are allowed: | Pattern | Example match | |:--|:| | Domain names | example.com matches requests to example.com | | Wildcard domains | .example.com matches requests to example.com and its subdomains | | IP addresses | 127.0.0.1 matches requests to 127.0.0.1 | | CIDR blocks | 192.168.0.0/16 matches requests to any IP addresses in this range | | Splat | * matches all hosts | ``` \"us-east-1\"``` Middleware settings for outbound requests. Various settings can be configured, such as concurrency and rate limits, timeouts, retry behavior, etc. Note that the retry backoff policy follows the Fibonacci sequence. Configuration of adaptive concurrency parameters. These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution. The fraction of the current value to set the new concurrency limit when decreasing the limit. Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases. Note that the new limit is rounded down after applying this ratio. 
The weighting of new measurements compared to older measurements. Valid values are greater than 0 and less than" }, { "data": "ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability. The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency). It is recommended to set this value to your services average limit if youre seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptiveconcurrencylimit metric. The maximum concurrency limit. The adaptive request concurrency limit will not go above this bound. This is put in place as a safeguard. Scale of RTT deviations which are not considered anomalous. Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0. When calculating the past RTT average, we also compute a secondary deviation value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT. Configuration for outbound request concurrency. This can be set either to one of the below enum values or to a positive integer, which denotes a fixed concurrency limit. | Option | Description | |:|:--| | adaptive | Concurrency will be managed by Vectors Adaptive Request Concurrency feature. | | none | A fixed concurrency of 1.Only one request can be outstanding at any given time. | A fixed concurrency of 1. Only one request can be outstanding at any given time. The amount of time to wait before attempting the first retry for a failed request. After the first retry has failed, the fibonacci sequence is used to select future backoffs. | Option | Description | |:|:-| | Full | Full jitter.The random delay is anywhere from 0 up to the maximum current delay calculated by the backoff strategy.Incorporating full jitter into your backoff strategy can greatly reduce the likelihood of creating accidental denial of service (DoS) conditions against your own systems when many clients are recovering from a failure state. | | None | No jitter. | Full jitter. The random delay is anywhere from 0 up to the maximum current delay calculated by the backoff strategy. Incorporating full jitter into your backoff strategy can greatly reduce the likelihood of creating accidental denial of service (DoS) conditions against your own systems when many clients are recovering from a failure state. The time a request can take before being aborted. Datadog highly recommends that you do not lower this value below the services internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream. Sets the list of supported ALPN protocols. Declare the supported ALPN protocols, which are used during negotiation with" }, { "data": "They are prioritized in the order that they are defined. Absolute path to an additional CA certificate file. The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format. 
``` \"/path/to/certificate_authority.crt\"``` Absolute path to a certificate file used to identify this server. The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format. If this is set, and is not a PKCS#12 archive, key_file must also be set. ``` \"/path/to/host_certificate.crt\"``` Absolute path to a private key file used to identify this server. The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format. ``` \"/path/to/host_certificate.key\"``` Passphrase used to unlock the encrypted key file. This has no effect unless key_file is set. ``` \"${KEYPASSENV_VAR}\"``` ``` \"PassWord1\"``` Enables certificate verification. For components that create a server, this requires that the client connections have a valid client certificate. For components that initiate requests, this validates that the upstream has a valid certificate. If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate. Do NOT set this to false unless you understand the risks of not verifying the validity of certificates. Enables hostname verification. If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension. Only relevant for outgoing connections. Do NOT set this to false unless you understand the risks of not verifying the remote hostname. A histogram of the number of events passed in each internal batch in Vectors internal topology. Note that this is separate than sink-level batching. It is mostly useful for low level debugging performance issues in Vector due to small internal batches. | Policy | Required for | Required when | |:-|:|-:| | cloudwatch:PutMetricData | healthcheckoperation | nan | Vector checks for AWS credentials in the following order: If no credentials are found, Vectors health check fails and an error is logged. This component buffers & batches data as shown in the diagram above. Youll notice that Vector treats these concepts differently, instead of treating them as global concepts, Vector treats them as sink specific concepts. This isolates sinks, ensuring services disruptions are contained and delivery guarantees are honored. Batches are flushed when 1 of 2 conditions are met: Buffers are controlled via the buffer.* options. If youd like to exit immediately upon a health check failure, you can pass the --require-healthy flag: ``` vector --config /etc/vector/vector.yaml --require-healthy ``` On this page Sign up to receive emails on the latest Vector content and new releases Thank you for joining our Updates Newsletter 2024 Datadog, Inc. All rights reserved." } ]
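Pulling several of the options documented above into one place, the following is a hedged sketch of this sink configured with a durable disk buffer and adaptive request concurrency. The sink ID is illustrative, `my-source-or-transform-id` is the placeholder used throughout this reference, and credentials are assumed to come from the standard AWS credential chain mentioned above.

```yaml
sinks:
  cloudwatch_metrics:               # hypothetical sink ID
    type: aws_cloudwatch_metrics
    inputs:
      - my-source-or-transform-id   # placeholder upstream component
    region: us-east-1
    default_namespace: service      # used when a metric carries no namespace
    buffer:
      type: disk                    # durable: synchronized to disk every 500ms
      max_size: 268435488           # minimum allowed disk buffer (~256 MB)
      when_full: block              # apply backpressure instead of dropping
    request:
      concurrency: adaptive         # let Adaptive Request Concurrency manage in-flight requests
```

Choosing `when_full: block` trades throughput under sustained pressure for delivery, while `drop_newest` makes the opposite trade; the buffer descriptions above spell out that distinction.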
{ "category": "Observability and Analysis", "file_name": "sinks.md", "project_name": "Vector", "subcategory": "Observability" }
[ { "data": "Meta Security Releases Versioning Collapse multiple log events into a single event based on a set of conditions and merge strategies ``` { \"transforms\": { \"mytransformid\": { \"type\": \"reduce\", \"inputs\": [ \"my-source-or-transform-id\" ] } } }``` ``` [transforms.mytransformid] type = \"reduce\" inputs = [ \"my-source-or-transform-id\" ] ``` ``` transforms: mytransformid: type: reduce inputs: my-source-or-transform-id ``` ``` { \"transforms\": { \"mytransformid\": { \"type\": \"reduce\", \"inputs\": [ \"my-source-or-transform-id\" ], \"expireafterms\": 30000, \"flushperiodms\": 1000, \"group_by\": [ \"request_id\" ] } } }``` ``` [transforms.mytransformid] type = \"reduce\" inputs = [ \"my-source-or-transform-id\" ] expireafterms = 30_000 flushperiodms = 1_000 groupby = [ \"requestid\" ] ``` ``` transforms: mytransformid: type: reduce inputs: my-source-or-transform-id expireafterms: 30000 flushperiodms: 1000 group_by: request_id ``` A condition used to distinguish the final event of a transaction. If this condition resolves to true for an event, the current transaction is immediately flushed with this event. | Syntax | Description | Example | |:|:--|:| | vrl | A Vector Remap Language (VRL) Boolean expression. | .status_code != 200 && !includes([\"info\", \"debug\"], .severity) | | datadog_search | A Datadog Search query string. | *stack | | is_log | Whether the incoming event is a log. | nan | | is_metric | Whether the incoming event is a metric. | nan | | is_trace | Whether the incoming event is a trace. | nan | If you opt for the vrl syntax for this condition, you can set the condition as a string via the condition parameter, without needing to specify both a source and a type. The table below shows some examples: | Config format | Example | |:-|:| | YAML | condition: .status == 200 | | TOML | condition = \".status == 200\" | | JSON | \"condition\": \".status == 200\" | ``` ends_when: type: \"vrl\" source: \".status == 500\"``` ``` ends_when = { type = \"vrl\", source = \".status == 500\" }``` ``` \"ends_when\": { \"type\": \"vrl\", \"source\": \".status == 500\" }``` ``` ends_when: type: \"datadog_search\" source: \"*stack\"``` ``` endswhen = { type = \"datadogsearch\", source = \"*stack\" }``` ``` \"ends_when\": { \"type\": \"datadog_search\", \"source\": \"*stack\" }``` ``` ends_when: \".status == 500\"``` ``` ends_when = \".status == 500\"``` ``` \"ends_when\": \".status == 500\"``` An ordered list of fields by which to group events. Each group with matching values for the specified keys is reduced independently, allowing you to keep independent event streams separate. When no fields are specified, all events are combined in a single group. For example, if group_by = [\"host\", \"region\"], then all incoming events that have the same host and region are grouped together before being reduced. ``` [ \"request_id\", \"user_id\", \"transaction_id\" ]``` A list of upstream source or transform IDs. Wildcards (*) are supported. See configuration for more info. ``` [ \"my-source-or-transform-id\", \"prefix-*\" ]``` A map of field names to custom merge strategies. For each field specified, the given strategy is used for combining events rather than the default behavior. The default behavior is as follows: | Option | Description | |:|:| | array | Append each value to an array. | | concat | Concatenate each string value, delimited with a space. | | concat_newline | Concatenate each string value, delimited with a newline. 
| | concat_raw | Concatenate each string, without a delimiter. | | discard | Discard all but the first value found. | | flat_unique | Create a flattened array of all unique values. | | longest_array | Keep the longest array seen. | | max | Keep the maximum numeric value seen. | | min | Keep the minimum numeric value" }, { "data": "| | retain | Discard all but the last value found.Works as a way to coalesce by not retaining null. | | shortest_array | Keep the shortest array seen. | | sum | Sum all numeric values. | Discard all but the last value found. Works as a way to coalesce by not retaining null. ``` \"array\"``` ``` \"concat\"``` ``` \"concat_newline\"``` ``` \"concat_raw\"``` ``` \"discard\"``` ``` \"flat_unique\"``` ``` \"longest_array\"``` ``` \"max\"``` ``` \"min\"``` ``` \"retain\"``` ``` \"shortest_array\"``` ``` \"sum\"``` A condition used to distinguish the first event of a transaction. If this condition resolves to true for an event, the previous transaction is flushed (without this event) and a new transaction is started. | Syntax | Description | Example | |:|:--|:| | vrl | A Vector Remap Language (VRL) Boolean expression. | .status_code != 200 && !includes([\"info\", \"debug\"], .severity) | | datadog_search | A Datadog Search query string. | *stack | | is_log | Whether the incoming event is a log. | nan | | is_metric | Whether the incoming event is a metric. | nan | | is_trace | Whether the incoming event is a trace. | nan | If you opt for the vrl syntax for this condition, you can set the condition as a string via the condition parameter, without needing to specify both a source and a type. The table below shows some examples: | Config format | Example | |:-|:| | YAML | condition: .status == 200 | | TOML | condition = \".status == 200\" | | JSON | \"condition\": \".status == 200\" | ``` starts_when: type: \"vrl\" source: \".status == 500\"``` ``` starts_when = { type = \"vrl\", source = \".status == 500\" }``` ``` \"starts_when\": { \"type\": \"vrl\", \"source\": \".status == 500\" }``` ``` starts_when: type: \"datadog_search\" source: \"*stack\"``` ``` startswhen = { type = \"datadogsearch\", source = \"*stack\" }``` ``` \"starts_when\": { \"type\": \"datadog_search\", \"source\": \"*stack\" }``` ``` starts_when: \".status == 500\"``` ``` starts_when = \".status == 500\"``` ``` \"starts_when\": \".status == 500\"``` A histogram of the number of events passed in each internal batch in Vectors internal topology. Note that this is separate than sink-level batching. It is mostly useful for low level debugging performance issues in Vector due to small internal batches. 
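Before the worked examples that follow, here is a small sketch combining `group_by`, `merge_strategies`, and a string-form `ends_when` condition. It is illustrative only: the component ID and the `request_id` / `query_duration_ms` / `response_status` field names are assumptions modeled on the request example further down.

```yaml
transforms:
  collapse_requests:                # hypothetical transform ID
    type: reduce
    inputs:
      - my-source-or-transform-id   # placeholder upstream component
    group_by:
      - request_id                  # reduce each request's events independently
    merge_strategies:
      query_duration_ms: sum        # add up per-query timings
      message: discard              # keep only the first message seen
    # Flush a group as soon as its final "response sent" event arrives.
    ends_when: 'exists(.response_status)'
```

When a stream has no single terminal event to key `ends_when` on, the `expire_after_ms` / `flush_period_ms` pair shown in the advanced configuration above offers time-based flushing instead.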
``` [{\"log\":{\"host\":\"host-1.hostname.com\",\"message\":\"foobar.rb:6:in `/': divided by 0 (ZeroDivisionError)\",\"pid\":1234,\"tid\":5678,\"timestamp\":\"2020-10-07T12:33:21.223543Z\"}},{\"log\":{\"host\":\"host-1.hostname.com\",\"message\":\" from foobar.rb:6:in `bar'\",\"pid\":1234,\"tid\":5678,\"timestamp\":\"2020-10-07T12:33:21.223543Z\"}},{\"log\":{\"host\":\"host-1.hostname.com\",\"message\":\" from foobar.rb:2:in `foo'\",\"pid\":1234,\"tid\":5678,\"timestamp\":\"2020-10-07T12:33:21.223543Z\"}},{\"log\":{\"host\":\"host-1.hostname.com\",\"message\":\" from foobar.rb:9:in `\\u003cmain\\u003e'\",\"pid\":1234,\"tid\":5678,\"timestamp\":\"2020-10-07T12:33:21.223543Z\"}},{\"log\":{\"host\":\"host-1.hostname.com\",\"message\":\"Hello world, I am a new log\",\"pid\":1234,\"tid\":5678,\"timestamp\":\"2020-10-07T12:33:22.123528Z\"}}]``` ``` transforms: mytransformid: type: reduce inputs: my-source-or-transform-id group_by: host pid tid merge_strategies: message: concat_newline starts_when: match(string!(.message), r'^[^\\s]') ``` ``` [transforms.mytransformid] type = \"reduce\" inputs = [ \"my-source-or-transform-id\" ] group_by = [ \"host\", \"pid\", \"tid\" ] starts_when = \"match(string!(.message), r'^[^\\\\s]')\" [transforms.mytransformid.merge_strategies] message = \"concat_newline\" ``` ``` { \"transforms\": { \"mytransformid\": { \"type\": \"reduce\", \"inputs\": [ \"my-source-or-transform-id\" ], \"group_by\": [ \"host\", \"pid\", \"tid\" ], \"merge_strategies\": { \"message\": \"concat_newline\" }, \"starts_when\": \"match(string!(.message), r'^[^\\\\s]')\" } } }``` ``` [{\"log\":{\"host\":\"host-1.hostname.com\",\"message\":\"foobar.rb:6:in `/': divided by 0 (ZeroDivisionError)\\n from foobar.rb:6:in `bar'\\n from foobar.rb:2:in `foo'\\n from foobar.rb:9:in `\\u003cmain\\u003e'\",\"pid\":1234,\"tid\":5678,\"timestamp\":\"2020-10-07T12:33:21.223543Z\"}},{\"log\":{\"host\":\"host-1.hostname.com\",\"message\":\"Hello world, I am a new log\",\"pid\":1234,\"tid\":5678,\"timestamp\":\"2020-10-07T12:33:22.123528Z\"}}]``` ``` [{\"log\":{\"message\":\"Received GET /path\",\"requestid\":\"abcd1234\",\"requestparams\":{\"key\":\"val\"},\"requestpath\":\"/path\",\"timestamp\":\"2020-10-07T12:33:21.223543Z\"}},{\"log\":{\"message\":\"Executed query in 5.2ms\",\"query\":\"SELECT * FROM table\",\"querydurationms\":5.2,\"requestid\":\"abcd1234\",\"timestamp\":\"2020-10-07T12:33:21.832345Z\"}},{\"log\":{\"message\":\"Rendered partial partial.erb in 2.3ms\",\"renderdurationms\":2.3,\"requestid\":\"abcd1234\",\"template\":\"partial.erb\",\"timestamp\":\"2020-10-07T12:33:22.457423Z\"}},{\"log\":{\"message\":\"Executed query in 7.8ms\",\"query\":\"SELECT * FROM table\",\"querydurationms\":7.8,\"requestid\":\"abcd1234\",\"timestamp\":\"2020-10-07T12:33:22.543323Z\"}},{\"log\":{\"message\":\"Sent 200 in 15.2ms\",\"requestid\":\"abcd1234\",\"responsedurationms\":5.2,\"responsestatus\":200,\"timestamp\":\"2020-10-07T12:33:22.742322Z\"}}]``` ``` transforms: mytransformid: type: reduce inputs: my-source-or-transform-id ``` ``` [transforms.mytransformid] type = \"reduce\" inputs = [ \"my-source-or-transform-id\" ] ``` ``` { \"transforms\": { \"mytransformid\": { \"type\": \"reduce\", \"inputs\": [ \"my-source-or-transform-id\" ] } } }``` ``` { \"querydurationms\": 13, \"renderdurationms\": 2.3, \"request_id\": \"abcd1234\", \"request_params\": { \"key\": \"val\" }, \"request_path\": \"/path\", \"responsedurationms\": 5.2, \"status\": 200, \"timestamp\": \"2020-10-07T12:33:21.223543Z\", \"timestamp_end\": 
\"2020-10-07T12:33:22.742322Z\" }``` On this page Sign up to receive emails on the latest Vector content and new releases Thank you for joining our Updates Newsletter 2024 Datadog, Inc. All rights reserved." } ]
{ "category": "Observability and Analysis", "file_name": "what-is-observability-pipelines.md", "project_name": "Vector", "subcategory": "Observability" }
[ { "data": "Meta Security Releases Versioning Send events to AMQP 0.9.1 compatible brokers like RabbitMQ ``` { \"sinks\": { \"mysinkid\": { \"type\": \"amqp\", \"inputs\": [ \"my-source-or-transform-id\" ], \"connection_string\": \"amqp://user:password@127.0.0.1:5672/%2f?timeout=10\" } } }``` ``` [sinks.mysinkid] type = \"amqp\" inputs = [ \"my-source-or-transform-id\" ] connection_string = \"amqp://user:password@127.0.0.1:5672/%2f?timeout=10\" ``` ``` sinks: mysinkid: type: amqp inputs: my-source-or-transform-id connection_string: amqp://user:password@127.0.0.1:5672/%2f?timeout=10 ``` ``` { \"sinks\": { \"mysinkid\": { \"type\": \"amqp\", \"inputs\": [ \"my-source-or-transform-id\" ], \"connection_string\": \"amqp://user:password@127.0.0.1:5672/%2f?timeout=10\" } } }``` ``` [sinks.mysinkid] type = \"amqp\" inputs = [ \"my-source-or-transform-id\" ] connection_string = \"amqp://user:password@127.0.0.1:5672/%2f?timeout=10\" ``` ``` sinks: mysinkid: type: amqp inputs: my-source-or-transform-id connection_string: amqp://user:password@127.0.0.1:5672/%2f?timeout=10 ``` Controls how acknowledgements are handled for this sink. See End-to-end Acknowledgements for more information on how event acknowledgement is handled. Whether or not end-to-end acknowledgements are enabled. When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source. Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration. Configures the buffering behavior for this sink. More information about the individual buffer types, and buffer behavior, can be found in the Buffering Model section. The maximum size of the buffer on disk. Must be at least ~256 megabytes (268435488 bytes). | Option | Description | |:|:--| | disk | Events are buffered on disk.This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes.Data is synchronized to disk every 500ms. | | memory | Events are buffered in memory.This is more performant, but less durable. Data will be lost if Vector is restarted forcefully or crashes. | Events are buffered on disk. This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes. Data is synchronized to disk every 500ms. Events are buffered in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully or crashes. | Option | Description | |:|:-| | block | Wait for free space in the buffer.This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. This means that while no data is lost, data will pile up at the edge. | | drop_newest | Drops the event instead of waiting for free space in buffer.The event will be intentionally dropped. This mode is typically used when performance is the highest priority, and it is preferable to temporarily lose events rather than cause a slowdown in the acceptance/consumption of events. | Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. This means that while no data is lost, data will pile up at the edge. Drops the event instead of waiting for free space in buffer. 
The event will be intentionally" }, { "data": "This mode is typically used when performance is the highest priority, and it is preferable to temporarily lose events rather than cause a slowdown in the acceptance/consumption of events. URI for the AMQP server. The URI has the format of amqp://<user>:<password>@<host>:<port>/<vhost>?timeout=<seconds>. The default vhost can be specified by using a value of %2f. To connect over TLS, a scheme of amqps can be specified instead. For example, amqps://.... Additional TLS settings, such as client certificate verification, can be configured under the tls section. ``` \"amqp://user:password@127.0.0.1:5672/%2f?timeout=10\"``` ``` \"{ \\\"type\\\": \\\"record\\\", \\\"name\\\": \\\"log\\\", \\\"fields\\\": [{ \\\"name\\\": \\\"message\\\", \\\"type\\\": \\\"string\\\" }] }\"``` | Option | Description | |:|:| | avro | Encodes an event as an Apache Avro message. | | csv | Encodes an event as a CSV message.This codec must be configured with fields to encode. | | gelf | Encodes an event as a GELF message.This codec is experimental for the following reason:The GELF specification is more strict than the actual Graylog receiver. Vectors encoder currently adheres more strictly to the GELF spec, with the exception that some characters such as @ are allowed in field names.Other GELF codecs such as Lokis, use a Go SDK that is maintained by Graylog, and is much more relaxed than the GELF spec.Going forward, Vector will use that Go SDK as the reference implementation, which means the codec may continue to relax the enforcement of specification. | | json | Encodes an event as JSON. | | logfmt | Encodes an event as a logfmt message. | | native | Encodes an event in the native Protocol Buffers format.This codec is experimental. | | native_json | Encodes an event in the native JSON format.This codec is experimental. | | protobuf | Encodes an event as a Protobuf message. | | raw_message | No encoding.This encoding uses the message field of a log event.Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. | | text | Plain text encoding.This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. | Encodes an event as a CSV message. This codec must be configured with fields to encode. Encodes an event as a GELF message. This codec is experimental for the following reason: The GELF specification is more strict than the actual Graylog receiver. Vectors encoder currently adheres more strictly to the GELF spec, with the exception that some characters such as @ are allowed in field names. Other GELF codecs such as Lokis, use a Go SDK that is maintained by Graylog, and is much more relaxed than the GELF spec. Going forward, Vector will use that Go SDK as the reference implementation, which means the codec may continue to relax the enforcement of specification. Encodes an event in the native Protocol Buffers format. This codec is" }, { "data": "Encodes an event in the native JSON format. This codec is experimental. No encoding. This encoding uses the message field of a log event. 
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. Plain text encoding. This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format. Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. ``` \"avro\"``` ``` \"csv\"``` ``` \"gelf\"``` ``` \"json\"``` ``` \"logfmt\"``` ``` \"native\"``` ``` \"native_json\"``` ``` \"protobuf\"``` ``` \"raw_message\"``` ``` \"text\"``` Enable double quote escapes. This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled. The escape character to use when writing CSV. In some variants of CSV, quotes are escaped using a special escape character like \\ (instead of escaping quotes by doubling them). To use this, double_quotes needs to be disabled as well otherwise it is ignored. Configures the fields that will be encoded, as well as the order in which they appear in the output. If a field is not present in the event, the output will be an empty string. Values of type Array, Object, and Regex are not supported and the output will be an empty string. | Option | Description | |:|:--| | always | Always puts quotes around every field. | | necessary | Puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter, or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field). | | never | Never writes quotes, even if it produces invalid CSV data. | | non_numeric | Puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes are used even if they arent strictly necessary. | Controls how metric tag values are encoded. When set to single, only the last non-bare value of tags are displayed with the metric. When set to full, all metric tags are exposed as separate assignments. | Option | Description | |:|:--| | full | All tags are exposed as arrays of either string or null values. | | single | Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored. | The path to the protobuf descriptor set file. This file is the output of protoc -o <path> ... ``` \"/etc/vector/protobufdescriptorset.desc\"``` ``` \"package.Message\"``` | Option | Description | |:--|:| | rfc3339 | Represent the timestamp as a RFC 3339 timestamp. | | unix | Represent the timestamp as a Unix" }, { "data": "| | unix_float | Represent the timestamp as a Unix timestamp in floating point. | | unix_ms | Represent the timestamp as a Unix timestamp in milliseconds. | | unix_ns | Represent the timestamp as a Unix timestamp in nanoseconds. | | unix_us | Represent the timestamp as a Unix timestamp in microseconds | A list of upstream source or transform IDs. Wildcards (*) are supported. See configuration for more info. ``` [ \"my-source-or-transform-id\", \"prefix-*\" ]``` Configure the AMQP message properties. AMQP message properties. Sets the list of supported ALPN protocols. 
Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order that they are defined. Absolute path to an additional CA certificate file. The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format. ``` \"/path/to/certificate_authority.crt\"``` Absolute path to a certificate file used to identify this server. The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format. If this is set, and is not a PKCS#12 archive, key_file must also be set. ``` \"/path/to/host_certificate.crt\"``` Absolute path to a private key file used to identify this server. The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format. ``` \"/path/to/host_certificate.key\"``` Passphrase used to unlock the encrypted key file. This has no effect unless key_file is set. ``` \"${KEYPASSENV_VAR}\"``` ``` \"PassWord1\"``` Enables certificate verification. For components that create a server, this requires that the client connections have a valid client certificate. For components that initiate requests, this validates that the upstream has a valid certificate. If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate. Do NOT set this to false unless you understand the risks of not verifying the validity of certificates. Enables hostname verification. If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension. Only relevant for outgoing connections. Do NOT set this to false unless you understand the risks of not verifying the remote hostname. A histogram of the number of events passed in each internal batch in Vectors internal topology. Note that this is separate than sink-level batching. It is mostly useful for low level debugging performance issues in Vector due to small internal batches. If youd like to exit immediately upon a health check failure, you can pass the --require-healthy flag: ``` vector --config /etc/vector/vector.yaml --require-healthy ``` On this page Sign up to receive emails on the latest Vector content and new releases Thank you for joining our Updates Newsletter 2024 Datadog, Inc. All rights reserved." } ]
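Tying the options above together, here is a hedged sketch of this sink publishing JSON-encoded events over TLS. The sink ID, hostname, and credentials are illustrative; the `exchange` setting is an assumption (upstream Vector documentation requires one, but it does not appear in the excerpt above); and the `tls` field names follow Vector's standard TLS settings described above.

```yaml
sinks:
  rabbitmq_events:                  # hypothetical sink ID
    type: amqp
    inputs:
      - my-source-or-transform-id   # placeholder upstream component
    # amqps:// selects TLS; %2f is the URL-encoded default vhost ("/").
    connection_string: amqps://user:password@rabbitmq.internal:5671/%2f?timeout=10
    exchange: vector-events         # assumption: required upstream, not shown in the excerpt above
    encoding:
      codec: json                   # one of the codecs listed above
    tls:
      ca_file: /path/to/certificate_authority.crt
      verify_certificate: true      # reject expired or untrusted certificates
      verify_hostname: true         # hostname must match the certificate
```

Using an `amqps://` connection string is what enables TLS; the `tls` section then supplies the additional settings, such as the CA certificate and verification behavior, described above.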
{ "category": "Observability and Analysis", "file_name": "sources.md", "project_name": "Vector", "subcategory": "Observability" }
[ { "data": "macOS is the primary operating system for Apple's Mac computers. It's a certified Unix system based on Apple's Darwin operating system. This page covers installing and managing Vector on the macOS operating system." } ]
{ "category": "Orchestration & Management", "file_name": "docs.github.com.md", "project_name": "APIOAK", "subcategory": "API Gateway" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Akana", "subcategory": "API Gateway" }
[ { "data": "Akana products help you to securely transform your enterprise. Release Notes: View the release notes for the latest product update. Earlier Versions: This documentation set includes the latest version. For supported earlier versions, see Akana Documentation: All products, versions up to 2020.2.x. Deployment Platform: The container, or underlying infrastructure, for all products. Community Manager: Develop a customized, secure API portal to make your APIs accessible to vendors and internal users. Quickly onboard partners, provision APIs, and establish active social channels. Help developers find and access the right API. API Gateway: Develop and manage your APIs by securely connecting applications across platforms, devices, and channels. Includes Policy Manager and Network Director. Envision: Analyze data collected by the API management platform and visualize trends and insights. Lifecycle Manager: Automate machine and role-based validations and signoffs across the software development lifecycle. Sola: Digital transformation of mainframe assets, by exposing applications as modern APIs." } ]
{ "category": "Observability and Analysis", "file_name": "windows.md", "project_name": "Vector", "subcategory": "Observability" }
[ { "data": "Microsoft Windows is an operating system developed and sold by Microsoft. This page covers installing and managing Vector on the Windows operating system." } ]
{ "category": "Orchestration & Management", "file_name": "github-privacy-statement.md", "project_name": "APIOAK", "subcategory": "API Gateway" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S. Dollars. User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement.
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable rights. Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from your use of or inability to use the Service. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes.
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Orchestration & Management", "file_name": "understanding-github-code-search-syntax.md", "project_name": "APIOAK", "subcategory": "API Gateway" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT operator. For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific language, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` To search for JavaScript files within a src directory, you can use: ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example: ``` path:/src/**/*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` Glob expressions are disabled for quoted strings, so the following query will only match paths containing the literal string file?: ``` path:\"file?\" ```
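To see how the quoting, escaping, and qualifier rules above fit together, here is a brief Python sketch. It is an illustrative example rather than an official tool: the helper name quote_exact is made up for this sketch, the path glob and the particular combination of terms are hypothetical, and the repository, language, and exact phrase are borrowed from the examples in this article.

```python
# Illustrative sketch: assemble a code search query string from parts.
# Escaping follows the rules described above: backslashes become \\,
# double quotes become \", and exact phrases are wrapped in quotes.

def quote_exact(phrase: str) -> str:
    """Quote a phrase so whitespace and quotes are matched literally."""
    escaped = phrase.replace("\\", "\\\\").replace('"', '\\"')
    return f'"{escaped}"'

parts = [
    "repo:github-linguist/linguist",     # full repository name, including the owner
    "language:ruby",                     # language qualifier
    "path:lib/*.rb",                     # limited glob in the path qualifier
    quote_exact('name = "tensorflow"'),  # exact-match phrase with escaped quotes
]

# Whitespace-separated terms are implicitly combined with AND.
query = " ".join(parts)
print(query)
# repo:github-linguist/linguist language:ruby path:lib/*.rb "name = \"tensorflow\""
```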
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class name). For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for a set of popular languages, and we are working on adding support for more. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier, with values such as is:archived and is:fork. For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expression features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for.
For example, for the following query: ``` printf(\"hello world\\n\"); ``` code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches are also case-insensitive; a pattern that matches tHiS would also return This, THIS and this." } ]
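Queries written in this syntax can also be sent to GitHub programmatically. The sketch below is a minimal, hypothetical example that calls the REST search endpoint with the requests library; it assumes a personal access token is available in the GITHUB_TOKEN environment variable, and note that the REST /search/code endpoint has historically accepted only a subset of this syntax (qualifiers and quoted terms rather than regular expressions or symbol:), so results may differ from the code search web interface.

```python
import os

import requests  # third-party HTTP client: pip install requests

# Hypothetical example: send a code search query to GitHub's REST API.
# GITHUB_TOKEN is assumed to be a personal access token you supply yourself.
token = os.environ["GITHUB_TOKEN"]

query = 'repo:github-linguist/linguist language:ruby "sparse index"'

response = requests.get(
    "https://api.github.com/search/code",
    params={"q": query, "per_page": 5},
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {token}",
    },
    timeout=30,
)
response.raise_for_status()

# Print the repository and file path of each match.
for item in response.json().get("items", []):
    print(item["repository"]["full_name"], item["path"])
```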
{ "category": "Orchestration & Management", "file_name": "github-terms-of-service.md", "project_name": "APIOAK", "subcategory": "API Gateway" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]